License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains a collection of annotated ultrasound images of the liver, designed to aid in the development of computer vision models for liver analysis, segmentation, and disease detection. The annotations include outlines of the liver and liver mass regions, as well as classifications into benign, malignant, and normal cases.
Creators: Xu Yiming, Zheng Bowen, Liu Xiaohong, Wu Tao, Ju Jinxiu, Wang Shijie, Lian Yufan, Zhang Hongjun, Liang Tong, Sang Ye, Jiang Rui, Wang Guangyu, Ren Jie, Chen Ting
Published: November 2, 2022 Version: v1 DOI: 10.5281/zenodo.7272660
This dataset provides ultrasound images of the liver with detailed annotations. The annotations highlight the liver itself and any liver mass regions present. The images are categorized into three classes: benign, malignant, and normal.
The dataset is organized into three zip files. The ultrasound images have been annotated with outlines of the liver and of any liver mass regions.
These annotations make the dataset suitable for tasks such as segmentation of the liver and liver masses, as well as classification of liver conditions.
This dataset can be valuable for a variety of applications, including liver and liver mass segmentation, classification of liver conditions, and liver disease detection.
This dataset is subject to copyright. Any use of the data must include appropriate acknowledgement and credit. Please contact the authors of the published data and cite the publication and the provided URL.
Citation:
Xu Yiming, Zheng Bowen, Liu Xiaohong, Wu Tao, Ju Jinxiu, Wang Shijie, Lian Yufan, Zhang Hongjun, Liang Tong, Sang Ye, Jiang Rui, Wang Guangyu, Ren Jie, & Chen Ting. (2022). Annotated Ultrasound Liver images [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7272660
APA Style Citation:
Xu, Y., Zheng, B., Liu, X., Wu, T., Ju, J., Wang, S., Lian, Y., Zhang, H., Liang, T., Sang, Y., Jiang, R., Wang, G., Ren, J., & Chen, T. (2022). Annotated Ultrasound Liver images [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7272660
Creative Commons Attribution 4.0 International
We hope this dataset is helpful for your research and projects!
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
The information is based on medical ultrasound scans of fetuses. The Fetus Ultrasound Dataset includes three types of images: normal, benign, and malignant. When paired with deep learning, fetal ultrasound images can produce strong results for disease-level detection, segmentation, and classification.
License: CC0 1.0 Universal, https://creativecommons.org/publicdomain/zero/1.0/
Breast cancer is one of the most common causes of death among women worldwide. Early detection helps reduce the number of premature deaths. The data comprise medical ultrasound images of the breast, categorized into three classes: normal, benign, and malignant. Combined with machine learning, breast ultrasound images can produce strong results in classification, detection, and segmentation of breast cancer.
Data: The baseline data include breast ultrasound images of women aged between 25 and 75 years, collected in 2018 from 600 female patients. The dataset consists of 780 PNG images with an average size of 500×500 pixels. Ground truth (mask) images are provided alongside the original images.
If you use this dataset, please cite: Al-Dhabyani W, Gomaa M, Khaled H, Fahmy A. Dataset of breast ultrasound images. Data in Brief. 2020 Feb;28:104863. DOI: 10.1016/j.dib.2019.104863.
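For quick experimentation, the class-labelled images and their ground-truth masks can be paired by filename. This is a minimal sketch assuming the commonly used BUSI naming convention, where a mask file shares the image's base name plus a `_mask` suffix (e.g. `benign (1).png` / `benign (1)_mask.png`); verify the convention against the actual files.

```python
from pathlib import Path

def pair_images_with_masks(filenames):
    """Pair each ultrasound image with its ground-truth mask.

    Assumes the common BUSI naming convention (image base name plus a
    ``_mask`` suffix); adapt if the actual release differs.
    """
    names = set(filenames)
    pairs = {}
    for name in filenames:
        stem = Path(name).stem
        if stem.endswith("_mask"):
            continue  # skip the mask files themselves
        mask = f"{stem}_mask{Path(name).suffix}"
        if mask in names:
            pairs[name] = mask
    return pairs
```

Images without a matching mask file are simply left out of the returned mapping, so missing annotations surface as absent keys rather than errors.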
Boost your AI projects with our 40,000-image high-quality ultrasound dataset in DICOM, ideal for healthcare computer vision.
A high-quality dataset is one of the key factors in training a neural network. Unfortunately, few open-source abdominal ultrasound image datasets are available.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Worldwide, breast cancer ranks high among women's leading causes of death. Reducing the number of premature deaths can be achieved through early detection. The information is based on medical ultrasound scans that show signs of breast cancer. There are three types of images included in the Breast Ultrasound Dataset: normal, benign, and malignant. Incorporating machine learning into breast ultrasound images improves their ability to detect, classify, and segment breast cancer. Data Image data… See the full description on the dataset page: https://huggingface.co/datasets/gymprathap/Breast-Cancer-Ultrasound-Images-Dataset.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This dataset comprises micro-ultrasound scans and human prostate annotations of 75 patients who underwent micro-ultrasound guided prostate biopsy at the University of Florida. All images and segmentations have been fully de-identified and are provided in NIfTI format.
Under the "train" folder, you'll find three subfolders. In the "test" folder, there are five subfolders.
If you use this dataset, please cite our paper: Jiang, Hongxu, et al. "MicroSegNet: A deep learning approach for prostate segmentation on micro-ultrasound images." Computerized Medical Imaging and Graphics (2024): 102326. DOI: https://doi.org/10.1016/j.compmedimag.2024.102326.
For any dataset-related queries, please reach out to Dr. Wei Shao: weishao@ufl.edu.
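Since the train/test subfolder names are not listed here, the sketch below shows one way to pair each micro-ultrasound scan with its prostate segmentation by matching NIfTI filenames across two folders. The folder names `micro_ultrasound` and `expert_annotations` are illustrative placeholders, not the dataset's actual names.

```python
from pathlib import Path

def collect_cases(root, scan_dir="micro_ultrasound", label_dir="expert_annotations"):
    """Collect (scan, segmentation) NIfTI pairs from one split folder.

    The subfolder names are placeholders; substitute the actual
    "train"/"test" subfolder names from the released dataset.
    """
    root = Path(root)
    pairs = []
    for scan in sorted((root / scan_dir).glob("*.nii.gz")):
        label = root / label_dir / scan.name  # match by identical filename
        if label.exists():
            pairs.append((scan, label))
    return pairs
```

Scans without a matching annotation are skipped, which is convenient here since only some splits are fully annotated.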
BTX24/busi-classification-ultrasound-dataset, hosted on Hugging Face and contributed by the HF Datasets community.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This dataset contains a curated benchmark collection of 1,062 labelled lung ultrasound (LUS) images collected from patients at Mulago National Referral Hospital and Kiruddu Referral Hospital in Kampala, Uganda. The images were acquired and annotated by senior radiologists to support the development and evaluation of artificial intelligence (AI) models for pulmonary disease diagnosis. Each image is categorized into one of three classes: Probably COVID-19 (COVID-19), Diseased Lung but Probably Not COVID-19 (Other Lung Disease), and Healthy Lung.
The dataset addresses key challenges in LUS interpretation, including inter-operator variability, low signal-to-noise ratios, and reliance on expert sonographers. It is particularly suitable for training and testing convolutional neural network (CNN)-based models for medical image classification tasks in low-resource settings. The images are provided in standard formats such as PNG or JPEG, with corresponding labels stored in structured files like CSV or JSON to facilitate ease of use in machine learning workflows.
In this second version of the dataset, we have extended the resource by including a folder containing the original unprocessed raw data, as well as the scripts used to process, clean, and sort the data into the final labelled set. These additions promote transparency and reproducibility, allowing researchers to understand the full data pipeline and adapt it for their own applications. This resource is intended to advance research in deep learning for lung ultrasound analysis and to contribute toward building more accessible and reliable diagnostic tools in global health.
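As the labels are distributed in structured files such as CSV, a loader can be as simple as the sketch below. The column names `filename` and `label` are assumptions about the schema; the class strings follow the three categories described above.

```python
import csv
import io

# The three diagnostic classes described for this LUS benchmark.
LUS_CLASSES = {"COVID-19", "Other Lung Disease", "Healthy Lung"}

def load_labels(csv_text):
    """Read a filename -> class mapping from a labels CSV.

    The column names ("filename", "label") are assumed; check the
    released CSV/JSON schema and adjust accordingly.
    """
    labels = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        cls = row["label"]
        if cls not in LUS_CLASSES:
            raise ValueError(f"unexpected class: {cls!r}")
        labels[row["filename"]] = cls
    return labels
```

Validating class strings at load time catches typos or unexpected categories before they silently corrupt a training run.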
Data usage policy: https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
This dataset was derived from tracked biopsy sessions using the Artemis biopsy system, many of which included image fusion with MRI targets. Patients received a 3D transrectal ultrasound scan, after which nonrigid registration (e.g. “fusion”) was performed between real-time ultrasound and preoperative MRI, enabling biopsy cores to be sampled from MR regions of interest. Most cases also included sampling of systematic biopsy cores using a 12-core digital template. The Artemis system tracked targeted and systematic core locations using encoder kinematics of a mechanical arm, and recorded locations relative to the Ultrasound scan. MRI biopsy coordinates were also recorded for most cases. STL files and biopsy overlays are available and can be visualized in 3D Slicer with the SlicerHeart extension. Spreadsheets summarizing biopsy and MR target data are also available. See the Detailed Description tab below for more information.
MRI targets were defined using multiparametric MRI, e.g. T2-weighted, diffusion-weighted, and perfusion-weighted sequences, and scored on a Likert-like scale with close correspondence to PI-RADS version 2. T2-weighted MRI was used to trace ROI contours and is the only sequence provided in this dataset. MR imaging was performed on a 3 Tesla Trio, Verio or Skyra scanner (Siemens, Erlangen, Germany). A transabdominal phased array was used in all cases, and an endorectal coil was used in a subset of cases. The majority of pulse sequences are 3D T2:SPC, with TR/TE 2200/203, Matrix/FOV 256 × 205/14 × 14 cm, and 1.5 mm slice spacing. Some cases were instead 3D T2:TSE with TR/TE 3800–5040/101, and a small minority were imported from other institutions (various T2 protocols).
Ultrasound scans were performed with Hitachi Hi-Vision 5500 7.5 MHz or the Noblus C41V 2-10 MHz end-fire probe. 3D scans were acquired by rotation of the end-fire probe 200 degrees about its axis, and interpolating to resample the volume with isotropic resolution.
Patients with suspicion of prostate cancer due to elevated PSA and/or suspicious imaging findings were consecutively accrued. Any consented patient who underwent or had planned to receive a routine, standard-of-care prostate biopsy at the UCLA Clark Urology Center was included.
Note: Some Private Tags in this collection are critical to properly displaying the STL surface and the Prostate anatomy. Private Tag (1129,"Eigen, Inc",1016) DS VoxelSize is especially important for multi-frame US cases.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Objective: to investigate whether artificial intelligence can identify fetal intracranial structures in pregnancy weeks 11–14. This dataset contains 1528 2D sagittal-view ultrasound images of 1519 females collected from Shenzhen People’s Hospital. The data were used for standard/non-standard (S-NS) plane classification, a key step for NT measurement and Down syndrome assessment. An external dataset with 156 images from the Longhua branch of Shenzhen People’s Hospital is also provided to test AI performance. Annotations for nine key structures (thalami, midbrain, palate, 4th ventricle, cisterna magna, nuchal translucency (NT), nasal tip, nasal skin, and nasal bone) can be found in the ObjectDetection.xlsx file.
License: MIT License, https://opensource.org/licenses/MIT
RUS contains Real UltraSound images from the Type 1 dataset. It is not fully labelled, so only the test set is annotated. AUS contains Artificial UltraSound images from Types 1, 2, and 3. Each folder includes an explanation.txt.
Type 1:- https://www.kaggle.com/datasets/ignaciorlando/ussimandsegm
Type 2:- https://www.kaggle.com/datasets/timurrashitov/ultrasoundnervesorted
Type 3:- https://www.kaggle.com/datasets/aryashah2k/breast-ultrasound-images-dataset
The BUS-BRA Dataset is a publicly available collection of anonymized breast ultrasound (BUS) images from 1,064 patients. This dataset is designed to support the development and evaluation of computer-aided diagnosis (CAD) systems for breast cancer detection and analysis.
The dataset includes biopsy-proven tumor cases and BI-RADS (Breast Imaging Reporting and Data System) annotations across categories 2, 3, 4, and 5. Crucially, it also provides ground truth delineations, segmenting the ultrasound images into tumoral and normal regions.
This dataset contains a wealth of information for researchers working on breast ultrasound analysis.
A comprehensive description of the BUS-BRA dataset can be found in the following publication, which must be cited in any research utilizing this dataset:
Wilfrido Gómez-Flores, Maria Julia Gregorio-Calas, and Wagner Coelho de Albuquerque Pereira, "BUS-BRA: A Breast Ultrasound Dataset for Assessing Computer-aided Diagnosis Systems," Medical Physics, vol. 51, pp. 3110-3123, 2024, DOI: 10.1002/mp.16812.
The dataset is copyrighted and primarily distributed by the Program of Biomedical Engineering of the Federal University of Rio de Janeiro (PEB/COPPE-UFRJ, Brazil). The Centro de Investigación y de Estudios Avanzados (Cinvestav, Mexico) is also actively involved in its development for research purposes.
This dataset is highly valuable for a range of research and development activities, including CAD system development, tumor segmentation and classification, and BI-RADS category assessment.
This dataset is available under the Creative Commons Attribution 4.0 International license. This means you are free to share and adapt the dataset for any purpose, even commercially, as long as you give appropriate credit to the creators.
Wagner Coelho de Albuquerque Pereira
Program of Biomedical Engineering, Federal University of Rio de Janeiro (PEB/COPPE-UFRJ, Brazil); Centro de Investigación y de Estudios Avanzados (Cinvestav, Mexico)
The creators acknowledge the Program of Biomedical Engineering of the Federal University of Rio de Janeiro (PEB/COPPE-UFRJ, Brazil) and the Centro de Investigación y de Estudios Avanzados (Cinvestav, Mexico) for their roles in developing and distributing this dataset.
The source codes used to replicate the experiments described in the related article are available on GitHub: https://github.com/wgomezf/BUS-BRA
Keywords: breast ultrasound, computer-aided diagnosis, BI-RADS categories, tumor segmentation and classification
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
## Overview
Liver Ultrasound Semantic Segmentation is a dataset for semantic segmentation tasks; it contains liver ultrasound annotations for 100 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
This dataset was created by Ankit8467.
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
The item published is a dataset providing the raw data and original code used to generate Figure 4 in the research paper "Correlative non-destructive techniques to investigate ageing and orientation effects in automotive Li-ion pouch cells" (https://doi.org/10.5522/04/c.6868027), of which I am first author. The measurements and subsequent data analysis took place between January 2022 and November 2022.
The figure illustrates the ultrasonic mapping measurements of pouch cells that have been extracted from electric vehicles and have been aged in real-world conditions. The degradation of the cells was measured using four different complementary characterisation measurement techniques, one of which was ultrasonic mapping.
The ultrasonic mapping measurements were performed using an Olympus Focus PX phased-array instrument (Olympus Corp., Japan) with a 5 MHz 1D linear phased-array probe consisting of 64 transducer elements. The probe had an active aperture of 64 mm with an element pitch (centre-to-centre distance between elements) of 1 mm. The cell was covered with ultrasonic couplant (Fannin UK Ltd.) prior to every scan to ensure good acoustic transmission. The probe was moved along the length of each cell at a fixed pressure using an Olympus GLIDER 2-axis encoded scanner with the step size set at 1 mm, giving a resolution of ca. 1 mm². Because of the large size of the cells, the active aperture of the probe covered only one third of the cell's width, so three measurements were taken for each cell and the data were combined to form the colour maps.
Data from the ultrasonic signals were analysed using FocusPC software. The waveforms recorded by the transducer were exported and plotted using custom Python code to compare how the signal changes at different points in the cell. For consistency, a specific ToF range was selected for all cells, chosen because it contains the part of the waveform known as the ‘echo peak’. The echo peak is useful to monitor because there the waveform has travelled the whole way through the cell and reflected from the back surface, thereby characterising the entire cell. The maximum amplitude of the ultrasonic signal within this ToF range at each point is combined to produce a colour map.
The signal amplitude is expressed as a percentage of the maximum intensity: 100 means the signal has been attenuated the least as it travels through the cell, and 0 is the minimum intensity. The intensity is absolute and not normalised across scans, so amplitude values from different cells can be directly compared. The Pristine cell is a second-generation Nissan Leaf pouch, different from the first-generation aged cells of varying orientation; the authors were not able to acquire an identical first-generation pristine Nissan Leaf cell. Nonetheless, the Pristine cell was expected to have a uniform internal structure regardless of its specific chemistry, which would appear in an ultrasound map as a single colour (or a narrow colour range).
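The mapping step described above (peak amplitude within a fixed ToF window, expressed as a percentage of full scale) can be sketched as follows; the function and parameter names are illustrative, not taken from the authors' code.

```python
def amplitude_map(waveforms, tof_start, tof_end, full_scale):
    """Peak echo-peak amplitude per scan point, as a percentage of full scale.

    ``waveforms`` is a list of sampled A-scans (one list of amplitudes per
    scan point); ``tof_start``/``tof_end`` are sample indices bounding the
    ToF window around the echo peak. Illustrative sketch only.
    """
    return [
        100.0 * max(abs(s) for s in w[tof_start:tof_end]) / full_scale
        for w in waveforms
    ]
```

Because the result is a fraction of an absolute full-scale intensity rather than a per-scan normalisation, values computed for different cells remain directly comparable, matching the description above.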
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Musculoskeletal disorders present significant health and economic challenges on a global scale. Current intraoperative imaging techniques, including computed tomography (CT) and radiography, involve high radiation exposure and limited soft tissue visualization. Ultrasound (US) offers a non-invasive, real-time alternative but is highly observer-dependent and underutilized intraoperatively. US enhanced by artificial intelligence shows high potential for observer-independent pattern recognition and robot-assisted applications in orthopedics. Given the limited availability of in-vivo imaging data, we introduce a comprehensive dataset from a comparative collection of handheld US (HUS) and robot-assisted ultrasound (RUS) lumbar spine imaging in 63 healthy volunteers. This dataset includes demographic data, paired CT, HUS, RUS imaging, synchronized tracking data for HUS and RUS, and 3D-CT-segmentations. It establishes a robust baseline for machine learning algorithms by focusing on healthy individuals, circumventing the limitations of simulations and pathological anatomy. To our knowledge, this extensive collection is the first healthy anatomy dataset for the lumbar spine that includes paired CT, HUS, and RUS imaging, supporting advancements in computer- and robotic-assisted diagnostic and intraoperative techniques for musculoskeletal disorders.
BTX24/PCOS-ultrasound-dataset, hosted on Hugging Face and contributed by the HF Datasets community.
Privacy policy: https://www.futuremarketinsights.com/privacy-policy
The ultrasound systems market is estimated to reach USD 11,260.1 million in 2025. It is estimated that revenue will increase at a CAGR of 5.5% between 2025 and 2035. The market is anticipated to reach USD 19,233.9 million by 2035.
| Attributes | Key Insights |
|---|---|
| Historical Size, 2024 | USD 10,673.1 million |
| Estimated Size, 2025 | USD 11,260.1 million |
| Projected Size, 2035 | USD 19,233.9 million |
| Value-based CAGR (2025 to 2035) | 5.5% |
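The table's figures are internally consistent with the stated 5.5% CAGR; a quick compound-growth check:

```python
def project(value, cagr, years):
    """Compound ``value`` at annual rate ``cagr`` (0.055 for 5.5%) over ``years``."""
    return value * (1 + cagr) ** years

# USD 11,260.1 M in 2025 at 5.5% CAGR over 10 years ~ USD 19,233.9 M in 2035
projected_2035 = project(11260.1, 0.055, 10)
```

Growing the 2024 historical size of USD 10,673.1 million by one year at 5.5% likewise reproduces the 2025 estimate of USD 11,260.1 million.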
Semi-Annual Industry Outlook
| Particular | Value CAGR |
|---|---|
| H1 | 6.3% (2024 to 2034) |
| H2 | 6.0% (2024 to 2034) |
| H1 | 5.5% (2025 to 2035) |
| H2 | 5.0% (2025 to 2035) |
Country-wise Insights
| Countries | Value CAGR (2025 to 2035) |
|---|---|
| United States | 3.4% |
| Canada | 4.4% |
| Germany | 4.5% |
| France | 3.8% |
| Italy | 4.7% |
| UK | 6.6% |
| Spain | 4.5% |
| China | 5.5% |
Category-wise Insights
| Modality | Cart/Trolley Based Ultrasound Systems |
|---|---|
| Value Share (2025) | 66.4% |

| Application | Radiology |
|---|---|
| Value Share (2025) | 41.6% |
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
Ultrasound imaging data (RF and scan-converted formats) of custom-made elastography phantoms. The data are robotically acquired image sequences with controlled, gradually increasing phantom indentation. The phantom consists of a background medium traversed by a cylindrical inclusion. The data (images and RF signals) are contained in the six folders Acqui1 → Acqui6, corresponding to the six image sequences acquired on the phantom. Each Acqui_i folder contains two subfolders: RF, containing RF signals in .mat format (readable in Matlab), and US_Image, containing the images of the sequence in PNG format. Both images and RF signals are numbered, with each index corresponding to an indentation level and a force measured by the force sensor, as outlined in the Excel file (Tabulation 1). Each RFi.mat file comprises 3152 rows representing the signal along the temporal axis and 256 columns corresponding to the number of A-lines in the image.
The Excel file has 5 tabs:
- The first tab contains, for each of the six acquired image sequences, the frame number in the sequence, the corresponding probe indentation (mm), the recorded voltage on the force sensor (V), and the corresponding calculated force (N). Each image in the six sequences is thus identified by a frame number, an indentation, and a force value.
- The second tab provides the acquisition parameters of the ultrasound images (frequency, depth, gain, etc.), performed using the SonixTablet ultrasound system from Ultrasonix (now BK Medical).
- The third tab contains stress-strain curves and the mean and standard deviation of the Young's modulus for both the inclusion and the background of the phantom. The Young's modulus was obtained through compression tests conducted with an electromechanical testing machine (Bose, ElectroForce 3200) on 10 small cylindrical samples taken from the background and the inclusion.
- The fourth tab contains the geometry and dimensions of the phantom.
- The fifth tab contains the recipe used to make the gelatin phantom.
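Given the described RF layout (3152 time samples × 256 A-lines per .mat file), one line of an amplitude map can be computed by taking the peak absolute amplitude of each A-line inside a chosen time window. A sketch under those assumptions, not the authors' processing code:

```python
def colormap_line(rf_frame, row_start, row_end):
    """One colour-map line from an RF frame.

    ``rf_frame`` is a row-major matrix (rows: time samples along the
    temporal axis, columns: A-lines), mirroring the described 3152 x 256
    layout. For each A-line column, return the peak absolute amplitude
    inside the chosen time-sample window.
    """
    n_cols = len(rf_frame[0])
    return [
        max(abs(rf_frame[r][c]) for r in range(row_start, row_end))
        for c in range(n_cols)
    ]
```

In practice the RFi.mat matrices would be read with a .mat loader (e.g. from Matlab or scipy) and each frame reduced to one such line, indexed by the frame's indentation and force values from the first Excel tab.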