Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset provides a large, paired collection of robotic and handheld lumbar spine ultrasound (US) imaging with ground-truth computed tomography (CT) benchmarking. The data include comprehensive imaging from 63 healthy volunteers, offering a robust baseline for machine learning algorithms. The dataset is structured to include demographic data, paired CT, handheld US (HUS), and robot-assisted ultrasound (RUS) imaging, synchronized tracking data, and 3D-CT segmentations, providing a solid foundation for analyzing and advancing musculoskeletal imaging techniques.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
626 additional images with thyroid/nodule masks.
https://creativecommons.org/publicdomain/zero/1.0/
This dataset was created by hahnec
Released under CC0: Public Domain
BTX24/bus_uc_classification-ultrasound-dataset is a dataset hosted on Hugging Face and contributed by the HF Datasets community.
EchoNet-Dynamic is a dataset of over 10k echocardiogram, or cardiac ultrasound, videos from unique patients at Stanford University Medical Center. Each apical-4-chamber video is accompanied by an estimated ejection fraction, end-systolic volume, end-diastolic volume, and tracings of the left ventricle performed by an advanced cardiac sonographer and reviewed by an imaging cardiologist.
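As a minimal loading sketch (assuming the common distribution layout of a FileList.csv index next to a Videos folder of AVI clips; the file, folder, and column names are assumptions to check against the actual release):

# Hedged sketch: FileList.csv columns (FileName, EF, ESV, EDV) and the
# Videos/ folder are assumptions; FileName may or may not include ".avi".
import cv2
import pandas as pd

index = pd.read_csv("EchoNet-Dynamic/FileList.csv")
row = index.iloc[0]
print(row["EF"], row["ESV"], row["EDV"])  # labels for the first video

cap = cv2.VideoCapture("EchoNet-Dynamic/Videos/" + row["FileName"] + ".avi")
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()
print(len(frames), "frames loaded")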
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Here are a few use cases for this project:
Medical Diagnosis Assistance: Physicians can use the "liver_ultrasound" computer vision model to assist them in identifying and classifying different liver conditions such as Hepatocellular Carcinoma (HCC), thus aiding in early diagnosis and treatment plan determination.
Medical Training/Education: As part of their training, medical students or junior doctors may utilize the algorithm to get familiar with and better interpret ultrasound images, improving their understanding of different organ classifications.
Telemedicine: In remote areas without immediate access to experienced radiologists, the model can analyze patient ultrasound images and provide preliminary results and advice, supporting telemedicine applications.
Research Purposes: Researchers studying liver conditions could use the model to automate image categorization, increasing efficiency and the ability to process large amounts of data.
AI in Maternity Care: Since the dataset includes fetal ultrasound images, this model could potentially also aid in prenatal checkups, identifying anomalies in the liver structure of the fetus and informing early interventions where necessary.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A large dataset of routinely acquired maternal-fetal screening ultrasound images collected from two different hospitals by several operators and ultrasound machines. All images were manually labeled by an expert maternal-fetal clinician. Images are divided into 6 classes: four of the most widely used fetal anatomical planes (Abdomen, Brain, Femur and Thorax), the mother's cervix (widely used for prematurity screening) and a general category for any other, less common image plane. Fetal brain images are further categorized into the 3 most common fetal brain planes (Trans-thalamic, Trans-cerebellum, Trans-ventricular) to judge fine-grained categorization performance. Meta information (patient number, US machine, operator) is also provided, as well as the training-test split used in the Nature Sci Rep paper.
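As a hedged sketch of how the metadata and split might be consumed (the CSV filename, separator, and column names below are assumptions, not the documented schema):

# Hedged sketch: filename, separator, and column names are assumptions.
import pandas as pd

meta = pd.read_csv("FETAL_PLANES_DB_data.csv", sep=";")
print(meta["Plane"].value_counts())        # images per anatomical plane
train = meta[meta["Train"] == 1]           # split as used in the Sci Rep paper
test = meta[meta["Train"] == 0]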
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Breast Ultrasound Images Dataset includes 780 high-quality PNG images collected in 2018 from 600 female patients aged 25 to 75. Each image is paired with a ground truth mask and categorized into normal, benign, or malignant classes, supporting supervised learning and diagnostic research for breast cancer detection.
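A minimal pairing sketch, assuming class subfolders (normal/benign/malignant) and a "_mask" filename suffix for the ground-truth masks (both are assumptions about the release layout):

# Hedged sketch: root folder, class subfolders, and "_mask" suffix assumed.
from pathlib import Path

root = Path("Dataset_BUSI_with_GT")        # hypothetical root folder
pairs = []
for cls in ("normal", "benign", "malignant"):
    for img in (root / cls).glob("*.png"):
        if "_mask" in img.stem:            # skip mask files themselves
            continue
        mask = img.with_name(img.stem + "_mask.png")
        if mask.exists():
            pairs.append((img, mask, cls))
print(len(pairs), "image/mask pairs")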
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Ultrasound is a primary diagnostic tool commonly used to evaluate internal body structures, including organs, blood vessels, the musculoskeletal system, and fetal development. Challenges such as operator dependence, noise, a limited field of view, difficulty imaging through bone and air, and variability across different systems make diagnosing abnormalities in ultrasound images particularly difficult for less experienced clinicians. The development of artificial intelligence technology could assist in the diagnosis of ultrasound images. However, many databases are created using a single device type and collection site, limiting the generalizability of machine learning classification models. We have therefore collected a large, publicly accessible ultrasound challenge database intended to significantly enhance the performance of traditional ultrasound image classification. The dataset is derived from publicly available data on the Internet and comprises 1,833 distinct ultrasound samples. It covers 13 different ultrasound image anomalies, and all data have been anonymized. Our data-sharing program aims to support benchmark testing of ultrasound image disease diagnosis and classification accuracy in multicenter environments.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Ultrasound is a dataset for object detection tasks - it contains Objects annotations for 646 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
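A minimal download sketch with the roboflow Python package (the API key, workspace, project, version, and export format below are placeholders, not the actual identifiers):

# Hedged sketch: all identifiers are placeholders.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("ultrasound")
dataset = project.version(1).download("coco")  # export annotations in COCO format
print(dataset.location)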
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
This dataset was derived from tracked biopsy sessions using the Artemis biopsy system, many of which included image fusion with MRI targets. Patients received a 3D transrectal ultrasound scan, after which nonrigid registration (i.e., "fusion") was performed between real-time ultrasound and preoperative MRI, enabling biopsy cores to be sampled from MR regions of interest. Most cases also included sampling of systematic biopsy cores using a 12-core digital template. The Artemis system tracked targeted and systematic core locations using encoder kinematics of a mechanical arm, and recorded locations relative to the ultrasound scan. MRI biopsy coordinates were also recorded for most cases. STL files and biopsy overlays are available and can be visualized in 3D Slicer with the SlicerHeart extension. Spreadsheets summarizing biopsy and MR target data are also available. See the Detailed Description tab below for more information.
MRI targets were defined using multiparametric MRI, e.g. T2-weighted, diffusion-weighted, and perfusion-weighted sequences, and scored on a Likert-like scale with close correspondence to PI-RADS version 2. T2-weighted MRI was used to trace ROI contours and is the only sequence provided in this dataset. MR imaging was performed on a 3 Tesla Trio, Verio or Skyra scanner (Siemens, Erlangen, Germany). A transabdominal phased array was used in all cases, and an endorectal coil was used in a subset of cases. The majority of pulse sequences are 3D T2:SPC, with TR/TE 2200/203, Matrix/FOV 256 × 205/14 × 14 cm, and 1.5 mm slice spacing. Some cases were instead 3D T2:TSE with TR/TE 3800–5040/101, and a small minority were imported from other institutions (various T2 protocols).
Ultrasound scans were performed with a Hitachi Hi-Vision 5500 7.5 MHz or a Noblus C41V 2-10 MHz end-fire probe. 3D scans were acquired by rotating the end-fire probe 200 degrees about its axis and interpolating to resample the volume with isotropic resolution.
Patients with suspicion of prostate cancer due to elevated PSA and/or suspicious imaging findings were consecutively accrued. Any consented patient who underwent or had planned to receive a routine, standard-of-care prostate biopsy at the UCLA Clark Urology Center was included.
Note: Some Private Tags in this collection are critical to properly displaying the STL surface and the Prostate anatomy. Private Tag (1129,"Eigen, Inc",1016) DS VoxelSize is especially important for multi-frame US cases.
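As a hedged sketch of reading that tag with pydicom (the filename is hypothetical; the element offset 0x16 follows from the stated tag (1129,xx16) under private creator "Eigen, Inc"):

# Hedged sketch: the filename is hypothetical.
import pydicom

ds = pydicom.dcmread("us_volume.dcm")
block = ds.private_block(0x1129, "Eigen, Inc")  # locate the private creator block
voxel_size = block[0x16].value                  # DS VoxelSize element
print("Voxel size:", voxel_size)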
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Musculoskeletal disorders present significant health and economic challenges on a global scale. Current intraoperative imaging techniques, including computed tomography (CT) and radiography, involve high radiation exposure and limited soft tissue visualization. Ultrasound (US) offers a non-invasive, real-time alternative but is highly observer-dependent and underutilized intraoperatively. US enhanced by artificial intelligence shows high potential for observer-independent pattern recognition and robot-assisted applications in orthopedics. Given the limited availability of in-vivo imaging data, we introduce a comprehensive dataset from a comparative collection of handheld US (HUS) and robot-assisted ultrasound (RUS) lumbar spine imaging in 63 healthy volunteers. This dataset includes demographic data, paired CT, HUS, RUS imaging, synchronized tracking data for HUS and RUS, and 3D-CT-segmentations. It establishes a robust baseline for machine learning algorithms by focusing on healthy individuals, circumventing the limitations of simulations and pathological anatomy. To our knowledge, this extensive collection is the first healthy anatomy dataset for the lumbar spine that includes paired CT, HUS, and RUS imaging, supporting advancements in computer- and robotic-assisted diagnostic and intraoperative techniques for musculoskeletal disorders.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains a curated benchmark collection of 1,062 labelled lung ultrasound (LUS) images collected from patients at Mulago National Referral Hospital and Kiruddu Referral Hospital in Kampala, Uganda. The images were acquired and annotated by senior radiologists to support the development and evaluation of artificial intelligence (AI) models for pulmonary disease diagnosis. Each image is categorized into one of three classes: Probably COVID-19 (COVID-19), Diseased Lung but Probably Not COVID-19 (Other Lung Disease), and Healthy Lung.
The dataset addresses key challenges in LUS interpretation, including inter-operator variability, low signal-to-noise ratios, and reliance on expert sonographers. It is particularly suitable for training and testing convolutional neural network (CNN)-based models for medical image classification tasks in low-resource settings. The images are provided in standard formats such as PNG or JPEG, with corresponding labels stored in structured files like CSV or JSON to facilitate ease of use in machine learning workflows.
In this second version of the dataset, we have extended the resource by including a folder containing the original unprocessed raw data, as well as the scripts used to process, clean, and sort the data into the final labelled set. These additions promote transparency and reproducibility, allowing researchers to understand the full data pipeline and adapt it for their own applications. This resource is intended to advance research in deep learning for lung ultrasound analysis and to contribute toward building more accessible and reliable diagnostic tools in global health.
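As a hedged sketch of the kind of CNN input pipeline this dataset is meant to feed (the images folder, CSV filename, and column names are assumptions):

# Hedged sketch: images/ folder, labels.csv, and its columns are assumptions;
# decode_png assumes PNG inputs (the release may also contain JPEGs).
import pandas as pd
import tensorflow as tf

labels = pd.read_csv("labels.csv")
classes = sorted(labels["class"].unique())   # COVID-19 / Other Lung Disease / Healthy

def load(path, label):
    img = tf.io.decode_png(tf.io.read_file(path), channels=1)
    return tf.image.resize(img, (224, 224)) / 255.0, label

paths = ("images/" + labels["filename"]).tolist()
y = labels["class"].map(classes.index).tolist()
ds = tf.data.Dataset.from_tensor_slices((paths, y)).map(load).batch(32)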
DukeUltrasound is an ultrasound dataset collected at Duke University with a Verasonics c52v probe. It contains delay-and-sum (DAS) beamformed data as well as data post-processed with Siemens Dynamic TCE for speckle reduction, contrast enhancement and improvement in conspicuity of anatomical structures. These data were collected with support from the National Institute of Biomedical Imaging and Bioengineering under Grant R01-EB026574 and National Institutes of Health under Grant 5T32GM007171-44. A usage example is available here.
To use this dataset:
import tensorflow_datasets as tfds

ds = tfds.load('duke_ultrasound', split='train')
for ex in ds.take(4):
    print(ex)
See the guide for more information on tensorflow_datasets.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset comprises 9,416 images categorized into 'Normal' and 'Stone' classes, with 4,414 and 5,002 images respectively, collected from various scan centers and hospitals while ensuring the privacy and confidentiality of patient information. The images were obtained using different ultrasound machines, namely the SAMSUNG RS85, SAMSUNG HS60, SAMSUNG RS80A, and SAMSUNG HS70A, among others.
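A minimal training-input sketch, assuming the images are organized into Normal/ and Stone/ subfolders (the directory layout and root name are assumptions):

# Hedged sketch: directory layout is an assumption.
import tensorflow as tf

ds = tf.keras.utils.image_dataset_from_directory(
    "kidney_us",              # hypothetical root containing Normal/ and Stone/
    labels="inferred",
    label_mode="binary",
    image_size=(224, 224),
    batch_size=32,
)
print(ds.class_names)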
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset comprises micro-ultrasound scans and human prostate annotations of 75 patients who underwent micro-ultrasound guided prostate biopsy at the University of Florida. All images and segmentations have been fully de-identified in the NIFTI format.
Under the "train" folder, you'll find three subfolders:
"micro_ultrasound_scans" contains micro-ultrasound images from 55 patients for training.
"expert_annotations" contains ground truth prostate segmentations annotated by our expert urologist.
"non_expert_annotations" contains prostate segmentations annotated by a graduate student.
In the "test" folder, there are five subfolders:
"micro_ultrasound_scans" contains micro-ultrasound images from 20 patients for testing.
"expert_annotations" contains ground truth prostate segmentations by the expert urologist.
"master_student_annotations" contains segmentations by a master's student.
"medical_student_annotations" contains segmentations by a medical student.
"clinician_annotations" contains segmentations by a urologist with limited experience in reading micro-ultrasound images.
If you use this dataset, please cite our paper: Jiang, Hongxu, et al. "MicroSegNet: A deep learning approach for prostate segmentation on micro-ultrasound images." Computerized Medical Imaging and Graphics (2024): 102326. DOI: https://doi.org/10.1016/j.compmedimag.2024.102326.
For any dataset-related queries, please reach out to Dr. Wei Shao: weishao@ufl.edu.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This resource is a dataset of routinely acquired maternal-fetal screening ultrasound images collected at five centers in five African countries (Malawi, Egypt, Uganda, Ghana and Algeria), associated with the journal article Sendra-Balcells et al., "Generalisability of fetal ultrasound deep learning models to low-resource imaging settings in five African countries", Scientific Reports. The images correspond to the four most common fetal planes: abdomen, brain, femur and thorax. A CSV file is provided that associates image filenames with plane types and patient numbers, as well as the training/testing partitioning used in the associated publication.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Ultrasound imaging data (RF and scan-converted formats) of custom-made elastography phantoms. The data are robotically acquired image sequences with controlled, gradually increasing phantom indentation. The phantom consists of a background medium traversed by a cylindrical inclusion.

The data (images and RF signals) are contained in six folders, Acqui1 → Acqui6, corresponding to the six acquired image sequences on the phantom. Each Acqui_i folder contains two subfolders: RF, containing RF signals in .mat format (readable in Matlab), and US_Image, containing the images of the sequence in PNG format. Both images and RF signals are numbered, with each index corresponding to an indentation level and a force measured by the force sensor, as outlined in the Excel file (tab 1). Each RFi.mat file comprises 3152 rows representing the signal along the temporal axis and 256 columns corresponding to the number of A-lines in the image.

The Excel file has 5 tabs:
- The first tab contains, for each of the six acquired image sequences, the frame number in the sequence, the corresponding probe indentation (mm), the recorded voltage on the force sensor (V), and the corresponding calculated force (N). Each image in the six sequences is thus identified by a frame number, an indentation, and a force value.
- The second tab provides the acquisition parameters of the ultrasound images (frequency, depth, gain, etc.), performed using the SonixTablet ultrasound system from Ultrasonix (now BK Medical).
- The third tab contains stress-strain curves and the mean and standard deviation of the Young's modulus for both the inclusion and the background of the phantom. The Young's modulus was obtained through compression tests conducted with an electromechanical testing machine (Bose ElectroForce 3200) on 10 small cylindrical samples taken from the background and the inclusion.
- The fourth tab contains the geometry and dimensions of the phantom.
- The fifth tab contains the recipe used to make the gelatin phantom.
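As a hedged sketch of turning one RF frame into a log-compressed B-mode image (the .mat variable name is an assumption; inspect mat.keys() on the real file):

# Hedged sketch: the "RF" variable name is an assumption.
import numpy as np
from scipy.io import loadmat
from scipy.signal import hilbert

mat = loadmat("Acqui1/RF/RF1.mat")
rf = mat["RF"]                                 # 3152 samples x 256 A-lines
env = np.abs(hilbert(rf, axis=0))              # envelope along the temporal axis
bmode = 20 * np.log10(env / env.max() + 1e-6)  # log compression (dB)
print(bmode.shape)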
https://aimistanford-web-api.azurewebsites.net/licenses/f1f352a6-243f-4905-8e00-389edbca9e83/view
We collected data from 167 patients with biopsy-confirmed thyroid nodules (n=192) at the Stanford University Medical Center. The dataset consists of ultrasound cine-clip images, radiologist-annotated segmentations, patient demographics, lesion size and location, TI-RADS descriptors, and histopathological diagnoses.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The MMOTU dataset consists of ovarian ultrasound images collected from Beijing Shijitan Hospital, Capital Medical University. It is divided into two subsets: OTU 2D, which contains 2D ultrasound images, and OTU CEUS, which consists of 170 images extracted from contrast-enhanced ultrasound (CEUS) sequences. Stored here is the MMOTU ovarian tumor ultrasound dataset as used in the paper "PMFFNet: A hybrid network based on feature pyramid for ovarian tumor segmentation"; if needed, you can download and access it yourself. To access the original MMOTU dataset, please use the following link: https://drive.google.com/drive/folders/1c5n0fVKrM9-SZE1kacTXPt1pt844iAs1