100+ datasets found
  1. High-Quality Ultrasound Dataset

    • defined.ai
    Updated Apr 20, 2024
    Cite
    Defined.ai (2024). High-Quality Ultrasound Dataset [Dataset]. https://defined.ai/datasets/medical-scans-ultrasounds
    Explore at:
    Dataset updated
    Apr 20, 2024
    Dataset provided by
    Defined.ai
    Description

    Boost your AI projects with our 40,000-image, high-quality ultrasound dataset in DICOM format, ideal for healthcare computer vision.

  2. Breast-Cancer-Ultrasound-Images-Dataset

    • huggingface.co
    Updated Oct 14, 2025
    Cite
    Gym Prathap (2025). Breast-Cancer-Ultrasound-Images-Dataset [Dataset]. https://huggingface.co/datasets/gymprathap/Breast-Cancer-Ultrasound-Images-Dataset
    Explore at:
    Croissant (a machine-learning dataset format; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 14, 2025
    Authors
    Gym Prathap
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Worldwide, breast cancer ranks among the leading causes of death in women, and early detection reduces premature deaths. The data are medical ultrasound scans showing signs of breast cancer. The Breast Ultrasound Dataset contains three classes of images: normal, benign, and malignant. Applying machine learning to breast ultrasound images improves the detection, classification, and segmentation of breast cancer. Data: Image data… See the full description on the dataset page: https://huggingface.co/datasets/gymprathap/Breast-Cancer-Ultrasound-Images-Dataset.

  3. Data from: A Dataset of Lung Ultrasound Images for Automated AI-based Lung Disease Classification

    • data.mendeley.com
    Updated Jul 10, 2025
    Cite
    Andrew Katumba (2025). A Dataset of Lung Ultrasound Images for Automated AI-based Lung Disease Classification [Dataset]. http://doi.org/10.17632/hb3p34ytvx.2
    Explore at:
    Dataset updated
    Jul 10, 2025
    Authors
    Andrew Katumba
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains a curated benchmark collection of 1,062 labelled lung ultrasound (LUS) images collected from patients at Mulago National Referral Hospital and Kiruddu Referral Hospital in Kampala, Uganda. The images were acquired and annotated by senior radiologists to support the development and evaluation of artificial intelligence (AI) models for pulmonary disease diagnosis. Each image is categorized into one of three classes: Probably COVID-19 (COVID-19), Diseased Lung but Probably Not COVID-19 (Other Lung Disease), and Healthy Lung.

    The dataset addresses key challenges in LUS interpretation, including inter-operator variability, low signal-to-noise ratios, and reliance on expert sonographers. It is particularly suitable for training and testing convolutional neural network (CNN)-based models for medical image classification tasks in low-resource settings. The images are provided in standard formats such as PNG or JPEG, with corresponding labels stored in structured files like CSV or JSON to facilitate ease of use in machine learning workflows.

    In this second version of the dataset, we have extended the resource by including a folder containing the original unprocessed raw data, as well as the scripts used to process, clean, and sort the data into the final labelled set. These additions promote transparency and reproducibility, allowing researchers to understand the full data pipeline and adapt it for their own applications. This resource is intended to advance research in deep learning for lung ultrasound analysis and to contribute toward building more accessible and reliable diagnostic tools in global health.
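    Since the images are distributed as PNG/JPEG files with labels in structured CSV/JSON files, a minimal Python loading sketch could look like the following. The directory name, label-file name, and column names here are illustrative assumptions, not the dataset's documented layout.

    ```python
    # Minimal loading sketch for an images/ folder plus a labels CSV.
    # Folder, file, and column names are assumptions; adapt to the archive.
    import csv
    from pathlib import Path

    from PIL import Image  # pip install pillow

    DATA_DIR = Path("lung_ultrasound")       # assumed extraction directory
    LABELS_CSV = DATA_DIR / "labels.csv"     # assumed label file

    CLASSES = ["COVID-19", "Other Lung Disease", "Healthy Lung"]

    def iter_samples():
        """Yield (PIL.Image, class_label) pairs from the assumed CSV layout."""
        with open(LABELS_CSV, newline="") as f:
            for row in csv.DictReader(f):    # assumed columns: filename, label
                img = Image.open(DATA_DIR / "images" / row["filename"]).convert("L")
                yield img, row["label"]

    if __name__ == "__main__":
        img, label = next(iter_samples())
        print(img.size, label)
    ```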

  4. Prostate MRI and Ultrasound With Pathology and Coordinates of Tracked Biopsy

    • cancerimagingarchive.net
    • stage.cancerimagingarchive.net
    dicom, n/a, xlsx, zip
    Updated Sep 17, 2020
    Cite
    The Cancer Imaging Archive (2020). Prostate MRI and Ultrasound With Pathology and Coordinates of Tracked Biopsy [Dataset]. http://doi.org/10.7937/TCIA.2020.A61IOC1A
    Explore at:
    Available download formats: zip, xlsx, dicom, n/a
    Dataset updated
    Sep 17, 2020
    Dataset authored and provided by
    The Cancer Imaging Archive
    License

    https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/

    Time period covered
    Oct 20, 2023
    Dataset funded by
    National Cancer Institute, http://www.cancer.gov/
    Description

    This dataset was derived from tracked biopsy sessions using the Artemis biopsy system, many of which included image fusion with MRI targets. Patients received a 3D transrectal ultrasound scan, after which nonrigid registration (i.e., “fusion”) was performed between real-time ultrasound and preoperative MRI, enabling biopsy cores to be sampled from MR regions of interest. Most cases also included sampling of systematic biopsy cores using a 12-core digital template. The Artemis system tracked targeted and systematic core locations using encoder kinematics of a mechanical arm, and recorded locations relative to the ultrasound scan. MRI biopsy coordinates were also recorded for most cases. STL files and biopsy overlays are available and can be visualized in 3D Slicer with the SlicerHeart extension. Spreadsheets summarizing biopsy and MR target data are also available. See the Detailed Description tab below for more information.

    MRI targets were defined using multiparametric MRI, e.g. T2-weighted, diffusion-weighted, and perfusion-weighted sequences, and scored on a Likert-like scale with close correspondence to PIRADS version 2. T2-weighted MRI was used to trace ROI contours, and is the only sequence provided in this dataset. MR imaging was performed on a 3 Tesla Trio, Verio or Skyra scanner (Siemens, Erlangen, Germany). A transabdominal phased array was used in all cases, and an endorectal coil was used in a subset of cases. The majority of pulse sequences are 3D T2:SPC, with TR/TE 2200/203, Matrix/FOV 256 × 205/14 × 14 cm, and 1.5 mm slice spacing. Some cases were instead 3D T2:TSE with TR/TE 3800–5040/101, and a small minority were imported from other institutions (various T2 protocols).

    Ultrasound scans were performed with a Hitachi Hi-Vision 5500 7.5 MHz or Noblus C41V 2-10 MHz end-fire probe. 3D scans were acquired by rotating the end-fire probe 200 degrees about its axis and interpolating to resample the volume at isotropic resolution.

    Patients with suspicion of prostate cancer due to elevated PSA and/or suspicious imaging findings were consecutively accrued. Any consented patient who underwent or had planned to receive a routine, standard-of-care prostate biopsy at the UCLA Clark Urology Center was included.

    Note: Some Private Tags in this collection are critical to properly displaying the STL surface and the Prostate anatomy. Private Tag (1129,"Eigen, Inc",1016) DS VoxelSize is especially important for multi-frame US cases.
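    As a hedged illustration, the private VoxelSize tag called out above can be inspected with pydicom roughly as follows (the file name is a placeholder, and the private-creator string is assumed to be stored exactly as the note gives it, "Eigen, Inc"):

    ```python
    # Sketch: read the Eigen private VoxelSize tag (1129,xx16) from a
    # multi-frame ultrasound DICOM file. The path is a placeholder.
    import pydicom

    ds = pydicom.dcmread("example_multiframe_us.dcm")  # placeholder file name

    try:
        block = ds.private_block(0x1129, "Eigen, Inc")  # private creator per the note above
        voxel_size = block[0x16].value                  # DS VoxelSize
        print("Eigen VoxelSize:", voxel_size)
    except KeyError:
        print("Eigen private block or VoxelSize tag not present in this file")
    ```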

  5. Micro-Ultrasound Prostate Segmentation Dataset

    • data.niaid.nih.gov
    Updated Jan 9, 2024
    Cite
    Shao, Wei; Brisbane, Wayne (2024). Micro-Ultrasound Prostate Segmentation Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10475292
    Explore at:
    Dataset updated
    Jan 9, 2024
    Dataset provided by
    University of Florida
    University of California, Los Angeles
    Authors
    Shao, Wei; Brisbane, Wayne
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset comprises micro-ultrasound scans and human prostate annotations of 75 patients who underwent micro-ultrasound guided prostate biopsy at the University of Florida. All images and segmentations have been fully de-identified in the NIFTI format.

    Under the "train" folder, you'll find three subfolders:

    "micro_ultrasound_scans" contains micro-ultrasound images from 55 patients for training.

    "expert_annotations" contains ground truth prostate segmentations annotated by our expert urologist.

    "non_expert_annotations" contains prostate segmentations annotated by a graduate student.

    In the "test" folder, there are five subfolders:

    "micro_ultrasound_scans" contains micro-ultrasound images from 20 patients for testing.

    "expert_annotations" contains ground truth prostate segmentations by the expert urologist.

    "master_student_annotations" contains segmentations by a master's student.

    "medical_student_annotations" contains segmentations by a medical student.

    "clinician_annotations" contains segmentations by a urologist with limited experience in reading micro-ultrasound images.

    If you use this dataset, please cite our paper: Jiang, Hongxu, et al. "MicroSegNet: A deep learning approach for prostate segmentation on micro-ultrasound images." Computerized Medical Imaging and Graphics (2024): 102326. DOI: https://doi.org/10.1016/j.compmedimag.2024.102326.

    For any dataset-related queries, please reach out to Dr. Wei Shao: weishao@ufl.edu.
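    A minimal sketch of iterating the train split described above with nibabel follows; pairing scans and annotations by identical file names is our assumption and should be checked against the actual archive.

    ```python
    # Sketch: pair each training scan with its expert annotation by file name.
    # The root directory name and the matched-name assumption are ours.
    from pathlib import Path

    import nibabel as nib  # pip install nibabel

    ROOT = Path("micro_ultrasound_prostate")   # assumed extraction directory
    scan_dir = ROOT / "train" / "micro_ultrasound_scans"
    mask_dir = ROOT / "train" / "expert_annotations"

    for scan_path in sorted(scan_dir.glob("*.nii*")):
        mask_path = mask_dir / scan_path.name          # assumed matching names
        scan = nib.load(str(scan_path)).get_fdata()
        mask = nib.load(str(mask_path)).get_fdata()
        print(scan_path.name, scan.shape, mask.shape)
    ```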

  6. Data from: An annotated heterogeneous ultrasound database

    • springernature.figshare.com
    bin
    Updated Jan 26, 2025
    Cite
    Yuezhe Yang; Yonglin Chen; Xingbo Dong; Junning Zhang; Chihui Long; Zhe Jin; Yong Dai (2025). An annotated heterogeneous ultrasound database [Dataset]. http://doi.org/10.6084/m9.figshare.26889334.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Jan 26, 2025
    Dataset provided by
    figshare
    Authors
    Yuezhe Yang; Yonglin Chen; Xingbo Dong; Junning Zhang; Chihui Long; Zhe Jin; Yong Dai
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Ultrasound is a primary diagnostic tool commonly used to evaluate internal body structures, including organs, blood vessels, the musculoskeletal system, and fetal development. Challenges such as operator dependence, noise, a limited field of view, difficulty imaging through bone and air, and variability across systems make diagnosing abnormalities in ultrasound images particularly difficult for less experienced clinicians. Artificial intelligence could assist in the diagnosis of ultrasound images. However, many databases are created using a single device type and collection site, limiting the generalizability of machine learning classification models. Therefore, we have collected a large, publicly accessible ultrasound challenge database intended to significantly enhance the performance of traditional ultrasound image classification. This dataset is derived from publicly available data on the Internet and comprises 1,833 distinct ultrasound data samples. It includes 13 different ultrasound image anomalies, and all data have been anonymized. Our data-sharing program aims to support benchmark testing of ultrasound image disease diagnosis and classification accuracy in multicenter environments.

  7. EchoNet-Dynamic Cardiac Ultrasound

    • aimi.stanford.edu
    Updated Jan 15, 2020
    Cite
    (2020). EchoNet-Dynamic Cardiac Ultrasound [Dataset]. https://aimi.stanford.edu/echonet-dynamic-cardiac-ultrasound
    Explore at:
    Dataset updated
    Jan 15, 2020
    Description

    EchoNet-Dynamic is a dataset of over 10k echocardiogram, or cardiac ultrasound, videos from unique patients at Stanford University Medical Center. Each apical-4-chamber video is accompanied by an estimated ejection fraction, end-systolic volume, end-diastolic volume, and tracings of the left ventricle performed by an advanced cardiac sonographer and reviewed by an imaging cardiologist.
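    A minimal inspection sketch, assuming the release ships the videos in a Videos/ folder alongside a metadata table such as FileList.csv (these names, and the use of OpenCV, are assumptions rather than documented structure):

    ```python
    # Sketch: inspect per-video metadata and open one echo video.
    # FileList.csv, its columns, and the Videos/ folder are assumptions.
    from pathlib import Path

    import cv2          # pip install opencv-python
    import pandas as pd

    ROOT = Path("EchoNet-Dynamic")                 # assumed extraction directory
    meta = pd.read_csv(ROOT / "FileList.csv")      # assumed metadata file
    print(meta.columns.tolist())                   # look for EF / ESV / EDV columns

    video_path = next((ROOT / "Videos").glob("*.avi"))   # assumed video folder
    cap = cv2.VideoCapture(str(video_path))
    print(video_path.name, "frames:", int(cap.get(cv2.CAP_PROP_FRAME_COUNT)))
    cap.release()
    ```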

  8. FETAL_PLANES_DB: Common maternal-fetal ultrasound images

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jun 23, 2020
    Cite
    Xavier P. Burgos-Artizzu; David Coronado-Gutierrez; Brenda Valenzuela-Alcaraz; Elisenda Bonet-Carne; Elisenda Eixarch; Fatima Crispi; Eduard Gratacós (2020). FETAL_PLANES_DB: Common maternal-fetal ultrasound images [Dataset]. http://doi.org/10.5281/zenodo.3904280
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 23, 2020
    Dataset provided by
    Zenodo, http://zenodo.org/
    Authors
    Xavier P. Burgos-Artizzu; David Coronado-Gutierrez; Brenda Valenzuela-Alcaraz; Elisenda Bonet-Carne; Elisenda Eixarch; Fatima Crispi; Eduard Gratacós
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A large dataset of routinely acquired maternal-fetal screening ultrasound images collected from two different hospitals by several operators and ultrasound machines. All images were manually labeled by an expert maternal fetal clinician. Images are divided into 6 classes: four of the most widely used fetal anatomical planes (Abdomen, Brain, Femur and Thorax), the mother’s cervix (widely used for prematurity screening) and a general category to include any other less common image plane. Fetal brain images are further categorized into the 3 most common fetal brain planes (Trans-thalamic, Trans-cerebellum, Trans-ventricular) to judge fine grain categorization performance. Meta information (patient number, us machine, operator) is also provided, as well as the training-test split used in the Nature Sci Rep paper.
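    Since the metadata and the training-test split are provided, a hedged pandas sketch for reproducing the paper's split might look as follows (the CSV file name, delimiter, and column names are assumptions to verify against the archive):

    ```python
    # Sketch: summarise plane labels and reproduce the provided train/test split.
    # File name, delimiter, and column names are assumptions, not documented here.
    import pandas as pd

    meta = pd.read_csv("FETAL_PLANES_DB_data.csv", sep=";")  # assumed name/delimiter

    print(meta["Plane"].value_counts())          # expect 6 plane classes
    train = meta[meta["Train"] == 1]             # assumed split-flag column
    test = meta[meta["Train"] == 0]
    print(len(train), "training images /", len(test), "test images")
    ```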

  9. breast-ultrasound-images

    • kaggle.com
    Updated May 7, 2023
    Cite
    Ly Tran Hoang Hieu (2023). breast-ultrasound-images [Dataset]. https://www.kaggle.com/datasets/lytranhoanghieu/breast-ultrasound-images
    Explore at:
    Croissant (a machine-learning dataset format; learn more at mlcommons.org/croissant)
    Dataset updated
    May 7, 2023
    Dataset provided by
    Kaggle, http://kaggle.com/
    Authors
    Ly Tran Hoang Hieu
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Dataset

    This dataset was created by Ly Tran Hoang Hieu

    Released under CC0: Public Domain

    Contents

  10. Data from: Ultrasound Robot

    • ieee-dataport.org
    Updated Dec 3, 2024
    Cite
    Dahu Zhu (2024). Ultrasound Robot [Dataset]. https://ieee-dataport.org/documents/ultrasound-robot
    Explore at:
    Dataset updated
    Dec 3, 2024
    Authors
    Dahu Zhu
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The area of the gallbladder extracted from the image segmentation.

  11. Raw data and Original Code for Generating Ultrasonic Acoustic Mapping Figure

    • rdr.ucl.ac.uk
    zip
    Updated Oct 10, 2023
    Cite
    Arthur Fordham; Rhodri Owen; Rhodri Jervis (2023). Raw data and Original Code for Generating Ultrasonic Acoustic Mapping Figure [Dataset]. http://doi.org/10.5522/04/24271045.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 10, 2023
    Dataset provided by
    University College London
    Authors
    Arthur Fordham; Rhodri Owen; Rhodri Jervis
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The item published is a dataset that provides the raw data and original code to generate Figure 4 in the research paper, Correlative non-destructive techniques to investigate ageing and orientation effects in automotive Li-ion pouch cells, https://doi.org/10.5522/04/c.6868027 of which I am first author. The measurements and following data analysis took place between January 2022 – November 2022.

    The figure illustrates the ultrasonic mapping measurements of pouch cells that have been extracted from electric vehicles and have been aged in real-world conditions. The degradation of the cells was measured using four different complementary characterisation measurement techniques, one of which was ultrasonic mapping.

    The ultrasonic mapping measurements were performed using an Olympus Focus PX phased-array instrument (Olympus Corp., Japan) with a 5 MHz 1D linear phased array probe consisting of 64 transducers. The transducer had an active aperture of 64 mm with an element pitch (centre-to-centre distance between elements) of 1 mm. The cell was covered with ultrasonic couplant (Fannin UK Ltd.), prior to every scan to ensure good acoustic transmission. The transducer was moved along the length of each cell at a fixed pressure using an Olympus GLIDER 2-axis encoded scanner with the step size set at 1 mm to give a resolution of ca. 1 mm2. Due to the large size of the cells, the active aperture of the probe was wide enough to cover 1/3 the width, meaning that three measurements for each cell were taken and the data was combined to form the colour maps.

    Data from the ultrasonic signals were analysed using FocusPC software. The waveforms recorded by the transducer were exported and plotted using custom Python code to compare how the signal changes at different points in the cell. For consistency, a specific ToF range was selected for all cells, chosen because it is where the part of the waveform known as the ‘echo-peak’ is located. The echo-peak is useful to monitor because, at that point, the waveform has travelled the whole way through the cell and reflected from the back surface, so it characterises the entire cell. The maximum amplitude of the ultrasonic signal within this ToF range at each point is combined to produce a colour map. The signal amplitude is a percentage proportion of 100, where 100 is the maximum intensity of the signal, meaning the signal has been attenuated the least as it travels through the cell, and 0 is the minimum intensity. The intensity is absolute and not normalised across all scans, meaning that amplitude values on different cells can be directly compared. The Pristine cell is a second-generation Nissan Leaf pouch, different to the first-generation aged cells of varying orientation. The authors were not able to acquire an identical first-generation pristine Nissan Leaf cell. Nonetheless, it was expected that the Pristine cell would contain a uniform internal structure regardless of the specific chemistry, and this would be identified in an ultrasound map consisting of a single colour (or narrow colour range).
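    For illustration, the mapping step just described (maximum amplitude within a fixed ToF window at each scan position, rendered as a colour map) can be sketched in a few lines of numpy/matplotlib; the array shapes and window indices below are synthetic placeholders, not values from the published code.

    ```python
    # Sketch of the colour-map construction: per scan position, take the maximum
    # amplitude inside a fixed time-of-flight (ToF) window. Synthetic data only.
    import matplotlib.pyplot as plt
    import numpy as np

    rng = np.random.default_rng(0)
    # waveforms[y, x, t]: signal amplitude (% of full scale) at each scan position
    waveforms = rng.random((60, 180, 2048)) * 100

    tof_window = slice(900, 1100)                    # illustrative echo-peak range
    amplitude_map = waveforms[:, :, tof_window].max(axis=-1)

    plt.imshow(amplitude_map, cmap="viridis", vmin=0, vmax=100)
    plt.colorbar(label="Max echo-peak amplitude (%)")
    plt.xlabel("Position along cell length (mm)")
    plt.ylabel("Position across cell width (mm)")
    plt.title("Ultrasonic acoustic map (synthetic example)")
    plt.show()
    ```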

  12. A large, paired dataset of robotic and handheld lumbar spine ultrasound with ground truth CT benchmarking

    • rdr.kuleuven.be
    Updated Aug 19, 2025
    Cite
    Nicola Cavalcanti; Ruixuan Li; Laura Arango; Ayoob Davoodi; Kaat Van Assche; Yunke Ao; Aidana Massalimova; Mehrdad Saleh; Lukas Zingg; Tobias Götschi; Gianni Borghesan; Christoph J. Laux; Reto Sutter; Mazda Farshad; Matthias Tummers; Philipp Fürnstahl; Emmanuel Vander Poorten; Fabio Carrillo (2025). A large, paired dataset of robotic and handheld lumbar spine ultrasound with ground truth CT benchmarking. [Dataset]. http://doi.org/10.48804/3XPCAE
    Explore at:
    Available download formats: zip, txt, text/comma-separated-values (several hundred zip archives of varying sizes)
    Dataset updated
    Aug 19, 2025
    Dataset provided by
    KU Leuven RDR
    Authors
    Nicola Cavalcanti; Ruixuan Li; Laura Arango; Ayoob Davoodi; Kaat Van Assche; Yunke Ao; Aidana Massalimova; Mehrdad Saleh; Lukas Zingg; Tobias Götschi; Gianni Borghesan; Christoph J. Laux; Reto Sutter; Mazda Farshad; Matthias Tummers; Philipp Fürnstahl; Emmanuel Vander Poorten; Fabio Carrillo
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    European Union’s Horizon 2020 research and innovation programme
    Description

    Musculoskeletal disorders present significant health and economic challenges on a global scale. Current intraoperative imaging techniques, including computed tomography (CT) and radiography, involve high radiation exposure and limited soft tissue visualization. Ultrasound (US) offers a non-invasive, real-time alternative but is highly observer-dependent and underutilized intraoperatively. US enhanced by artificial intelligence shows high potential for observer-independent pattern recognition and robot-assisted applications in orthopedics. Given the limited availability of in-vivo imaging data, we introduce a comprehensive dataset from a comparative collection of handheld US (HUS) and robot-assisted ultrasound (RUS) lumbar spine imaging in 63 healthy volunteers. This dataset includes demographic data, paired CT, HUS, RUS imaging, synchronized tracking data for HUS and RUS, and 3D-CT-segmentations. It establishes a robust baseline for machine learning algorithms by focusing on healthy individuals, circumventing the limitations of simulations and pathological anatomy. To our knowledge, this extensive collection is the first healthy anatomy dataset for the lumbar spine that includes paired CT, HUS, and RUS imaging, supporting advancements in computer- and robotic-assisted diagnostic and intraoperative techniques for musculoskeletal disorders.

  13. BEHSOF: Advanced Non-alcoholic fatty liver dataset with clinical metadata and ultrasound images for Deep learning Models

    • figshare.com
    application/csv
    Updated Aug 3, 2024
    Cite
    Hamed Zamanian; Ahmad Shalbaf; Amir Sadeghi; Mohammadreza Zali; Aneseh Salehnia; Pooneh Dehghan (2024). BEHSOF: Advanced Non-alcoholic fatty liver dataset with clinical metadata and ultrasound images for Deep learning Models [Dataset]. http://doi.org/10.6084/m9.figshare.26389069.v4
    Explore at:
    Available download formats: application/csv
    Dataset updated
    Aug 3, 2024
    Dataset provided by
    figshare
    Authors
    Hamed Zamanian; Ahmad Shalbaf; Amir Sadeghi; Mohammadreza Zali; Aneseh Salehnia; Pooneh Dehghan
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The liver is regarded as one of the essential organs in the body, playing a crucial role in digestion, nutrient absorption, and food processing. This indispensable organ is tasked with cleansing the blood that flows from the digestive tract. Additionally, the liver detoxifies harmful substances and metabolizes various medications. A key function of this organ involves intricate metabolic processes that transform food into energy. However, the liver's heightened sensitivity makes it susceptible to a variety of common ailments, underscoring the need for careful attention to its health.

    Among liver diseases, fatty liver stands out as the most prevalent, characterized by the buildup of fat within liver cells. This condition is particularly common among individuals who are overweight or have abdominal obesity. Specifically, non-alcoholic fatty liver disease (NAFLD) refers to the excessive fat accumulation in liver cells, a condition known as steatosis. Various diagnostic techniques have been developed for this disease, each offering distinct advantages and drawbacks. Ultrasound imaging has gained popularity due to its accessibility, non-invasive nature, and affordability. One of the approaches utilized in this diagnostic process is the employment of deep learning models. Sporadic studies have been conducted to leverage the available data in diagnosing the degree of non-alcoholic fatty liver disease, and given the significance of the issue, this body of research is continually expanding. Developing robust model training and conducting comprehensive analyses necessitates access to a diverse and comprehensive data repository that can be leveraged for training and validating the proposed models.

    To this end, we present the BEHSOF dataset, which comprises a collection of samples gathered with varying degrees of non-alcoholic fatty liver disease. Specifically, this data bank consists of ultrasound images from a population of 113 individuals under study, along with the corresponding labels for the levels of steatosis and fibrosis. In addition to the ultrasound images, the data bank provides clinical information, blood test results, and Fibroscan outcomes for the participants, which serve as reference data for fibrosis assessment. Finally, we showcase the results of two deep learning models as examples for training and testing the introduced data set.

    The ultrasound images have been categorized into two distinct groups based on the diagnostic findings and labeling conventions. Each file within these categories is further distinguished by the location of the medical facility (Taleghani Hospital (TAL) or Behbood Clinic (BEH)), the row number, and the order of grading (steatosis score by expert, steatosis score by CAPscore, fibrosis score by the rate of Elasticity) (TALXXXXX, BEHXXXXX).
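    Given the TALxxxxx / BEHxxxxx naming convention described above, files can be grouped by acquisition site with a short, hedged parsing sketch (the image directory and file extension are assumptions; only the prefix rule comes from the dataset description):

    ```python
    # Sketch: count image files per acquisition site from the TAL/BEH prefix.
    # Directory name and file extension are assumptions.
    import re
    from collections import Counter
    from pathlib import Path

    PATTERN = re.compile(r"^(TAL|BEH)(\d+)", re.IGNORECASE)

    def site_of(path: Path):
        """Return 'TAL' or 'BEH' parsed from the file name, else None."""
        m = PATTERN.match(path.stem)
        return m.group(1).upper() if m else None

    counts = Counter(site_of(p) for p in Path("BEHSOF_images").glob("*.png"))
    print(counts)
    ```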

  14. Dataset of deformed ultrasound images and signals under controlled indentation

    • borealisdata.ca
    • dataone.org
    Updated Jun 3, 2025
    Cite
    Jawad Dahmani; Yvan Petit; Catherine Laporte (2025). Dataset of deformed ultrasound images and signals under controlled indentation [Dataset]. http://doi.org/10.5683/SP3/ASTGWY
    Explore at:
    Croissant (a machine-learning dataset format; learn more at mlcommons.org/croissant)
    Dataset updated
    Jun 3, 2025
    Dataset provided by
    Borealis
    Authors
    Jawad Dahmani; Yvan Petit; Catherine Laporte
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Ultrasound imaging data (RF and scan-converted formats) of custom-made elastography phantoms. The data are robotically-acquired image sequences with controlled and gradually increasing phantom indentation. The phantom consists of a background medium traversed by a cylindrical inclusion. The data (images and RF signals) are contained in the 6 folders Acqui1 → Acqui6, corresponding to the 6 acquired image sequences on the phantom. Each Acqui_i folder contains two subfolders: the RF folder containing RF signals in .mat file format (readable in Matlab), and the folder US_Image containing the images of the sequence in PNG format. Both images and RF signals are numbered, with each index corresponding to an indentation level and a force measured by the force sensor, as outlined in the Excel file (Tabulation 1). Each RFi.mat file comprises 3152 rows representing the signal along the temporal axis, and 256 columns corresponding to the number of A-lines in the image. The Excel file has 5 tabs:

    - The first tab contains, for each of the 6 acquired image sequences, the frame number in the sequence, the corresponding indentation of the probe (in mm), the recorded voltage value on the force sensor (V), and the corresponding calculated force value (N). Thus, each image in the 6 sequences is identified by a frame number, an indentation, and a force value.

    - The second tab provides the acquisition parameters of the ultrasound images (frequency, depth, gain, etc.) performed using the SonixTablet ultrasound system from Ultrasonix (now BK Medical).

    - The third tab contains stress-strain curves, the mean, and the standard deviation of the Young's modulus for both the inclusion and the background of the phantom. The Young's modulus is obtained through compression tests conducted by an electromechanical testing machine (Bose, ElectroForce 3200) on 10 small cylindrical samples taken from the background and the inclusion.

    - The fourth tab contains the geometry and dimensions of the phantom.

    - The fifth tab contains the recipe used to make the gelatin phantom.
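    A minimal sketch for reading one RF frame follows; the RF1.mat file name follows the RFi.mat pattern described above, but the variable name stored inside the .mat file is not documented here, so the first non-metadata array is taken.

    ```python
    # Sketch: load one RF frame (3152 time samples x 256 A-lines per the
    # description) and form a simple log-compressed envelope image.
    from pathlib import Path

    import numpy as np
    from scipy.io import loadmat
    from scipy.signal import hilbert

    rf_file = Path("Acqui1") / "RF" / "RF1.mat"   # assumed file name (RFi.mat pattern)
    contents = loadmat(rf_file)

    name, rf = next((k, v) for k, v in contents.items() if not k.startswith("__"))
    rf = np.asarray(rf, dtype=float)
    print(name, rf.shape)                         # expected (3152, 256)

    envelope = np.abs(hilbert(rf, axis=0))        # envelope along the temporal axis
    bmode = 20 * np.log10(envelope / envelope.max() + 1e-6)
    print("B-mode-style image range (dB):", bmode.min(), bmode.max())
    ```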

  15. In vivo rat brain for Ultrasound Localization Microscopy: raw and beamformed data

    • zenodo.org
    • data.niaid.nih.gov
    bin, json, pdf, zip
    Updated Jul 12, 2024
    Cite
    Chavignon Arthur; Baptiste Heiles; Hingot Vincent; Lopez Pauline; Teston Eliott; Couture Olivier (2024). In vivo rat brain for Ultrasound Localization Microscopy: raw and beamformed data. [Dataset]. http://doi.org/10.5281/zenodo.7883227
    Explore at:
    Available download formats: json, zip, pdf, bin
    Dataset updated
    Jul 12, 2024
    Dataset provided by
    Zenodo, http://zenodo.org/
    Authors
    Chavignon Arthur; Baptiste Heiles; Hingot Vincent; Lopez Pauline; Teston Eliott; Couture Olivier
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Datasets provided for Open Platform for Ultrasound Localization Microscopy: Performance Assessment of Localization Algorithms.

    Abstract:

    Ultrasound Localization Microscopy (ULM) is an ultrasound imaging technique that relies on the acoustic response of sub-wavelength ultrasound scatterers to map the microcirculation with an order of magnitude increase in resolution. Initially demonstrated in vitro, this technique has matured and sees implementation in vivo for vascular imaging of organs, and tumors in both animal models and humans. The performance of the localization algorithm greatly defines the quality of vascular mapping. We compiled and implemented a collection of ultrasound localization algorithms and devised three datasets in silico and in vivo to compare their performance through 18 metrics. We also present two novel algorithms designed to increase speed and performance. By openly providing a complete package to perform ULM with the algorithms, the datasets used, and the metrics, we aim to give researchers a tool to identify the optimal localization algorithm for their usage, benchmark their software and enhance the overall image quality in the field while uncovering its limits.

    This article provides all materials and post-processing scripts and functions.

    Methods:

    200,000 ultrasound images were acquired in vivo on a rat brain (with the skull removed) at 1000 Hz with a 15 MHz linear probe.

    This dataset contains raw radiofrequency data (RF) and beamformed images (IQ) of the brain vascularization with flowing microbubbles (ultrasound contrast agent).

    Article to be cited: Heiles, Chavignon, Hingot, Lopez, Teston and Couture.
    Performance benchmarking of microbubble-localization algorithms for ultrasound localization microscopy, Nature Biomedical Engineering, 2022, (doi.org/10.1038/s41551-021-00824-8).

    Related processing scripts and codes: github.com/AChavignon/PALA

    Related datasets: doi.org/10.5281/zenodo.4343435

    Acknowledgments:

    We thank Cyrille Orset (INSERM UMR-S U1237, Physiopathology and Imaging of Neurological Disorders, GIP Cyceron, BB@C, Caen, France) for animals’ preparation and perfusion of contrast agent and the biomedical imaging platform CYCERON (UMS 3408 Unicaen/CNRS, Caen, France).

  16. Ultrasound Systems Market Growth – Trends & Forecast 2025-2035

    • futuremarketinsights.com
    html, pdf
    Updated Jan 23, 2025
    Cite
    Sabyasachi Ghosh (2025). Ultrasound Systems Market Growth – Trends & Forecast 2025-2035 [Dataset]. https://www.futuremarketinsights.com/reports/ultrasound-systems-market
    Explore at:
    Available download formats: html, pdf
    Dataset updated
    Jan 23, 2025
    Authors
    Sabyasachi Ghosh
    License

    https://www.futuremarketinsights.com/privacy-policy

    Time period covered
    2025 - 2035
    Area covered
    Worldwide
    Description

    The ultrasound systems market is estimated to reach USD 11,260.1 million in 2025. It is estimated that revenue will increase at a CAGR of 5.5% between 2025 and 2035. The market is anticipated to reach USD 19,233.9 million by 2035.

    Key insights:
    Historical Size, 2024: USD 10,673.1 million
    Estimated Size, 2025: USD 11,260.1 million
    Projected Size, 2035: USD 19,233.9 million
    Value-based CAGR (2025 to 2035): 5.5%

    Semi-Annual Industry Outlook

    Value CAGR by half-year period:
    H1 (2024 to 2034): 6.3%
    H2 (2024 to 2034): 6.0%
    H1 (2025 to 2035): 5.5%
    H2 (2025 to 2035): 5.0%

    Country-wise Insights

    Value CAGR (2025 to 2035) by country:
    United States: 3.4%
    Canada: 4.4%
    Germany: 4.5%
    France: 3.8%
    Italy: 4.7%
    UK: 6.6%
    Spain: 4.5%
    China: 5.5%

    Category-wise Insights

    Modality: Cart/Trolley Based Ultrasound Systems; value share (2025): 66.4%
    Application: Radiology; value share (2025): 41.6%
  17. Breast Ultrasound Images Dataset

    • gts.ai
    png
    Updated Jul 18, 2024
    Cite
    GTS (2024). Breast Ultrasound Images Dataset [Dataset]. https://gts.ai/dataset-download/breast-ultrasound-images-dataset/
    Explore at:
    Available download formats: png
    Dataset updated
    Jul 18, 2024
    Dataset provided by
    GLOBOSE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
    Authors
    GTS
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Ultrasound Image, Ground Truth Mask, Benign Classification, Normal Classification, Malignant Classification
    Description

    The Breast Ultrasound Images Dataset includes 780 high-quality PNG images collected in 2018 from 600 female patients aged 25 to 75. Each image is paired with a ground truth mask and categorized into normal, benign, or malignant classes, supporting supervised learning and diagnostic research for breast cancer detection.
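    A hedged image-mask pairing sketch is shown below; the class subfolders and the "_mask" file-name suffix are assumptions about this re-hosted copy and should be verified against the actual folder layout.

    ```python
    # Sketch: pair each ultrasound image with its ground-truth mask.
    # Folder layout and the "_mask" suffix are assumptions, not documented here.
    from pathlib import Path

    from PIL import Image  # pip install pillow

    ROOT = Path("breast_ultrasound_images")   # assumed extraction directory

    for cls in ("normal", "benign", "malignant"):
        for img_path in sorted((ROOT / cls).glob("*.png")):
            if img_path.stem.endswith("_mask"):
                continue
            mask_path = img_path.with_name(img_path.stem + "_mask.png")
            if mask_path.exists():
                img, mask = Image.open(img_path), Image.open(mask_path)
                print(cls, img_path.name, img.size, mask.size)
    ```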

  18. Data from: Ovarian ultrasound dataset

    • kaggle.com
    Updated Sep 3, 2025
    Cite
    Baizid MD Ashadzzaman (2025). Ovarian ultrasound dataset [Dataset]. https://www.kaggle.com/datasets/baizidmdashadzzaman/ovarian-ultrasound-dataset
    Explore at:
    Croissant (a machine-learning dataset format; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 3, 2025
    Dataset provided by
    Kaggle, http://kaggle.com/
    Authors
    Baizid MD Ashadzzaman
    Description

    Dataset

    This dataset was created by Baizid MD Ashadzzaman

    Contents

  19. OPULM PALA

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    bin, pdf, zip
    Updated Jul 19, 2024
    Cite
    Chavignon Arthur; Baptiste Heiles; Hingot Vincent; Lopez Pauline; Eliott Teston; Couture Olivier (2024). OPULM PALA [Dataset]. http://doi.org/10.5281/zenodo.4343435
    Explore at:
    Available download formats: zip, pdf, bin
    Dataset updated
    Jul 19, 2024
    Dataset provided by
    Zenodo, http://zenodo.org/
    Authors
    Chavignon Arthur; Baptiste Heiles; Hingot Vincent; Lopez Pauline; Eliott Teston; Couture Olivier
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Datasets provided for Open Platform for Ultrasound Localization Microscopy: Performance Assessment of Localization Algorithms.

    Abstract:

    Ultrasound Localization Microscopy (ULM) is an ultrasound imaging technique that relies on the acoustic response of sub-wavelength ultrasound scatterers to map the microcirculation with an order of magnitude increase in resolution. Initially demonstrated in vitro, this technique has matured and sees implementation in vivo for vascular imaging of organs, and tumors in both animal models and humans. The performance of the localization algorithm greatly defines the quality of vascular mapping. We compiled and implemented a collection of ultrasound localization algorithms and devised three datasets in silico and in vivo to compare their performance through 18 metrics. We also present two novel algorithms designed to increase speed and performance. By openly providing a complete package to perform ULM with the algorithms, the datasets used, and the metrics, we aim to give researchers a tool to identify the optimal localization algorithm for their usage, benchmark their software and enhance the overall image quality in the field while uncovering its limits.

    This article provides all materials and post-processing scripts and functions.

    Article to be cited: Heiles, Chavignon, Hingot, Lopez, Teston and Couture.
    Performance benchmarking of microbubble-localization algorithms for ultrasound localization microscopy, Nature Biomedical Engineering, 2022, (doi.org/10.1038/s41551-021-00824-8).

    Related processing scripts and codes: github.com/AChavignon/PALA

    Request on data: arthur.chavignon.pro(at)gmail.com

  20. Ire Ultrasound Dataset

    • universe.roboflow.com
    zip
    Updated Nov 24, 2023
    Cite
    segmentation (2023). Ire Ultrasound Dataset [Dataset]. https://universe.roboflow.com/segmentation-ssohl/ire-ultrasound/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 24, 2023
    Dataset authored and provided by
    segmentation
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Area Bounding Boxes
    Description

    IRE Ultrasound

    ## Overview

    IRE Ultrasound is a dataset for object detection tasks - it contains Area annotations for 218 images.

    ## Getting Started

    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.

    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).