9 datasets found
  1. Data from: fastMRI

    • datacatalog.med.nyu.edu
    • opendatalab.com
    Updated Aug 7, 2023
    Cite
    Florian Knoll; Patricia M. Johnson; Daniel K. Sodickson; Michael P. Recht; Yvonne W. Lui (2023). fastMRI [Dataset]. https://datacatalog.med.nyu.edu/dataset/10389
    Explore at:
    Dataset updated
    Aug 7, 2023
    Dataset provided by
    NYU Health Sciences Library
    Authors
    Florian Knoll; Patricia M. Johnson; Daniel K. Sodickson; Michael P. Recht; Yvonne W. Lui
    Description

    This deidentified imaging dataset comprises raw k-space data in several sub-dataset groups. Raw and DICOM data have been deidentified via conversion to the vendor-neutral ISMRMRD format and the RSNA Clinical Trial Processor, respectively. Each DICOM image was also manually inspected for any unexpected protected health information (PHI), with spot checks of both metadata and image content.

    Knee MRI: Data from more than 1,500 fully sampled knee MRIs obtained on 3 and 1.5 Tesla magnets and DICOM images from 10,000 clinical knee MRIs also obtained at 3 or 1.5 Tesla. The raw dataset includes coronal proton density-weighted images with and without fat suppression. The DICOM dataset contains coronal proton density-weighted with and without fat suppression, axial proton density-weighted with fat suppression, sagittal proton density, and sagittal T2-weighted with fat suppression.

    Brain MRI: Data from 6,970 fully sampled brain MRIs obtained on 3 and 1.5 Tesla magnets. The raw dataset includes axial T1-weighted, T2-weighted, and FLAIR images. Some of the T1-weighted acquisitions included administration of a contrast agent.

    Additional information on file structure, the data loader, and transforms is available on GitHub; a minimal loading sketch follows at the end of this entry.

    Prostate MRI: Data obtained on 3 Tesla magnets from 312 male patients referred for clinical prostate MRI exams. The raw dataset includes axial T2-weighted and axial diffusion-weighted images for each of the 312 exams.
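
    The raw files in each group are distributed as HDF5 with ISMRMRD headers, so a minimal inspection sketch in Python is shown below. The filename and the key names ("kspace", "reconstruction_rss", "ismrmrd_header") follow the fastMRI convention documented on GitHub and are assumptions to verify against the official data loader.

    import h5py

    # Minimal sketch: inspect one raw multi-coil file (filename is a placeholder).
    # Key names follow the fastMRI convention; verify against the GitHub documentation.
    with h5py.File("multicoil_train/file1000001.h5", "r") as f:
        print(list(f.keys()))             # e.g. ['ismrmrd_header', 'kspace', 'reconstruction_rss']
        kspace = f["kspace"][()]          # complex k-space, (slices, coils, rows, cols) for multi-coil data
        header = f["ismrmrd_header"][()]  # vendor-neutral ISMRMRD XML header (bytes)
        print(kspace.shape, kspace.dtype)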

  2. FastMRI - Dataset - LDM

    • service.tib.eu
    Updated Dec 2, 2024
    + more versions
    Cite
    (2024). FastMRI - Dataset - LDM [Dataset]. https://service.tib.eu/ldmservice/dataset/fastmri
    Explore at:
    Dataset updated
    Dec 2, 2024
    Description

    Fast and accurate reconstruction of magnetic resonance (MR) images from under-sampled data is important in many clinical applications. In recent years, deep learning-based methods have been shown to produce superior performance on MR image reconstruction. However, these methods require large amounts of data, which are difficult to collect and share owing to the high cost of acquisition and medical data privacy regulations.

  3. fastmri-tiny

    • huggingface.co
    Updated Jun 16, 2025
    Cite
    Armeet Singh Jatyani (2025). fastmri-tiny [Dataset]. https://huggingface.co/datasets/armeet/fastmri-tiny
    Explore at:
    Dataset updated
    Jun 16, 2025
    Authors
    Armeet Singh Jatyani
    Description

    A Unified Model for Compressed Sensing MRI Across Undersampling Patterns
    Armeet Singh Jatyani, Jiayun Wang, Aditi Chandrashekar, Zihui Wu, Miguel Liu-Schiaffini, Bahareh Tolooshams, Anima Anandkumar. Paper at CVPR 2025.

    This is a tiny subset of 230 fastMRI samples, used in the demo for the above paper at CVPR 2025.

    Citation

    If you found our work helpful or used any of our models (UDNO), please cite the CVPR 2025 paper above. See the full description on the dataset page: https://huggingface.co/datasets/armeet/fastmri-tiny.
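
    A minimal download sketch with huggingface_hub is shown below; whether the repository also supports datasets.load_dataset should be checked on the dataset page, so this sketch only fetches the files locally.

    from huggingface_hub import snapshot_download

    # Minimal sketch: download the tiny fastMRI subset from the Hugging Face Hub.
    # The repo id comes from the dataset URL above; inspect the returned directory
    # to see how the samples are stored before writing a loader.
    local_dir = snapshot_download(repo_id="armeet/fastmri-tiny", repo_type="dataset")
    print(local_dir)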

  4. MRI Brain Perturbations

    • zenodo.org
    bin
    Updated May 18, 2025
    Cite
    Borong Zhang; Leonardo Zepeda-NĂșñez (2025). MRI Brain Perturbations [Dataset]. http://doi.org/10.5281/zenodo.14760123
    Explore at:
    Available download formats: bin
    Dataset updated
    May 18, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Borong Zhang; Leonardo Zepeda-NĂșñez
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    We have uploaded MRI brain perturbations for data generation in Back-Projection Diffusion.

    The Brain MRI images used as our perturbations were obtained from the NYU fastMRI Initiative database (fastmri.med.nyu.edu), as described in the works of Florian Knoll et al., "fastMRI: A publicly available raw k-space and DICOM dataset of knee images for accelerated MR image reconstruction using machine learning", 2020, and Jure Zbontar et al., "fastMRI: An open dataset and benchmarks for accelerated MRI", 2019. As such, NYU fastMRI investigators provided the data but did not participate in the analysis or writing of this report. A listing of NYU fastMRI investigators, subject to updates, can be found at fastmri.med.nyu.edu. The primary goal of fastMRI is to test whether machine learning can aid in the reconstruction of medical images.

    We padded, resized, and normalized the perturbations to a native resolution of 240 points. Then, we downsampled the perturbations to resolutions of 60, 80, 120, and 160.

    We make the MRI brain perturbations publicly available. They are stored as HDF5 files with filenames in the format eta-n.h5, where n corresponds to the resolution. The files have the following structure:

    eta-n.h5/
    ├── /eta
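
    A minimal read sketch with h5py is shown below; n = 80 is used as an example resolution, and the array shape under /eta should be checked after loading since it is not stated here.

    import h5py

    # Minimal sketch: read the perturbations stored under /eta for one resolution.
    with h5py.File("eta-80.h5", "r") as f:
        eta = f["eta"][()]
        print(eta.shape, eta.dtype)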

    For a formal description of the dataset, please refer to our preprint:

    Zhang, B., Guerra, M., Li, Q., & Zepeda-NĂșñez, L. (2025). Back-Projection Diffusion: Solving the wideband inverse scattering problem with diffusion models. Computer Methods in Applied Mechanics and Engineering, 443, 118036. https://doi.org/10.1016/j.cma.2025.118036

    For usage instructions for Back-Projection Diffusion, please refer to our GitHub repository.

    If this dataset is useful to your research, please cite our preprint:
    @article{ZHANG2025118036,
    title = {Back-Projection Diffusion: Solving the wideband inverse scattering problem with diffusion models},
    journal = {Computer Methods in Applied Mechanics and Engineering},
    volume = {443},
    pages = {118036},
    year = {2025},
    issn = {0045-7825},
    doi = {10.1016/j.cma.2025.118036},
    url = {https://www.sciencedirect.com/science/article/pii/S0045782525003081},
    author = {Borong Zhang and Martin Guerra and Qin Li and Leonardo Zepeda-NĂșñez},
    keywords = {Machine learning, Inverse scattering, Generative modeling, Wave propagation, Diffusion models}
    }

  5. A New Strategy for Fast MRI-Based Quantification of the Myelin Water...

    • neurovault.org
    nifti
    Updated Jun 30, 2018
    + more versions
    Cite
    (2018). A New Strategy for Fast MRI-Based Quantification of the Myelin Water Fraction: Application to Brain Imaging in Infants: Infant 13wo: vm090373 Fmy [Dataset]. http://identifiers.org/neurovault.image:28761
    Explore at:
    Available download formats: nifti
    Dataset updated
    Jun 30, 2018
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    vm090373_Fmy.nii (old ext: .nii)


    Collection description

    Individual quantitative maps in healthy infants (3-21 weeks old) and young adults: volume fraction of water related to myelin, T1 and T2 relaxation times

    Subject species

    homo sapiens

    Modality

    Other

    Cognitive paradigm (task)

    None / Other

    Map type

    Other

  6. SPIRiT-Net_test_result

    • zenodo.org
    zip
    Updated Feb 17, 2024
    Cite
    Jizhong Duan; Xinmin Ren (2024). SPIRiT-Net_test_result [Dataset]. http://doi.org/10.5281/zenodo.10556450
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 17, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jizhong Duan; Xinmin Ren
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    The data in this repository are the experimental results of the paper "Improved Complex Convolutional Neural Network Based on SPIRiT and Dense Connection for Parallel MRI Reconstruction", and contain the test results for the Coronal-PD knee dataset and the fastMRI brain dataset.

  7. Data from: M4Raw: A multi-contrast, multi-repetition, multi-channel MRI...

    • zenodo.org
    zip
    Updated Aug 2, 2023
    Cite
    Mengye Lyu; Lifeng Mei; Shoujin Huang; Sixing Liu; Yi Li; Kexin Yang; Yilong Liu; Yu Dong; Linzheng Dong; Ed X. Wu (2023). M4Raw: A multi-contrast, multi-repetition, multi-channel MRI k-space dataset for low-field MRI research [Dataset]. http://doi.org/10.5281/zenodo.7523691
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 2, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Mengye Lyu; Lifeng Mei; Shoujin Huang; Sixing Liu; Yi Li; Kexin Yang; Yilong Liu; Yu Dong; Linzheng Dong; Ed X. Wu
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Recently, low-field magnetic resonance imaging (MRI) has gained renewed interest to promote MRI accessibility and affordability worldwide. The presented M4Raw dataset aims to facilitate methodology development and reproducible research in this field. The dataset comprises multi-channel brain k-space data collected from 183 healthy volunteers using a 0.3 Tesla whole-body MRI system, and includes T1-weighted, T2-weighted, and fluid attenuated inversion recovery (FLAIR) images with in-plane resolution of ~1.2 mm and through-plane resolution of 5 mm. Importantly, each contrast contains multiple repetitions, which can be used individually or to form multi-repetition averaged images. After excluding motion-corrupted data, the partitioned training and validation subsets contain 1024 and 240 volumes, respectively. To demonstrate the potential utility of this dataset, we trained deep learning models for image denoising and parallel imaging tasks and compared their performance with traditional reconstruction methods. This M4Raw dataset will be valuable for the development of advanced data-driven methods specifically for low-field MRI. It can also serve as a benchmark dataset for general MRI reconstruction algorithms.

    Imaging protocol
    A total of 183 healthy volunteers were enrolled in the study. The majority of the subjects were college students (aged 18 to 32, mean = 20.1, standard deviation (std) = 1.5; 116 males, 67 females). Axial brain MRI data were obtained from each subject using a clinical 0.3T scanner (Oper-0.3, Ningbo Xingaoyi) equipped with a four-channel head coil. This scanner is a classical open-type, permanent-magnet-based whole-body system. Three common sequences were used: T1w, T2w, and FLAIR, each acquiring 18 slices with a thickness of 5 mm and an in-plane resolution of 0.94 × 1.23 mmÂČ. To facilitate flexible research applications, T1w and T2w data were acquired with three individual repetitions and FLAIR with two repetitions.

    Data processing
    The k-space data from individual repetitions were exported from the scanner console without averaging. The corresponding raw images were in scanner coordinate space and may be off-centered due to patient positioning. To correct this, an off-center distance was estimated along the left-right direction for each subject using the vendor DICOM images, and the k-space data were multiplied by a corresponding linear phase modulation. The k-space matrices were then converted to Hierarchical Data Format Version 5 (H5) format, with imaging parameters stored in the H5 file header in an ISMRMRD-compatible format.

    Subset partition
    The dataset was divided into three subsets: training, validation, and motion-corrupted. After motion estimation, 26 subjects were placed in the motion-corrupted subset, and the remaining data were randomly split into a training subset of 128 subjects (1024 volumes) and a validation subset of 30 subjects (240 volumes).

    Data Records
    The training, validation, and motion-corrupted subsets are separately compressed into three zip files, containing 1024, 240, and 200 H5 files, respectively. Among the 200 files in the motion-corrupted subset, 64 files are placed in the “inter-scan_motion” sub-directory and 136 files in the “intra-scan_motion” sub-directory.
    All the H5 files are named in the format of "study-id_contrast_repetition-id.h5" (e.g., "2022061003_FLAIR01.h5"). In each file, the imaging parameters, multi-channel k-space, and the single-repetition images can be accessed via the dictionary keys of "ismrmrd_header", "kspace", and "reconstruction_rss", respectively. The k-space dimensions are arranged in the order of slice, coil channel, frequency encoding, and phase encoding, following the convention of the fastMRI dataset.
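
    As a worked example of that layout, a minimal Python sketch is shown below: it loads one of the files named above and forms root-sum-of-squares (RSS) coil-combined images from the multi-channel k-space. The centered-FFT convention used here is an assumption, so results should be compared against the stored "reconstruction_rss" reference.

    import h5py
    import numpy as np

    # Minimal sketch: load one M4Raw file (example filename from the text) and
    # combine coils with root-sum-of-squares after an inverse 2D FFT.
    with h5py.File("2022061003_FLAIR01.h5", "r") as f:
        kspace = f["kspace"][()]               # (slice, coil, frequency, phase), complex
        rss_ref = f["reconstruction_rss"][()]  # single-repetition reference images
        header = f["ismrmrd_header"][()]       # imaging parameters (ISMRMRD-compatible)

    coil_imgs = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1)), axes=(-2, -1)),
        axes=(-2, -1),
    )
    rss = np.sqrt((np.abs(coil_imgs) ** 2).sum(axis=1))  # sum over the coil axis
    print(kspace.shape, rss.shape)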

  8. K2S: from undersampled K-space to Automatic Segmentation

    • explore.openaire.eu
    Updated Mar 16, 2022
    Cite
    Upasana U Bharadwaj; Aniket Tolpadi; Francesco Caliva; Misung Han; Emma Bahroos; Pablo Damasceno; Jinhee Lee; Paula Giesler; Thomas M Link; Peder Larson; Sharmila Majumdar; Valentina Pedoia (2022). K2S: from undersampled K-space to Automatic Segmentation [Dataset]. http://doi.org/10.5281/zenodo.6362605
    Explore at:
    Dataset updated
    Mar 16, 2022
    Authors
    Upasana U Bharadwaj; Aniket Tolpadi; Francesco Caliva; Misung Han; Emma Bahroos; Pablo Damasceno; Jinhee Lee; Paula Giesler; Thomas M Link; Peder Larson; Sharmila Majumdar; Valentina Pedoia
    Description

    Magnetic resonance imaging (MRI) is the modality of choice for evaluating knee joint degeneration, but it can be susceptible to long acquisition times, tedious post-processing, and lack of standardization. One of the most compelling applications of deep learning, therefore, is accelerated analysis of knee MRI. In addition to faster MRI acquisition, deep learning has enhanced image post-processing applications such as tissue segmentation. While fast, undersampled MRI acquisition may lack the qualitative visual acuity of fully sampled data, the underlying embedding space may be adequate for some applications. The implications for downstream tasks such as tissue segmentation using convolutional neural networks are not well characterized. Efficient segmentation of key anatomical structures from undersampled data is an open question with clinical relevance, e.g., patient triage. The goal of this challenge, therefore, is to train segmentation models from 8x undersampled knee MRI. We have curated a dataset of high-resolution 3D knee MRI including raw k-space data and post-processing annotations with masks for tissue segmentation. The 8x undersampled, multi-channel k-space data of 300 fat-suppressed 3D FSE Cube sequences, along with segmentation masks of cartilage, meniscus, and bone, will be shared as part of the challenge. Fully sampled images will be shared for training as auxiliary information, but will not be available for inference on the test set. Submissions will be evaluated in an end-to-end fashion, from undersampled k-space data to segmentation, on a test set of 200 fat-suppressed FSE Cube sequences (with the same undersampling pattern as the training data).
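
    Because submissions are scored end to end from undersampled k-space to segmentation, a per-structure overlap metric is the natural evaluation tool. Below is a minimal, illustrative Dice-coefficient sketch; the official challenge metric and any per-tissue weighting should be taken from the challenge rules rather than from this example.

    import numpy as np

    # Minimal sketch: Dice overlap between a predicted and a reference binary mask.
    def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
        pred, ref = pred.astype(bool), ref.astype(bool)
        return float(2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum() + eps))

    print(dice(np.ones((8, 8)), np.ones((8, 8))))  # 1.0 for identical masks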

  9. Fast MRI Breast Screening Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Jul 5, 2025
    Cite
    Growth Market Reports (2025). Fast MRI Breast Screening Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/fast-mri-breast-screening-market
    Explore at:
    Available download formats: csv, pdf, pptx
    Dataset updated
    Jul 5, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Fast MRI Breast Screening Market Outlook



    According to our latest research, the global Fast MRI Breast Screening market size reached USD 1.62 billion in 2024 and is anticipated to expand at a robust CAGR of 9.4% during the forecast period from 2025 to 2033. By the end of 2033, the Fast MRI Breast Screening market is projected to attain a value of approximately USD 3.73 billion. This significant growth trajectory is primarily driven by the increasing prevalence of breast cancer, rising awareness about early detection, and rapid technological advancements in MRI imaging modalities. As per our latest research, the market is also benefitting from a surge in investments in healthcare infrastructure and the growing adoption of non-invasive and radiation-free diagnostic techniques worldwide.




    One of the primary growth factors propelling the Fast MRI Breast Screening market is the escalating incidence of breast cancer globally. According to the World Health Organization, breast cancer remains the most common cancer among women, with over 2.3 million new cases diagnosed annually. This alarming statistic has heightened the need for effective and early detection methods, which in turn is fueling the demand for advanced screening solutions such as Fast MRI. Unlike traditional mammography, Fast MRI offers superior sensitivity and specificity, especially among women with dense breast tissues, which are often challenging to assess using standard techniques. The ability of Fast MRI to deliver rapid, high-resolution images without the use of ionizing radiation makes it an increasingly preferred choice among both patients and clinicians, thereby accelerating its adoption across healthcare facilities worldwide.




    Another critical driver is the continuous technological innovation in MRI hardware and software, leading to reduced scan times and enhanced diagnostic accuracy. Leading manufacturers are investing heavily in research and development to introduce next-generation Fast MRI systems that can complete breast scans in under ten minutes, significantly improving patient comfort and throughput. Furthermore, the integration of artificial intelligence (AI) and machine learning algorithms into MRI software is enabling automated image analysis, facilitating quicker and more accurate interpretation of results. Such advancements not only streamline workflow efficiencies in diagnostic centers and hospitals but also reduce operational costs, making Fast MRI Breast Screening more accessible to a broader population segment. The increasing availability of reimbursement policies for advanced breast screening procedures in developed regions is another factor supporting market expansion.




    Additionally, the growing emphasis on personalized medicine and preventive healthcare is contributing to the widespread adoption of Fast MRI Breast Screening. Healthcare policymakers and insurance providers are recognizing the long-term benefits of early breast cancer detection, which include improved patient outcomes and reduced treatment costs. As a result, several countries have initiated public health campaigns and screening programs that encourage regular breast examinations using advanced imaging modalities. The expanding network of diagnostic centers and specialty clinics, particularly in emerging economies, is also playing a pivotal role in enhancing access to Fast MRI Breast Screening. These trends, combined with rising consumer awareness and the increasing participation of women in regular health check-ups, are expected to sustain the market's upward momentum in the coming years.




    From a regional perspective, North America currently dominates the Fast MRI Breast Screening market, accounting for the largest revenue share in 2024. The region's leadership is attributed to its well-established healthcare infrastructure, high adoption rates of advanced medical technologies, and supportive reimbursement frameworks. Europe follows closely, driven by robust government initiatives and a strong focus on cancer prevention and early detection. Meanwhile, the Asia Pacific region is anticipated to witness the fastest growth during the forecast period, fueled by rising healthcare expenditure, increasing awareness about breast cancer, and rapid expansion of diagnostic facilities. Latin America and the Middle East & Africa are also exhibiting steady growth, albeit from a lower base, as they gradually enhance their healthcare capabilities and improve access to advanced diagnostic solutions.

