THIS RESOURCE IS NO LONGER IN SERVICE, documented May 10, 2017. A pilot effort that developed a centralized, web-based biospecimen locator presenting biospecimens collected and stored at participating Arizona hospitals and biospecimen banks, available for acquisition and use by researchers. Researchers may use the site to browse, search, and request biospecimens for use in qualified studies. Development of the ABL was guided by the Arizona Biospecimen Consortium (ABC), a consortium of hospitals and medical centers in the Phoenix area, and the locator was piloted by this Consortium under the direction of the ABRC. Users may browse by type (cells, fluid, molecular, tissue) or disease. Common data elements selected by the ABC Standards Committee, based on data elements of the National Cancer Institute's (NCI's) Common Biorepository Model (CBM), are displayed; these describe the minimum set of data elements the NCI determined were most important for a researcher to see about a biospecimen. The ABL does not display whether clinical data are available to accompany the biospecimens, but a requester can solicit clinical data in the request. Once a request is approved, the biospecimen provider contacts the requester to discuss the request (and the requester's questions) before finalizing the invoice and shipment. The ABL is open to the public for browsing. To request biospecimens from the ABL, the researcher must submit the required information; shipment of the requested biospecimen(s) is then dependent on scientific and institutional review approval. Account required. Registration is open to everyone.

Documented October 4, 2017. A sub-project of the Cell Centered Database (http://ccdb.ucsd.edu) providing a public repository for animal imaging data sets from MRI and related techniques. The public AIDB website supports browsing, visualizing, and downloading the animal MRI data. The AIDB is a pilot project that serves the current need for public repositories of animal imaging data. The Cell Centered Database (CCDB) is a web-accessible database for high-resolution 2D, 3D, and 4D data from light and electron microscopy; the AIDB data model is adapted from the basic CCDB model, in which microscopic images are combined into 2D, 3D, and 4D reconstructions. Through the prototype AIDB, the CCDB has made available over 40 segmented datasets from high-resolution magnetic resonance imaging of inbred mouse strains. These data were acquired as part of the Mouse BIRN project by Drs. G. Allan Johnson and Robert Williams. More information about these data can be found in Badea et al. (2009) (Genetic dissection of the mouse CNS using magnetic resonance microscopy - PubMed: 19542887)
https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
The COVID-19 pandemic is a global healthcare emergency. Prediction models for COVID-19 imaging are rapidly being developed to support medical decision making in imaging. However, inadequate availability of a diverse annotated dataset has limited the performance and generalizability of existing models.
RICORD was created to provide the first multi-institutional, multi-national, expert-annotated COVID-19 imaging dataset, made freely available to the machine learning community as a research and educational resource for COVID-19 chest imaging. The Radiological Society of North America (RSNA) assembled the RSNA International COVID-19 Open Radiology Database (RICORD), a collection of COVID-related imaging datasets and expert annotations, to support research and education. RICORD data will be incorporated in the Medical Imaging and Data Resource Center (MIDRC), a multi-institutional research data repository funded by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health.
This dataset was created through a collaboration between the RSNA and Society of Thoracic Radiology (STR). Clinical annotation by thoracic radiology subspecialists was performed for all COVID positive chest radiography (CXR) imaging studies using a labeling schema based upon guidelines for reporting classification of COVID-19 findings in CXRs (see Review of Chest Radiograph Findings of COVID-19 Pneumonia and Suggested Reporting Language, Journal of Thoracic Imaging).
The RSNA International COVID-19 Open Annotated Radiology Database (RICORD) consists of 998 chest x-rays from 361 patients at four international sites annotated with diagnostic labels.
Patient Selection: Patients at least 18 years of age with a positive diagnosis of COVID-19.
998 Chest x-ray examinations from 361 patients.
Annotations with labels:
Classification
Typical Appearance
Multifocal bilateral, peripheral opacities, and/or opacities with rounded morphology
Lower lung-predominant distribution (required feature; must be present with either or both of the opacity patterns above)
Indeterminate Appearance
Absence of typical findings AND unilateral, central, or upper lung-predominant distribution of airspace disease
Negative for Pneumonia
No lung opacities
Airspace Disease Grading
Lungs are divided on the frontal chest x-ray into 3 zones per lung (6 zones total). The upper zone extends from the apices to the superior hilum. The mid zone spans between the superior and inferior hilar margins. The lower zone extends from the inferior hilar margins to the costophrenic sulci.
A grade is required whenever the classification is not Negative for Pneumonia; the rule is sketched below.
Mild - Opacities in 1-2 lung zones
Moderate - Opacities in 3-4 lung zones
Severe - Opacities in >4 lung zones
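A minimal Python sketch of the grading rule above; the function name and inputs are illustrative only and are not part of the RICORD schema.

def airspace_grade(opacified_zones: int) -> str:
    """Map the number of opacified lung zones (0-6) to a RICORD-style grade."""
    if opacified_zones == 0:
        return "Negative for pneumonia"  # no lung opacities
    if opacified_zones <= 2:
        return "Mild"                    # opacities in 1-2 lung zones
    if opacified_zones <= 4:
        return "Moderate"                # opacities in 3-4 lung zones
    return "Severe"                      # opacities in >4 lung zones

print(airspace_grade(3))  # -> "Moderate"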
Supporting clinical variables: MRN*, Age, Study Date*, Exam Description, Sex, Study UID*, Image Count, Modality, Testing Result, Specimen Source (* pseudonymous values).
How to use the JSON annotations
More information about how the JSON annotations are organized can be found at https://docs.md.ai/data/json/. Steps 2 & 3 in this example code demonstrate how to load the JSON into a DataFrame. The JSON file can be downloaded via the data access table below; it is not available via MD.ai. This Jupyter Notebook may also be helpful.
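As a rough illustration, the sketch below loads an MD.ai-style JSON export into a pandas DataFrame; the file name and the top-level keys ("datasets", "annotations") are assumptions based on the MD.ai JSON documentation, so adjust them to the actual RICORD export.

import json
import pandas as pd

# Hypothetical file name; download the JSON via the data access table.
with open("ricord_annotations.json") as f:
    export = json.load(f)

# Collect every annotation record from every dataset in the export
# (assumed layout: export["datasets"][i]["annotations"] is a list of dicts).
records = []
for dataset in export.get("datasets", []):
    records.extend(dataset.get("annotations", []))

df = pd.json_normalize(records)
print(df.shape)
print(df.columns.tolist()[:10])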
RICORD is available for non-commercial use (and further enrichment) by the research and education communities which may include development of educational resources for COVID-19, use of RICORD to create AI systems for diagnosis and quantification, benchmarking performance for existing solutions, exploration of distributed/federated learning, further annotation or data augmentation efforts, and evaluation of the examinations for disease entities beyond COVID-19 pneumonia. Deliberate consideration of the detailed annotation schema, demographics, and other included meta-data will be critical when generating cohorts with RICORD, particularly as more public COVID-19 imaging datasets are made available via complementary and parallel efforts. It is important to emphasize that there are limitations to the clinical “ground truth” as the SARS-CoV-2 RT-PCR tests have widely documented limitations and are subject to both false-negative and false-positive results which impact the distribution of the included imaging data, and may have led to an unknown epidemiologic distortion of patients based on the inclusion criteria. These limitations notwithstanding, RICORD has achieved the stated objectives for data complexity, heterogeneity, and high-quality expert annotations as a comprehensive COVID-19 thoracic imaging data resource.
https://www.nist.gov/open/license
This database contains imaging and calibration data for phantoms contained in the NIST/NIBIB Phantom Lending Library (PLL). Description and access to the PLL can be found at https://www.nist.gov/programs-projects/nistnibib-medical-imaging-phantom-lending-library. Public analysis software written in Python can be found at https://github.com/MRIStandards/PhantomViewer. This database contains image sets from different scanners and different sites to be used for comparison and reference purposes. It is not meant to endorse any specific scanner or scan protocol.
https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
The Lung Image Database Consortium image collection (LIDC-IDRI) consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and evaluation of computer-assisted diagnostic (CAD) methods for lung cancer detection and diagnosis. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.
Seven academic centers and eight medical imaging companies collaborated to create this data set which contains 1018 cases. Each subject includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule > or =3 mm," "nodule <3 mm," and "non-nodule > or =3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus.
Note: The TCIA team strongly encourages users to review pylidc and the standardized DICOM representation of the LIDC-IDRI annotations/segmentations (DICOM-LIDC-IDRI-Nodules) before developing custom tools to analyze the XML version.
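As a hedged illustration of the pylidc route, the sketch below follows the library's documented quickstart; it assumes the DICOM data have been downloaded and pylidc has been configured to point at them, and the example patient ID is arbitrary.

import pylidc as pl

# Fetch one scan by patient ID (arbitrary example ID).
scan = pl.query(pl.Scan).filter(pl.Scan.patient_id == "LIDC-IDRI-0001").first()

# Group the four readers' marks that refer to the same physical nodule.
nodules = scan.cluster_annotations()
print(f"{scan.patient_id}: {len(nodules)} distinct nodule(s)")

for i, anns in enumerate(nodules):
    # Each element is a list of Annotation objects, one per reader who marked it.
    diameters = [a.diameter for a in anns]
    print(f"  nodule {i}: {len(anns)} reader(s), "
          f"mean diameter {sum(diameters) / len(diameters):.1f} mm")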
Database of 141 studies which have investigated brain structure (using MRI and CT scans) in patients with bipolar disorder compared to a control group. Ninety-eight studies and 47 brain structures are included in the meta-analysis. The database and meta-analysis are contained in an Excel spreadsheet file which may be freely downloaded from this website.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
A collection of wide-field calcium imaging (WFCI) sleep and wake recordings collected from twelve transgenic mice expressing GCaMP6f in excitatory neurons. Each mouse underwent a three-hour undisturbed WFCI recording session during which wake, REM (rapid eye movement) sleep, and NREM (non-REM) sleep were recorded. Each WFCI recording was manually scored by sleep scoring experts in 10-second epochs as wake, NREM, or REM with the aid of adjunct EEG/EMG. The dataset contains the annotated WFCI recordings, the brain mask, and the Paxinos atlas used for defining the brain regions. The dataset was collected as part of a study evaluating a deep learning-based automated sleep state classification method.
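As a small illustration of working with the 10-second scoring epochs, the sketch below groups frames of a recording epoch-wise; the frame rate, array shapes, and random stand-in data are assumptions for illustration only, not values taken from the dataset documentation.

import numpy as np

fs_hz = 10.0                               # assumed WFCI frame rate (frames/s)
frames_per_epoch = int(fs_hz * 10)         # 10-second scoring epochs

# Stand-in for a short recording: (n_frames, height, width), plus one label per epoch.
frames = np.random.rand(60 * frames_per_epoch, 32, 32)
labels = np.random.choice(["wake", "NREM", "REM"], size=60)

# Reshape so each labelled epoch owns a contiguous block of frames.
epochs = frames.reshape(len(labels), frames_per_epoch, 32, 32)
print(epochs.shape)                        # (60, 100, 32, 32)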
The Province of Ontario Neurodevelopmental Disorders (POND) Network is an Integrated Discovery Program funded by the Ontario Brain Institute that aims to understand the neurobiology of neurodevelopmental disorders and translate the findings into effective new treatments. This controlled data release includes T1-weighted and T2-weighted structural MRI, DTI, MRS, resting-state and task-based fMRI, and MEG imaging data, along with demographic and medical history data and behavioural and cognitive assessments, for 682 children and youth diagnosed with various neurodevelopmental disorders as well as typically developing children and youth.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) is currently one of the most powerful tools for the clinical diagnosis of dementias such as Alzheimer's Disease (AD). Meanwhile, MR imaging, being non-radioactive and having high contrast resolution, is highly accessible in clinical settings. This dataset therefore uses FDG-PET images as the ground truth for evaluating AD, to support the development of methods that predict AD from MR images. The dataset includes an AD group and a control group (healthy group). Assignment to a diagnostic group was made by neurology specialists based on comprehensive judgment of clinically relevant information. Each set of data contains one set of MRI T1 images and one set of FDG-PET images. The image format is DICOM, and all images have been anonymized. To obtain the clinical information and related documentation, please contact the administrator.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
This dataset is at the core of a dementia research project focused on the exploration and diagnosis of dementia using advanced imaging technologies. It integrates data collected through Single-Photon Emission Computed Tomography (SPECT).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
The experimental dataset used for the refined evaluation of human fat
Open Database License (ODbL) v1.0 https://www.opendatacommons.org/licenses/odbl/1.0/
This database provides a collection of myocardial perfusion scintigraphy images in DICOM format with all metadata and segmentations (masks) in NIfTI format. The images were obtained from patients undergoing scintigraphy examinations to investigate cardiac conditions such as ischemia and myocardial infarction. The dataset encompasses a diversity of clinical cases, including various perfusion patterns and underlying cardiac conditions. All images have been properly anonymized, and the age range of the patients is from 20 to 90 years. This database represents a valuable source of information for researchers and healthcare professionals interested in the analysis and diagnosis of cardiac diseases. Moreover, it serves as a foundation for the development and validation of image processing algorithms and artificial intelligence techniques applied to cardiovascular medicine. Available for free on the PhysioNet platform, its aim is to promote collaboration and advance research in nuclear cardiology and cardiovascular medicine, while ensuring the replicability of studies.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
Group average map of FLAIR images in standard MNI space across 1,832 MRiShare subjects.
This collection contains group average maps presented in the associated publication "The MRi-Share database: brain imaging in a cross-sectional cohort of 1,870 university students".
Homo sapiens
Structural MRI
group
None / Other
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
We report on MRi-Share, a multi-modal brain MRI database acquired in a unique sample of 1,870 young healthy adults, aged 18 to 35 years, while undergoing university-level education. MRi-Share contains structural (T1 and FLAIR), diffusion (multispectral), susceptibility-weighted (SWI), and resting-state functional imaging modalities. Here, we describe the contents of these different neuroimaging datasets, the processing pipelines used to derive brain phenotypes, and how quality control was assessed. In addition, we present preliminary results on associations of some of these brain image-derived phenotypes at the whole-brain level with both age and sex, in the subsample of 1,722 individuals aged less than 26 years. We demonstrate that the post-adolescence period is characterized by changes in both structural and microstructural brain phenotypes. Grey matter cortical thickness, surface area, and volume were found to decrease with age, while white matter volume increases. Diffusivity, either radial or axial, was found to robustly decrease with age, whereas fractional anisotropy only slightly increased. As for neurite orientation dispersion and density, both were found to increase with age. The isotropic volume fraction also showed a slight increase with age. These preliminary findings emphasize the complexity of changes in brain structure and function occurring in this critical period at the interface of late maturation and early aging.
The purpose of the SNF Study was to develop the techniques to make the link from biophysical measurements made on the ground to aircraft radiometric measurements and then to scale up to satellite observations. Therefore, satellite image data were acquired for the Superior National Forest study site. These data were selected from all the scenes available from the Landsat 1 through 5 and SPOT platforms. Image data substantially contaminated by cloud cover or of poor radiometric quality were not acquired. Of the Landsat scenes, only one Thematic Mapper (TM) scene was acquired; the remainder were Multispectral Scanner (MSS) images. Some of the acquired image data had cloud cover in portions of the scene or other problems with the data. These problems and other comments about the images are summarized in the data set. This data set contains a listing of the scenes that passed inspection and were acquired and archived by Goddard Space Flight Center. Though these image data are no longer available from either the Goddard Space Flight Center or the ORNL DAAC, this data set has been included in the Superior National Forest data collection in order to document which satellite images were used during the project.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
Introduction
A multishell diffusion MRI dataset collected from three traveling subjects with an identical acquisition setting in ten imaging centers. Both the scanner type and the imaging protocol for anatomical and diffusion imaging were well controlled. This dataset is expected to support individual-level reproducibility studies via multicenter collaborations by providing an open resource for advanced and novel microstructure and tractography quantifications.
Primary acquisition parameters
• T1-weighted images: sequence MP2RAGE; resolution = 1 x 1 x 1.2 mm3
• Diffusion-weighted images: sequence SMS SE EPI; resolution = 1.5 x 1.5 x 1.5 mm3; b-values = 1000, 2000, 3000 s/mm2; directions = 30, 30, 30; non-diffusion images = 6
Data use agreement
Use of this dataset should follow the CC BY-NC-SA (Attribution-NonCommercial-ShareAlike) license: https://creativecommons.org/licenses/by-nc-sa/4.0
Contact
For more information on the data collection and pre-processing procedure, contact Qiqi Tong (tongqq@zju.edu.cn). For other information or further cooperation, contact Dr. Hongjian He (hhezju@zju.edu.cn).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
PDF with metadata for each face's MAT and JPG files, with file name, size, and dimension. Dimension is given in pixel and wavelength units as HxWxλ for MAT files and as HxW for JPG files, in which H, W, and λ correspond to the height, width, and wavelength (from 400 nm to 720 nm), respectively.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
Cancer-Net PCa-Data is an open access benchmark dataset of volumetric correlated diffusion imaging (CDIs) data acquisitions of prostate cancer patients. Cancer-Net PCa-Data is a part of the Cancer-Net open source initiative dedicated to advancement in machine learning and imaging research to aid clinicians in the global fight against cancer.
The volumetric CDIs data acquisitions in the Cancer-Net PCa-Data dataset were generated from a patient cohort of 200 patient cases acquired at Radboud University Medical Centre (Radboudumc) in the Prostate MRI Reference Center in Nijmegen, The Netherlands and made available as part of the SPIE-AAPM-NCI PROSTATEx Challenges. Masks derived from the PROSTATEx_masks repository are also provided which label regions of healthy prostate tissue, clinically significant prostate cancer (csPCa), and clinically insignificant prostate cancer (insPCa).
This dataset was used to investigate the relationship between PCa presence and CDIs hyperintensity.
Cancer-Net PCa-Data is released under a CC BY 4.0 license.
Example T2-weighted images of prostates with CDIs overlaid are shown below.
[Figure: Grid of T2-weighted MRI images of the prostate with CDIs images overlaid.]
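For illustration only, the sketch below alpha-blends a CDIs slice over the matching T2-weighted slice with matplotlib; the file names and the assumption that the volumes are stored as NumPy arrays are hypothetical, so adapt the loading step to however the Cancer-Net PCa-Data volumes are actually packaged.

import numpy as np
import matplotlib.pyplot as plt

t2 = np.load("patient_0001_t2w.npy")       # hypothetical (slices, H, W) volume
cdis = np.load("patient_0001_cdis.npy")    # hypothetical matching CDIs volume

z = t2.shape[0] // 2                       # middle slice for display
plt.imshow(t2[z], cmap="gray")
plt.imshow(cdis[z], cmap="hot", alpha=0.4) # semi-transparent CDIs overlay
plt.axis("off")
plt.show()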
If you find our work useful for your research, please cite:
@article{Wong2022,
author={Alexander Wong and Hayden Gunraj and Vignesh Sivan and Masoom A. Haider},
title={Synthetic correlated diffusion imaging hyperintensity delineates clinically significant prostate cancer},
journal={Scientific Reports},
volume={12},
year={2022},
number={3376},
doi={10.1038/s41598-022-06872-7}
}
and
@article{Gunraj2023,
author={Hayden Gunraj and Chi-en Amy Tai and Alexander Wong},
title={Cancer-Net PCa-Data: An Open-Source Benchmark Dataset for Prostate Cancer Clinical Decision Support using Synthetic Correlated Diffusion Imaging Data},
journal={NeurIPS Workshops},
year={2023}
}
Additionally, SPIE-AAPM-NCI PROSTATEx Challenges, PROSTATEx_masks, and The Cancer Imaging Archive (TCIA) should also be cited:
@misc{Litjens2017,
author={Geert Litjens and Oscar Debats and Jelle Barentsz and Nico Karssemeijer and Henkjan Huisman},
title={ProstateX Challenge data [data set]},
howpublished={The Cancer Imaging Archive},
year={2017},
doi={10.7937/K9TCIA.2017.MURS5CL}
}
@article{Litjens2014,
author={Geert Litjens and Oscar Debats and Jelle Barentsz and Nico Karssemeijer and Henkjan Huisman},
title={Computer-Aided Detection of Prostate Cancer in MRI},
journal={IEEE Transactions on Medical Imaging},
year={2014},
volume={33},
number={5},
pages={1083-1092},
doi={10.1109/TMI.2014.2303821}
}
@article{Cuocolo2021,
author={Renato Cuocolo and Arnaldo Stanzione and Anna Castaldo and Davide Raffaele {De Lucia} and Massimo Imbriaco},
title={Quality control and whole-gland, zonal and lesion annotations for the PROSTATEx challenge public dataset},
journal={European Journal of Radiology},
volume={138},
pages={109647},
year={2021},
doi={10.1016/j.ejrad.2021.109647}
}
@article{Clark2013,
author={Kenneth Clark and Bruce Vendt and Kirk Smith and John Freymann and Justin Kirby and Paul Koppel and Stephen Moore and Stanley Phillips and David Maffitt and Michael Pringle and Lawrence Tarbox and Fred Prior},
title={The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository},
journal={Journal of Digital Imaging},
year={2013},
volume={26},
number={6},
pages={1045-1057},
}
The OPTIMAM Mammography Image Database is a sharable resource with processed and unprocessed mammography images from United Kingdom breast screening centers, with annotated cancers and clinical details.
https://dataintelo.com/privacy-and-policy
According to our latest research, the global synthetic medical imaging data platforms market size reached USD 1.24 billion in 2024, driven by the surging adoption of artificial intelligence and machine learning in healthcare imaging workflows. The market is expected to expand at a robust CAGR of 27.6% from 2025 to 2033, projecting a value of USD 11.84 billion by 2033. This remarkable growth trajectory is primarily fueled by the increasing demand for high-quality, diverse, and annotated medical imaging datasets to enhance diagnostic accuracy, accelerate AI model training, and address data privacy concerns.
A key growth factor for the synthetic medical imaging data platforms market is the escalating integration of AI and deep learning technologies in healthcare diagnostics. Traditional medical imaging datasets are often limited by privacy regulations, patient consent issues, and the sheer volume required for robust machine learning model development. Synthetic data generation platforms are bridging this gap by providing scalable, customizable, and privacy-compliant datasets that mirror real-world imaging scenarios. These platforms enable healthcare organizations, research institutes, and technology vendors to develop, validate, and deploy advanced diagnostic algorithms with greater speed and accuracy, thereby improving patient outcomes and operational efficiency.
Another significant driver is the rising need for data diversity and bias mitigation in medical imaging AI models. Real-world datasets can be skewed by demographic, geographic, and modality-specific biases, potentially leading to suboptimal or inaccurate clinical decisions. Synthetic medical imaging data platforms address this challenge by generating balanced and representative datasets that encompass a wide range of patient profiles, disease states, and imaging modalities. This capability not only enhances the generalizability of AI models but also supports regulatory compliance and ethical AI development, which are becoming increasingly critical in the global healthcare landscape.
The market is further propelled by the growing emphasis on data privacy and security, particularly in light of stringent regulations such as HIPAA and GDPR. Synthetic data eliminates the risk of patient re-identification, enabling healthcare institutions and technology companies to share, collaborate, and innovate without compromising sensitive patient information. This has accelerated the adoption of synthetic medical imaging data platforms among pharmaceutical and biotechnology companies, diagnostic centers, and academic research institutes. As the healthcare industry continues to prioritize digital transformation and AI-driven innovation, the demand for synthetic imaging data solutions is expected to witness sustained and exponential growth.
From a regional perspective, North America currently dominates the synthetic medical imaging data platforms market, accounting for the largest revenue share in 2024. This leadership is attributed to the region’s advanced healthcare infrastructure, robust R&D ecosystem, and proactive regulatory environment supporting digital health innovation. Europe follows closely, benefiting from strong government initiatives and collaborative research networks. Meanwhile, the Asia Pacific region is emerging as a high-growth market, fueled by increasing healthcare digitization, rising investments in AI research, and expanding medical imaging capabilities across China, Japan, and India. These regional dynamics underscore the global potential and strategic importance of synthetic medical imaging data platforms in shaping the future of healthcare.
The synthetic medical imaging data platforms market is segmented by component into software and services, each playing a pivotal role in the ecosystem. The software segment, comprising synthetic data generation engines, data annotation tools, and integration modules, currently holds the dominant market share. These advanced software solutions leverage cutting-edge AI algorithms and deep learning architectures to create hyper-realistic imaging datasets that closely mimic real patient data. The continuous evolution of generative adversarial networks (GANs) and diffusion models has significantly enhanced the fidelity and diversity of synthetic images, making them indispensable for AI model training and validation.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
Hyperspectral images (HxWxλ) containing the spectral reflectance of each image pixel. H and W correspond to the height and the width of the matrix in pixel units, and λ corresponds to the wavelength from 400 nm to 720 nm.
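A minimal loading sketch, assuming the cube is stored as a 3-D array in a MATLAB .mat file; the file name and the way the variable is located are illustrative, so check the metadata PDF for the actual names.

import numpy as np
from scipy.io import loadmat
import matplotlib.pyplot as plt

mat = loadmat("face_001.mat")  # hypothetical file name
# Pick the first 3-D array in the file (skips the loadmat header entries).
cube = next(v for v in mat.values() if isinstance(v, np.ndarray) and v.ndim == 3)

wavelengths = np.linspace(400, 720, cube.shape[2])  # 400-720 nm per the description
h, w = cube.shape[0] // 2, cube.shape[1] // 2       # central pixel

plt.plot(wavelengths, cube[h, w, :])
plt.xlabel("Wavelength (nm)")
plt.ylabel("Reflectance")
plt.show()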