100+ datasets found
  1. 3D Microvascular Image Data and Labels for Machine Learning

    • rdr.ucl.ac.uk
    • datasetcatalog.nlm.nih.gov
    bin
    Updated Apr 30, 2024
    Cite
    Natalie Holroyd; Claire Walsh; Emmeline Brown; Emma Brown; Yuxin Zhang; Carles Bosch Pinol; Simon Walker-Samuel (2024). 3D Microvascular Image Data and Labels for Machine Learning [Dataset]. http://doi.org/10.5522/04/25715604.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Apr 30, 2024
    Dataset provided by
    University College London
    Authors
    Natalie Holroyd; Claire Walsh; Emmeline Brown; Emma Brown; Yuxin Zhang; Carles Bosch Pinol; Simon Walker-Samuel
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    These images and associated binary labels were collected from collaborators across multiple universities to serve as a diverse representation of biomedical images of vessel structures, for use in the training and validation of machine learning tools for vessel segmentation. The dataset contains images from a variety of imaging modalities, at different resolutions, using different sources of contrast and featuring different organs/pathologies. These data were used to train, test and validate a foundation model for 3D vessel segmentation, tUbeNet, which can be found on GitHub. The paper describing the training and validation of the model can be found here.

    Filenames are structured as follows:

    • Data: [Modality]_[species Organ]_[resolution].tif
    • Labels: [Modality]_[species Organ]_[resolution]_labels.tif
    • Sub-volumes of larger datasets: [Modality]_[species Organ]_subvolume[dimensions in pixels].tif

    Manual labelling of blood vessels was carried out using Amira (2020.2, Thermo-Fisher, UK).

    Training data:

    • opticalHREM_murineLiver_2.26x2.26x1.75um.tif: a high-resolution episcopic microscopy (HREM) dataset, acquired in house by staining a healthy mouse liver with Eosin B and imaging with a standard HREM protocol. NB: 25% of this image volume was withheld from training, for use as test data.
    • CT_murineTumour_20x20x20um.tif: X-ray microCT images of a microvascular cast, taken from a subcutaneous mouse model of colorectal cancer (acquired in house). NB: 25% of this image volume was withheld from training, for use as test data.
    • RSOM_murineTumour_20x20um.tif: Raster-Scanning Optoacoustic Mesoscopy (RSOM) data from a subcutaneous tumour model (provided by Emma Brown, Bohndiek Group, University of Cambridge). The image data have been filtered to reduce the background (Brown et al., 2019).
    • OCTA_humanRetina_24x24um.tif: retinal angiography data obtained using Optical Coherence Tomography Angiography (OCT-A) (provided by Dr Ranjan Rajendram, Moorfields Eye Hospital).

    Test data:

    • MRI_porcineLiver_0.9x0.9x5mm.tif: T1-weighted Balanced Turbo Field Echo Magnetic Resonance Imaging (MRI) data from a machine-perfused porcine liver, acquired in house.
    • MFHREM_murineTumourLectin_2.76x2.76x2.61um.tif: a subcutaneous colorectal tumour mouse model, imaged in house using multi-fluorescence HREM with DyLight 647-conjugated lectin staining the vasculature (Walsh et al., 2021). The image data have been processed using the asymmetric deconvolution algorithm described by Walsh et al., 2020. NB: a sub-volume of 480x480x640 voxels was manually labelled (MFHREM_murineTumourLectin_subvolume480x480x640.tif).
    • MFHREM_murineBrainLectin_0.85x0.85x0.86um.tif: an MF-HREM image of the cortex of a mouse brain, stained with DyLight 647-conjugated lectin, acquired in house (Walsh et al., 2021). The image data have been downsampled and processed using the asymmetric deconvolution algorithm described by Walsh et al., 2020. NB: a sub-volume of 1000x1000x99 voxels was manually labelled; this sub-volume is provided at full resolution and without preprocessing (MFHREM_murineBrainLectin_subvol_0.57x0.57x0.86um.tif).
    • 2Photon_murineOlfactoryBulbLectin_0.2x0.46x5.2um.tif: two-photon data of mouse olfactory bulb blood vessels, labelled with sulforhodamine 101, kindly provided by Yuxin Zhang at the Sensory Circuits and Neurotechnology Lab, the Francis Crick Institute (Bosch et al., 2022). NB: a sub-volume of 500x500x79 voxels was manually labelled (2Photon_murineOlfactoryBulbLectin_subvolume500x500x79.tif).
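    Given the naming convention above, a data volume and its label volume can be loaded as NumPy arrays; a minimal sketch, assuming the tifffile package (an assumption; any TIFF reader works):

     import numpy as np
     import tifffile

     # Load one training volume and its voxel-wise binary labels
     data = tifffile.imread("opticalHREM_murineLiver_2.26x2.26x1.75um.tif")
     labels = tifffile.imread("opticalHREM_murineLiver_2.26x2.26x1.75um_labels.tif")

     assert data.shape == labels.shape     # labels align voxel-for-voxel with the data
     print(data.shape, np.unique(labels))  # (Z, Y, X) and the label values, e.g. [0, 1]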
    References:
    Bosch, C., Ackels, T., Pacureanu, A., Zhang, Y., Peddie, C. J., Berning, M., Rzepka, N., Zdora, M. C., Whiteley, I., Storm, M., Bonnin, A., Rau, C., Margrie, T., Collinson, L., & Schaefer, A. T. (2022). Functional and multiscale 3D structural investigation of brain tissue through correlative in vivo physiology, synchrotron microtomography and volume electron microscopy. Nature Communications, 13(1), 1–16. https://doi.org/10.1038/s41467-022-30199-6
    Brown, E., Brunker, J., & Bohndiek, S. E. (2019). Photoacoustic imaging as a tool to probe the tumour microenvironment. Disease Models and Mechanisms, 12(7). https://doi.org/10.1242/DMM.039636
    Walsh, C., Holroyd, N. A., Finnerty, E., Ryan, S. G., Sweeney, P. W., Shipley, R. J., & Walker-Samuel, S. (2021). Multifluorescence High-Resolution Episcopic Microscopy for 3D Imaging of Adult Murine Organs. Advanced Photonics Research, 2(10), 2100110. https://doi.org/10.1002/ADPR.202100110
    Walsh, C., Holroyd, N., Shipley, R., & Walker-Samuel, S. (2020). Asymmetric Point Spread Function Estimation and Deconvolution for Serial-Sectioning Block-Face Imaging. Communications in Computer and Information Science, 1248 CCIS, 235–249. https://doi.org/10.1007/978-3-030-52791-4_19

  2. Voxel Dataset

    • data.ncl.ac.uk
    txt
    Updated Sep 18, 2025
    Cite
    David Towers; Linus Ericsson; Elliot J Crowley; Amir Atapour-Abarghouei; Andrew Stephen McGough (2025). Voxel Dataset [Dataset]. http://doi.org/10.25405/data.ncl.26970223.v2
    Explore at:
    Available download formats: txt
    Dataset updated
    Sep 18, 2025
    Dataset provided by
    Newcastle University
    Authors
    David Towers; Linus Ericsson; Elliot J Crowley; Amir Atapour-Abarghouei; Andrew Stephen McGough
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Voxel dataset is a constructed dataset of 3D shapes designed to present a unique problem for ML and NAS tools. Instead of photos of 3D objects, we exploit ML's ability to work across an arbitrary number of 'colour' channels and use that dimension as a third spatial dimension. This dataset is one of the three hidden datasets used by the 2024 NAS Unseen-Data Challenge. It comprises 70,000 generated 3D images of seven different shapes, created by placing a 20x20x20 grid of points in 3D space, randomly generating shapes (see below), and recording which grid points each shape collided with, yielding the voxel-like volumes in the dataset. The data has a shape of (n, 20, 20, 20), where n is the number of samples in the corresponding set (50,000 for training, 10,000 for validation, and 10,000 for testing). For each class (shape), we generated 10,000 samples, distributed across the three sets. The seven classes and corresponding numerical labels are as follows: Sphere: 0, Cube: 1, Cone: 2, Cylinder: 3, Ellipsoid: 4, Cuboid: 5, Pyramid: 6

    NumPy (.npy) files can be opened with the NumPy Python library's numpy.load() function, passing the path to the file as a parameter. The metadata file contains some basic information about the datasets and can be opened in many text editors, such as vim, nano, Notepad++ or Notepad.
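    For example, a set could be loaded as follows; the file names here are hypothetical, so substitute the actual paths from the download:

     import numpy as np

     # Hypothetical file names -- check the download for the real ones
     x_train = np.load("train_x.npy")   # expected shape: (50000, 20, 20, 20)
     y_train = np.load("train_y.npy")   # integer labels 0-6 (see class list above)

     print(x_train.shape, np.unique(y_train))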

  3. A multi-modal 3D medical image database for ultrasound-guided spinal surgery...

    • zenodo.org
    zip
    Updated May 26, 2021
    + more versions
    Cite
    Nima Masoumi; Clyde Belasso; Yiming Xiao; Hassan Rivaz; Nima Masoumi; Clyde Belasso; Yiming Xiao; Hassan Rivaz (2021). A multi-modal 3D medical image database for ultrasound-guided spinal surgery [Dataset]. http://doi.org/10.5281/zenodo.2483402
    Explore at:
    Available download formats: zip
    Dataset updated
    May 26, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Nima Masoumi; Clyde Belasso; Yiming Xiao; Hassan Rivaz; Nima Masoumi; Clyde Belasso; Yiming Xiao; Hassan Rivaz
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Three different datasets of vertebrae with corresponding computed tomography (CT) and ultrasound (US) images are presented. In the first dataset, lumbar vertebrae from three human patients are presented, and the US images are simulated from their CT images. The second dataset includes corresponding CT, US, and simulated US images of a phantom made from post-mortem canine cervical and thoracic vertebrae. The third dataset consists of CT, US, and simulated US images of a phantom made from post-mortem lamb lumbar vertebrae. For each of the two latter datasets, we also provide 15 landmark pairs of matching structures between the CT and US images, and we performed fiducial registration to acquire a silver standard for assessing image registration.

    The datasets can be used to test CT-US image registration techniques and to validate techniques that simulate US from CT.
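    A standard way to compute such a point-based rigid registration from landmark pairs (not necessarily the exact procedure used by the authors) is the SVD-based least-squares method of Arun et al.; a minimal NumPy sketch:

     import numpy as np

     def fiducial_registration(fixed, moving):
         """Least-squares rigid transform (R, t) mapping moving landmarks onto
         fixed landmarks (Arun/Kabsch SVD method)."""
         fixed = np.asarray(fixed, float)    # shape (N, 3), e.g. N = 15
         moving = np.asarray(moving, float)
         cf, cm = fixed.mean(0), moving.mean(0)
         H = (moving - cm).T @ (fixed - cf)  # 3x3 cross-covariance of centred points
         U, _, Vt = np.linalg.svd(H)
         D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
         R = Vt.T @ D @ U.T
         t = cf - R @ cm
         # Root-mean-square fiducial registration error (FRE)
         fre = np.sqrt(((moving @ R.T + t - fixed) ** 2).sum(axis=1).mean())
         return R, t, fre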

  4. Non-Rigid 3D Human Models

    • research-data.cardiff.ac.uk
    zip
    Updated Sep 18, 2024
    Cite
    David Pickup; Xianfang Sun; Paul Rosin; Ralph Martin; Z Cheng (2024). Non-Rigid 3D Human Models [Dataset]. http://doi.org/10.17035/d.2015.100097
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 18, 2024
    Dataset provided by
    Cardiff University
    Authors
    David Pickup; Xianfang Sun; Paul Rosin; Ralph Martin; Z Cheng
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset contains a collection of human models, representing a variety of individuals in a variety of poses. It includes both a dataset created from scans of real individuals and a dataset of synthetically generated humans. The dataset has been used to benchmark non-rigid 3D shape retrieval algorithms. Results based upon these data are published at http://doi.org/10.1007/s11263-016-0903-8

  5. 3d printing errors

    • kaggle.com
    Updated Feb 20, 2024
    Cite
    NilsHagenBeyer (2024). 3d printing errors [Dataset]. https://www.kaggle.com/datasets/nimbus200/3d-printing-errors
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Feb 20, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    NilsHagenBeyer
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    This dataset contains images of 3D-printed parts, recorded during printing.

    The dataset contains 4 classes and 34 shapes:

    class:  GOOD   STRINGING   UNDEREXTRUSION   SPAGHETTI
    images: 5069   2798        2962             134


    Labels and metadata:

    • image: image file name
    • class: 0: Good, 1: Under-Extrusion, 2: Stringing, 4: Spaghetti
    • layer: layer of completion of the printed part
    • ex_mul: global extrusion multiplier during the print
    • shape: identifier of the printed geometry (1-34)
    • recording: datetime-coded name of the print/recording
    • printbed_color: color of the printbed (black, silver)

    Recording Process

    The dataset was recorded in the context of this work: https://github.com/NilsHagenBeyer/FDM_error_detection

    The images were recorded with an ELP-USB13MAFKV76 digital autofocus camera using the Sony IMX214 sensor chip, which has a resolution of 3264x2448; the images were later downscaled to 256x256 px. All prints were carried out on a customized Creality Ender-3 Pro 3D printer.

    The images were mainly recorded with a black printbed from camera position 1. For testing purposes, the dataset also contains a few images from camera position 2 (oblique camera) with a black printbed (significant motion blur) and from camera position 1 with a silver printbed. The positions can be seen in the image below.

    [Image: experimental setup showing camera positions 1 and 2]

    Folder Structure

     ├── general data
     │   ├── all_images_no_filter.csv    # Full dataset, unfiltered
     │   ├── all_images.csv              # Full dataset, no spaghetti error
     │   └── black_bed_all.csv           # Full dataset, no silver bed
     ├── images
     │   ├── all_images
     │   │   └── ...                     # All images: full dataset + silver bed + oblique camera
     │   ├── test_images_silver265
     │   │   └── ...                     # Silver-bed test images
     │   └── test_images_oblique256
     │       └── ...                     # Oblique-camera test images
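    A sketch of loading one of these index files with pandas, using the column names from the metadata table above (verify against the actual CSV header):

     import pandas as pd

     df = pd.read_csv("general data/all_images_no_filter.csv")
     stringing = df[df["class"] == 2]      # 2: Stringing, per the label table
     print(len(stringing), stringing["image"].head())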
    
  6. 3D MNIST

    • kaggle.com
    zip
    Updated Oct 18, 2019
    Cite
    David de la Iglesia Castro (2019). 3D MNIST [Dataset]. https://www.kaggle.com/daavoo/3d-mnist
    Explore at:
    Available download formats: zip (160,210,751 bytes)
    Dataset updated
    Oct 18, 2019
    Authors
    David de la Iglesia Castro
    Description

    Context

    The aim of this dataset is to provide a simple way to get started with 3D computer vision problems such as 3D shape recognition.

    Accurate 3D point clouds can nowadays be acquired easily and cheaply from a variety of sources.

    However, there is a lack of large 3D datasets (you can find a good one here, based on triangular meshes); it is especially hard to find datasets based on point clouds (which are the raw output of every 3D sensing device).

    This dataset contains 3D point clouds generated from the original images of the MNIST dataset, to give a familiar introduction to 3D to people used to working with 2D datasets (images).

    In the 3D_from_2D notebook you can find the code used to generate the dataset.

    You can use the code in the notebook to generate a bigger 3D dataset from the original.

    Content

    full_dataset_vectors.h5

    The entire dataset, stored as 4096-D vectors obtained from voxelization (x:16, y:16, z:16) of all the 3D point clouds.

    In addition to the original point clouds, it contains randomly rotated copies with noise.

    The full dataset is split into the following arrays:

    • X_train (10000, 4096)
    • y_train (10000)
    • X_test(2000, 4096)
    • y_test (2000)

    Example Python code reading the full dataset (note: these arrays live in full_dataset_vectors.h5, not in the point-cloud files):

     import h5py

     with h5py.File("../input/full_dataset_vectors.h5", "r") as hf:
       X_train = hf["X_train"][:]
       y_train = hf["y_train"][:]
       X_test = hf["X_test"][:]
       y_test = hf["y_test"][:]
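    Since 4096 = 16³, each vector can be reshaped back into its voxel grid for use with 3D convolutions; a one-line example following the code above:

     X_train_3d = X_train.reshape(-1, 16, 16, 16)  # (10000, 16, 16, 16)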

    train_point_clouds.h5 & test_point_clouds.h5

    5,000 training and 1,000 test 3D point clouds stored in HDF5 format. The point clouds have zero mean and a maximum dimension range of 1.

    Each file is divided into HDF5 groups. Each group is named after its corresponding array index in the original MNIST dataset and contains:

    • "points" dataset: x, y, z coordinates of each 3D point in the point cloud.
    • "normals" dataset: nx, ny, nz components of the unit normal associated with each point.
    • "img" dataset: the original MNIST image.
    • "label" attribute: the original MNIST label.

    Example Python code reading two digits and storing some of the group content in tuples:

     import h5py

     with h5py.File("../input/train_point_clouds.h5", "r") as hf:
       a = hf["0"]
       b = hf["1"]
       digit_a = (a["img"][:], a["points"][:], a.attrs["label"])
       digit_b = (b["img"][:], b["points"][:], b.attrs["label"])

    voxelgrid.py

    Simple Python class that generates a grid of voxels from a 3D point cloud. See the kernel for example usage.
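    The idea can be sketched in a few lines of NumPy (a hedged re-implementation of the concept, not the bundled class): count the points falling in each cell of an n×n×n grid.

     import numpy as np

     def voxelize(points, n=16):
         # points: (N, 3) array; returns a flattened n**3 occupancy-count grid
         lo, hi = points.min(axis=0), points.max(axis=0)
         grid, _ = np.histogramdd(points, bins=(n, n, n), range=list(zip(lo, hi)))
         return grid.ravel()  # length 4096 for n = 16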

    plot3D.py

    Module with functions to plot point clouds and voxel grids inside a Jupyter notebook. You have to run this locally, because Kaggle notebooks do not support rendering IFrames; see the GitHub issue here.

    Functions included:

    • array_to_color: converts a 1D array to RGB values, for use as the color kwarg in plot_points()

    • plot_points(xyz, colors=None, size=0.1, axis=False)

    • plot_voxelgrid(v_grid, cmap="Oranges", axis=False)

    Acknowledgements

    Have fun!

  7. Data from: PTI datasets: 3D imaging

    • data.niaid.nih.gov
    • eprints.soton.ac.uk
    Updated Feb 28, 2023
    Cite
    Yeh, Li-Hao; Ivanov, Ivan; Byrum, Janie; Chhun, Bryant; Guo, Syuan-Ming; Foltz, Cameron; Hashemi, Ezzat; Pérez-Bermejo, Juan; Wang, Huijun; Yu, Yanhao; Kazansky, Peter; Conklin, Bruce; Han, May; Mehta, Shalin (2023). PTI datasets: 3D imaging [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5951977
    Explore at:
    Dataset updated
    Feb 28, 2023
    Dataset provided by
    Gladstone Institutes
    Chan Zuckerberg Biohub
    Gladstone Institutes, University of California at San Francisco
    Stanford University
    University of Southampton
    Authors
    Yeh, Li-Hao; Ivanov, Ivan; Byrum, Janie; Chhun, Bryant; Guo, Syuan-Ming; Foltz, Cameron; Hashemi, Ezzat; Pérez-Bermejo, Juan; Wang, Huijun; Yu, Yanhao; Kazansky, Peter; Conklin, Bruce; Han, May; Mehta, Shalin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset includes the following data reported in the PTI paper (link). These datasets can be read and processed using the provided notebooks (link) with the waveorder package (link). The zarr arrays (which live one level below Col_x in the zarr files) can also be visualized with the Python image viewer napari: install the ome-zarr plugin in napari and drag the zarr array into the viewer.
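    A hedged sketch of reading one of these arrays with the zarr package; the group path below is illustrative (not taken from the dataset), so inspect the file's tree first:

     import zarr

     root = zarr.open("Anisotropic_target_small_raw.zarr", mode="r")
     print(root.tree())                # list the groups and arrays in the file
     arr = root["Row_0/Col_0/array"]   # hypothetical path one level below Col_x
     print(arr.shape)                  # e.g. (4, 9, 96, 300, 300)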

    1. Anisotropic_target_small.zip includes two zarr files that save the raw intensity images and processed physical properties of the small anisotropic target (double line-scan, 300-fs pulse duration):
    • Anisotropic_target_small_raw.zarr: array size in the format of (PolChannel, IllumChannel, Z, Y, X) = (4, 9, 96, 300, 300)

    • Anisotropic_target_small_processed.zarr:

    (Pos0 - Stitched_f_tensor) array size in the format of (T, C, Z, Y, X) = (1, 9, 96, 300, 300)

    (Pos1 - Stitched_physical) array size in the format of (T, C, Z, Y, X) = (1, 5, 96, 300, 300)

    2. Anisotropic_target_raw.zip includes the raw intensity images of another anisotropic target (single line-scan, 500-fs pulse duration):
    • data: 9 x 96 (pattern x z-slices) raw intensity images (TIFF) of the target with size of (2048, 2448) -> 4 channels of (1024, 1224)

    • bg: - data: 9 (pattern) raw intensity images (TIFF) of the background with size of (2048, 2448) -> 4 channels of (1024, 1224)

    • cali_images.pckl: pickle file that contains calibration curves of the polarization channels for this dataset

    3. Anisotropic_target_processed.zip includes two zarr files that save the processed scattering potential tensor components and the processed physical properties of the anisotropic target (single line-scan, 500-fs pulse duration):
    • uPTI_stitched.zarr: (Stitched_f_tensor) array size in the format of (T, C, Z, Y, X) = (1, 9, 96, 1024, 1224)

    • uPTI_physical.zarr: (Stitched_physical) array size in the format of (T, C, Z, Y, X) = (1, 5, 96, 700, 700) (cropping the star target region)

    4. Mouse_brain_aco_raw.zip includes the raw intensity images of the mouse brain section at the aco region:
    • data: 9 x 96 (pattern x z-slices) raw intensity images (TIFF) of the mouse brain section with size of (2048, 2448) -> 4 channels of (1024, 1224)

    • bg: - data: 9 (pattern) raw intensity images (TIFF) of the background with size of (2048, 2448) -> 4 channels of (1024, 1224)

    • cali_images.pckl: pickle file that contains calibration curves of the polarization channels for this dataset

    5. Mouse_brain_aco_processed.zip includes two zarr files that save the processed scattering potential tensor components and the processed physical properties of the mouse brain section at the aco region:
    • uPTI_stitched.zarr: (Stitched_f_tensor) array size in the format of (T, C, Z, Y, X) = (1, 9, 96, 1024, 1224)

    • uPTI_physical.zarr: (Stitched_physical) array size in the format of (T, C, Z, Y, X) = (1, 5, 96, 1024, 1224)

    6. Cardiomyocytes_(condition)_raw.zip includes two zarr files that save the raw PTI intensity images and the deconvolved fluorescence images of the cardiomyocytes with the specified (condition):
    • Cardiomyocytes_(condition)_raw.zarr:

    (Pos0) raw intensity images with the array size in the format of (PolChannel, IllumChannel, Z, Y, X) = (4, 9, 32, 1024, 1224)

    (Pos1) background intensity images with the array size in the format of (PolChannel, IllumChannel, Z, Y, X) = (4, 9, 1, 1024, 1224)

    • Cardiomyocytes_(condition)_fluor_decon.zarr: deconvolved fluorescence images with the array size in the format of (T, C, Z, Y, X) = (1, 3, 32, 1024, 1224)
    7. Cardiomyocytes_(condition)_processed.zip includes two zarr files that save the processed scattering potential tensor components and the processed physical properties of the cardiomyocytes with the specified (condition):
    • uPTI_stitched.zarr: (Stitched_f_tensor) array size in the format of (T, C, Z, Y, X) = (1, 9, 32, 1024, 1224)

    • uPTI_physical.zarr: (Stitched_physical) array size in the format of (T, C, Z, Y, X) = (1, 5, 32, 1024, 1224)

    8. cardiac_tissue_H_and_E_processed.zip and Human_uterus_section_H_and_E_raw.zip include the raw PTI intensity and H&E images of the cardiac tissue and human uterus section:
    • data: 10 x 40 (pattern x z-slices) raw intensity images (TIFF) of the target with size of (2048, 2448) -> 4 channels of (1024, 1224), the last channel is for images acquired with LCD turned off (the light leakage needed to be subtracted from the data)

    • bg: - data: 10 (pattern) raw intensity images (TIFF) of the background with size of (2048, 2448) -> 4 channels of (1024, 1224)

    • cali_images.pckl: pickle file that contains calibration curves of the polarization channels for this dataset

    • fluor: 3 x 40 (RGB x z-slices) raw H&E intensity images (TIFF) of the sample with size of (2048, 2448)

    • fluor_bg: 3 (RGB) raw H&E intensity images (TIFF) of the background with size of (2048, 2448)

    9. cardiac_tissue_H_and_E_processed.zip and Human_uterus_section_H_and_E_processed.zip include three zarr files that save the processed scattering potential tensor components, the processed physical properties, and the white-balanced H&E intensities of the cardiac tissue and human uterus section:
    • uPTI_stitched.zarr: (Stitched_f_tensor) array size in the format of (T, C, Z, Y, X) = (1, 9, 40, 1024, 1224)

    • uPTI_physical.zarr: (Stitched_physical) array size in the format of (T, C, Z, Y, X) = (1, 5, 40, 1024, 1224)

    • H_and_E.zarr: (H_and_E) array size in the format of (T, C, Z, Y, X) = (1, 3, 40, 1024, 1224)

  8. SPE3R: Synthetic Dataset for Satellite Pose Estimation and 3D Reconstruction...

    • purl.stanford.edu
    Updated Jan 4, 2024
    Cite
    Tae Ha Park; Simone D'Amico (2024). SPE3R: Synthetic Dataset for Satellite Pose Estimation and 3D Reconstruction [Dataset]. http://doi.org/10.25740/pk719hm4806
    Explore at:
    Dataset updated
    Jan 4, 2024
    Authors
    Tae Ha Park; Simone D'Amico
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    This repository contains the Satellite Pose Estimation and 3D Reconstruction (SPE3R) dataset, comprising 64 unique spacecraft 3D models selectively acquired from the NASA 3D Resources and the ESA Science Satellite Fleet. Each model is normalized and made watertight, and each is accompanied by 1,000 images, binary masks and corresponding pose labels to support simultaneous 3D structure characterization and pose estimation. The images and binary masks are rendered using a custom high-fidelity synthetic scene constructed in Unreal Engine. The dataset is divided into training, validation and test sets, such that the validation set is used to evaluate an algorithm's generalization to unseen images of known targets (i.e., seen during training), whereas the test set helps evaluate it on images of unknown targets (i.e., unseen during training).

  9. 3d Dataset

    • universe.roboflow.com
    zip
    Updated Sep 30, 2024
    + more versions
    Cite
    ILLU (2024). 3d Dataset [Dataset]. https://universe.roboflow.com/illu/3d-9cvsk/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 30, 2024
    Dataset authored and provided by
    ILLU
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    3d
    Description

    3d

    ## Overview
    
    3d is a dataset for classification tasks - it contains 3d annotations for 2,695 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  10. Ant Laboratory Environment Snapshot Dataset and 3D model

    • sussex.figshare.com
    txt
    Updated Jun 3, 2025
    Cite
    Oluwaseyi Oladipupo Jesusanmi; Dexter Shepherd; Amany Said Amin; Nay Newman; Alejandra Carriero (2025). Ant Laboratory Environment Snapshot Dataset and 3D model [Dataset]. http://doi.org/10.25377/sussex.29109845.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Jun 3, 2025
    Dataset provided by
    University of Sussex
    Authors
    Oluwaseyi Oladipupo Jesusanmi; Dexter Shepherd; Amany Said Amin; Nay Newman; Alejandra Carriero
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview: This dataset includes a set of panoramic images taken from a 3D reconstruction of an ant laboratory environment, and the 3D environment itself. The dataset contains 1,485 points, evenly spaced in a grid between the explorable bounds of the arena. The coordinates where each image was taken were recorded. Since all the images are panoramic, you can rotate the images to obtain a view in any direction at any point in the grid. A look-up table named "full_grid_views_meta_data.csv" provides the names of all the image files with their corresponding coordinates. The simulation software used was Isaac Sim, and we include the USD file containing the 3D model of the reconstructed laboratory environment from which all the views were taken. The lab that was reconstructed is the Ant Navigation behaviour lab at the University of Sussex.

    Media and plots folder:

    • "top down grid view.png": an image of the arena from the top down, useful as a base for plotting.
    • "top down grid view plot.png": the grid plotted on top of a top-down image of the arena.
    • "pan cam animation.mp4": a fly-through video of the virtual arena.
    • "plotting_grid_script.py": Python file for making the grid plot shown in "top down grid view.png".

    Metadata file explanation: Measurements are in metres, as Isaac Sim uses. Image files are stored in the zipped folder. 0 degrees is "north", looking towards the white arena entrance on the platform where the images were taken.

    • Route name: name of the dataset.
    • img_name: name of the image file.
    • x_m: the x coordinate of where the picture was taken, in metres.
    • y_m: the y coordinate of where the picture was taken, in metres.
    • z_m: the z coordinate of where the picture was taken, in metres (the same for all images in this dataset).
    • Headings: heading perspective from which the image was taken (the same for all images in this dataset; because the images are panoramas, headings can be changed by rotating the images).

    USD scene files: To use the 3D scene yourself, "Collected_ant_view_scene\ant_view_scene.usd" contains the 3D model of the ant arena, with some virtual cameras for different simulated views of the scene. You can open and interact with the scene via Isaac Sim or other Nvidia Omniverse software. To collect your own sets of 3D views, use and adapt the replicator script "full_arena_grid_infer_views.py" in the script editor. This script creates a set of coordinates and cycles through them, taking an image at each prescribed point. The 3D models alone, without any Isaac Sim layers, can be found at "Collected_ant_view_scene\old nucleus clone\photogram\3DModel.usd"; this can be edited in 3D software of your choice.
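    As a sketch, the look-up table can be used to find the panorama nearest to a query position (column names as documented above):

     import numpy as np
     import pandas as pd

     meta = pd.read_csv("full_grid_views_meta_data.csv")
     qx, qy = 0.5, -0.2                       # query position in metres (illustrative)
     d = np.hypot(meta["x_m"] - qx, meta["y_m"] - qy)
     print(meta.loc[d.idxmin(), "img_name"])  # image taken closest to the query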

  11. Lidar 3d Captured Image From Drone Perspective Dataset

    • universe.roboflow.com
    zip
    Updated Oct 30, 2024
    Cite
    Team 6 Lidar labeling (2024). Lidar 3d Captured Image From Drone Perspective Dataset [Dataset]. https://universe.roboflow.com/team-6-lidar-labeling/lidar-3d-captured-image-dataset-from-drone-perspective
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 30, 2024
    Dataset authored and provided by
    Team 6 Lidar labeling
    Variables measured
    Person Bounding Boxes
    Description

    LiDAR 3D captured image dataset from a drone perspective; only persons are labeled.

  12. 3d Est Dataset

    • universe.roboflow.com
    zip
    Updated Feb 4, 2025
    Cite
    BioEcoSys (2025). 3d Est Dataset [Dataset]. https://universe.roboflow.com/bioecosys/3d-est
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 4, 2025
    Dataset authored and provided by
    BioEcoSys
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Good Bad
    Description

    3D EST

    ## Overview
    
    3D EST is a dataset for classification tasks - it contains Good Bad annotations for 750 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  13. 3D dataset.zip

    • figshare.com
    • datasetcatalog.nlm.nih.gov
    zip
    Updated Feb 11, 2019
    Cite
    Jeehae Park (2019). 3D dataset.zip [Dataset]. http://doi.org/10.6084/m9.figshare.6981620.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 11, 2019
    Dataset provided by
    figshare
    Authors
    Jeehae Park
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Gene expression data measured by in situ hybridization and cellular resolution 3D imaging.

  14. Mermaid Underwater Dataset

    • seanoe.org
    bin, image/*, pdf +1
    Updated 2023
    Cite
    Loïca Avanthey; Laurent Beaudoin (2023). Mermaid Underwater Dataset [Dataset]. http://doi.org/10.17882/97987
    Explore at:
    Available download formats: pdf, image/*, bin, xml
    Dataset updated
    2023
    Dataset provided by
    SEANOE
    Authors
    Loïca Avanthey; Laurent Beaudoin
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Time period covered
    Mar 31, 2022
    Area covered
    Description

    The Mermaid underwater dataset (sr202204_ldm-s) is a set of underwater images acquired at approximately 20 m depth at the La Sirène site (Lion-de-Mer) in Saint-Raphaël (France) during SubMeeting 2022, an underwater robotics workshop (https://submeeting2022.univ-tln.fr/). A micro geodesic network was established at the acquisition site to serve as ground truth. The covered area is approximately 150 m², with a sub-millimetre GSD. It is composed of a statue of a mermaid, a sandy plain with stones, and a rocky area; sea life is distributed across these different spaces. Data were acquired from a single camera by divers with natural lighting and are provided without pre-processing. This dataset can, among other things, be used for work on underwater 3D reconstruction or underwater visual navigation.

    The dataset is composed of the following data:

    • a PDF file gathering information on the micro geodesic network (acquisition methods, data processing method, GCPs, measurements and diagram) [sr202204_ldm-s_d00_groundtruth_report.pdf]
    • an STL file representing the micro geodesic network at a relative scale [sr202204_ldm-s_d00_groundtruth_network.stl]
    • a folder containing all the images [sr202204_ldm-s_d01]
    • a PDF file gathering information on the image data (acquisition method, data format, sensor information including calibration values, overview of the area under different views with mosaic, 3D clouds and textured mesh created from the images, overview of the trajectory) [sr202204_ldm-s_d01_readme.pdf]
    • an XML file gathering the pose of each image estimated from a multi-view bundle adjustment [sr202204_ldm-s_c01_camera_poses]

  15. 3D Endoanal Ultrasound Image Dataset

    • kaggle.com
    zip
    Updated Jul 5, 2025
    Cite
    Orvile (2025). 3D Endoanal Ultrasound Image Dataset [Dataset]. https://www.kaggle.com/datasets/orvile/3d-endoanal-ultrasound-image-dataset
    Explore at:
    Available download formats: zip (31,562,884 bytes)
    Dataset updated
    Jul 5, 2025
    Authors
    Orvile
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset

    This dataset was created by Orvile

    Released under Attribution 4.0 International (CC BY 4.0)


  16. 3D quantification of vascular-like structures in z-stack confocal images:...

    • data.mendeley.com
    Updated Oct 28, 2020
    Cite
    Laura Bray (2020). 3D quantification of vascular-like structures in z-stack confocal images: Supplementary Material [Dataset]. http://doi.org/10.17632/btrrwrmt7z.1
    Explore at:
    Dataset updated
    Oct 28, 2020
    Authors
    Laura Bray
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is an example dataset provided as part of the Supplementary Material for our manuscript "3D quantification of vascular-like structures in z-stack confocal images" in STAR Protocols. The dataset provides an example raw confocal image stack, demonstrates the data visualisation at major steps throughout the protocol, and includes the output received from WinFiber3D.

  17. Stanford 2D-3D-Semantics Dataset (2D-3D-S)

    • redivis.com
    application/jsonl +7
    Updated Jun 28, 2024
    Cite
    Stanford Doerr School of Sustainability Data Repository (2024). Stanford 2D-3D-Semantics Dataset (2D-3D-S) [Dataset]. http://doi.org/10.57761/gmhc-wx10
    Explore at:
    Available download formats: arrow, spss, avro, stata, parquet, sas, csv, application/jsonl
    Dataset updated
    Jun 28, 2024
    Dataset provided by
    Redivis Inc.
    Authors
    Stanford Doerr School of Sustainability Data Repository
    Time period covered
    Jun 27, 2024
    Description

    Abstract

    2D-3D-S comprises mutually registered 2D, 2.5D and 3D modalities with instance-level semantic and geometric annotations, collected from six large-scale indoor areas (see Methodology below).

    Methodology

    The 2D-3D-S dataset provides a variety of mutually registered modalities from 2D, 2.5D and 3D domains, with instance-level semantic and geometric annotations. It covers over 6,000 m2 and contains over 70,000 RGB images, along with the corresponding depths, surface normals, semantic annotations, global XYZ images (all in forms of both regular and 360° equirectangular images) as well as camera information. It also includes registered raw and semantically annotated 3D meshes and point clouds. In addition, the dataset contains the raw RGB and Depth imagery along with the corresponding camera information per scan location. The dataset enables development of joint and cross-modal learning models and potentially unsupervised approaches utilizing the regularities present in large-scale indoor spaces.

    In more detail, the dataset is collected in 6 large-scale indoor areas that originate from 3 different buildings of mainly educational and office use. For each area, all modalities are registered in the same reference system, yielding pixel-to-pixel correspondences among them. In a nutshell, the presented dataset contains a total of 70,496 regular RGB and 1,413 equirectangular RGB images, along with their corresponding depths, surface normals, semantic annotations, global XYZ OpenEXR format and camera metadata. It also contains the raw sensor data, which comprise 18 HDR RGB and depth images (6 looking forward, 6 towards the top, 6 towards the bottom) along with the corresponding camera metadata per each of the 1,413 scan locations, yielding a total of 25,434 RGBD raw images. In addition, we provide whole-building 3D reconstructions as textured meshes, as well as the corresponding 3D semantic meshes. It also includes the colored 3D point cloud data of these areas, with a total of 695,878,620 points, which were previously presented in the Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS).

    [Images: 2D3DS_pano.png, 3Dmodal.png, equirect.png]

  18. Pix3D: Dataset and methods for single-image 3D shape modeling - Dataset -...

    • service.tib.eu
    Updated Dec 2, 2024
    Cite
    (2024). Pix3D: Dataset and methods for single-image 3D shape modeling - Dataset - LDM [Dataset]. https://service.tib.eu/ldmservice/dataset/pix3d--dataset-and-methods-for-single-image-3d-shape-modeling
    Explore at:
    Dataset updated
    Dec 2, 2024
    Description

    The Pix3D dataset is a dataset of pairs of natural images and CAD models.

  19. Dataset for the thesis '3D image-based modelling of collagenous soft...

    • eprints.soton.ac.uk
    Updated Mar 15, 2025
    Cite
    Li, Jia; Limbert, Georges (2025). Dataset for the thesis '3D image-based modelling of collagenous soft tissues' [Dataset]. http://doi.org/10.5258/SOTON/D3442
    Explore at:
    Dataset updated
    Mar 15, 2025
    Dataset provided by
    University of Southampton
    Authors
    Li, Jia; Limbert, Georges
    Description

    This dataset contains: a zip file including several txt files that store the extracted nominal stress and Green-Lagrange strain data for Models 1ABC subjected to uniaxial tensile tests. The maximum principal logarithmic strain and maximum principal stress data for Models 1A and 1C were also written into two txt files, respectively. These data were extracted from .ODB files using a Python script.

  20. Dataset with results of "Joint 2D to 3D image registration workflow for...

    • data-staging.niaid.nih.gov
    • data.niaid.nih.gov
    Updated Jan 25, 2024
    Cite
    Dirk Elias Schut; Rachael Maree Wood; Anna Katharina Trull; Rob Schouten; Robert van Liere; Tristan van Leeuwen; Kees Joost Batenburg (2024). Dataset with results of "Joint 2D to 3D image registration workflow for comparing multiple slice photographs and CT scans of apple fruit with internal disorders" [Dataset]. https://data-staging.niaid.nih.gov/resources?id=zenodo_8275792
    Explore at:
    Dataset updated
    Jan 25, 2024
    Dataset provided by
    Leiden Institute of Advanced Computer Science (LIACS)
    GREEFA
    Wageningen University and Research
    CWI
    Wageningen Food and Biobased Research
    Authors
    Dirk Elias Schut; Rachael Maree Wood; Anna Katharina Trull; Rob Schouten; Robert van Liere; Tristan van Leeuwen; Kees Joost Batenburg
    Description

    Summary

    This dataset contains all results from the paper "Joint 2D to 3D image registration workflow for comparing multiple slice photographs and CT scans of apple fruit with internal disorders". Most notably, it contains the corresponding CT slices for slice photographs of 1347 'Kanzi' apples. It also contains the data behind the results section, the metadata required to run the registration code, and segmentation masks of the apple slice photographs. The "raw" data used to produce these results can be found in another Zenodo dataset: https://zenodo.org/record/8167285.

    Description

    registered ct photo side-by-side view.zip is the easiest way to explore the registered CT photo image pairs. For every apple slice it contains a .png image consisting of the slice photo, registered CT slice and a combined view (photo=green, CT=purple) side-by-side. The resolution was reduced to reduce the file size.

    registered ct slices.zip contains the full resolution CT slices as .tiff files. The matching slice photos can be found in slice_photos_crop.zip in https://zenodo.org/record/8167285.
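    A hedged sketch of recreating the combined view described above (photo = green, CT = purple) for one registered pair; the file names are illustrative:

     import numpy as np
     import imageio.v3 as iio

     photo = iio.imread("slice_photo.png").astype(float)        # illustrative name
     ct = iio.imread("registered_ct_slice.tiff").astype(float)  # illustrative name
     if photo.ndim == 3:
         photo = photo.mean(axis=2)  # collapse RGB photo to greyscale
     norm = lambda a: (a - a.min()) / (a.max() - a.min() + 1e-9)
     # Green channel from the photo, red + blue (purple) from the CT slice
     overlay = np.stack([norm(ct), norm(photo), norm(ct)], axis=-1)
     iio.imwrite("combined.png", (overlay * 255).astype(np.uint8))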

    photo metadata.zip contains all metadata files required to run the code on https://github.com/D1rk123/apple_photo_ct_workflow.

    results.zip contains the IPCED annotations and per apple metrics that were used to calculate all the average metrics and tables in the results section of the paper.

    subset experiment registered annotation slice.zip contains the full resolution CT slices of the annotation slice in the subset experiment as .tiff files.

    segmentation masks.zip contains slice photo segmentation masks as .png images. There are subfolders for the training set, the test set and the masks used for the workflow in the paper.

    Research group: This dataset was produced by the Computational Imaging group at Centrum Wiskunde & Informatica (CI-CWI) in Amsterdam, The Netherlands: https://www.cwi.nl/research/groups/computational-imaging

    Contact details: dirk [dot] schut [at] cwi [dot] nl

    Acknowledgments: This work was funded by the Dutch Research Council (NWO) through the UTOPIA project (ENWSS.2018.003).
