License: Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/
These images and associated binary labels were collected from collaborators across multiple universities to serve as a diverse representation of biomedical images of vessel structures, for use in training and validating machine learning tools for vessel segmentation. The dataset contains images from a variety of imaging modalities, at different resolutions, using different sources of contrast and featuring different organs/pathologies. This data was used to train, test and validate a foundational model for 3D vessel segmentation, tUbeNet, which can be found on GitHub. The paper describing the training and validation of the model can be found here.
Filenames are structured as follows:
Data - [Modality]_[species Organ]_[resolution].tif
Labels - [Modality]_[species Organ]_[resolution]_labels.tif
Sub-volumes of larger datasets - [Modality]_[species Organ]_subvolume[dimensions in pixels].tif
Manual labelling of blood vessels was carried out using Amira (2020.2, Thermo-Fisher, UK).
Training data:
opticalHREM_murineLiver_2.26x2.26x1.75um.tif: A high resolution episcopic microscopy (HREM) dataset, acquired in house by staining a healthy mouse liver with Eosin B and imaging it using a standard HREM protocol. NB: 25% of this image volume was withheld from training, for use as test data.
CT_murineTumour_20x20x20um.tif: X-ray microCT images of a microvascular cast, taken from a subcutaneous mouse model of colorectal cancer (acquired in house). NB: 25% of this image volume was withheld from training, for use as test data.
RSOM_murineTumour_20x20um.tif: Raster-Scanning Optoacoustic Mesoscopy (RSOM) data from a subcutaneous tumour model (provided by Emma Brown, Bohndiek Group, University of Cambridge). The image data has undergone filtering to reduce the background (Brown et al., 2019).
OCTA_humanRetina_24x24um.tif: Retinal angiography data obtained using Optical Coherence Tomography Angiography (OCT-A) (provided by Dr Ranjan Rajendram, Moorfields Eye Hospital).
Test data:
MRI_porcineLiver_0.9x0.9x5mm.tif: T1-weighted Balanced Turbo Field Echo Magnetic Resonance Imaging (MRI) data from a machine-perfused porcine liver, acquired in house.
MFHREM_murineTumourLectin_2.76x2.76x2.61um.tif: A subcutaneous colorectal tumour mouse model imaged in house using multi-fluorescence HREM, with DyLight 647 conjugated lectin staining the vasculature (Walsh et al., 2021). The image data has been processed using an asymmetric deconvolution algorithm described by Walsh et al., 2020. NB: A sub-volume of 480x480x640 voxels was manually labelled (MFHREM_murineTumourLectin_subvolume480x480x640.tif).
MFHREM_murineBrainLectin_0.85x0.85x0.86um.tif: An MF-HREM image of the cortex of a mouse brain, stained with DyLight 647 conjugated lectin, acquired in house (Walsh et al., 2021). The image data has been downsampled and processed using an asymmetric deconvolution algorithm described by Walsh et al., 2020. NB: A sub-volume of 1000x1000x99 voxels was manually labelled. This sub-volume is provided at full resolution and without preprocessing (MFHREM_murineBrainLectin_subvol_0.57x0.57x0.86um.tif).
2Photon_murineOlfactoryBulbLectin_0.2x0.46x5.2um.tif: Two-photon data of mouse olfactory bulb blood vessels, labelled with sulforhodamine 101, kindly provided by Yuxin Zhang at the Sensory Circuits and Neurotechnology Lab, the Francis Crick Institute (Bosch et al., 2022). NB: A sub-volume of 500x500x79 voxels was manually labelled (2Photon_murineOlfactoryBulbLectin_subvolume500x500x79.tif).
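For orientation, a minimal sketch of loading one image volume and its paired label volume in Python, assuming the tifffile package (any multi-page TIFF reader would work equally well):

import numpy as np
import tifffile

# Load a grey-scale image volume and its binary vessel labels (shape: Z, Y, X).
image = tifffile.imread("opticalHREM_murineLiver_2.26x2.26x1.75um.tif")
labels = tifffile.imread("opticalHREM_murineLiver_2.26x2.26x1.75um_labels.tif")

print(image.shape, image.dtype)
print("labelled vessel fraction:", np.count_nonzero(labels) / labels.size)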
References:
Bosch, C., Ackels, T., Pacureanu, A., Zhang, Y., Peddie, C. J., Berning, M., Rzepka, N., Zdora, M. C., Whiteley, I., Storm, M., Bonnin, A., Rau, C., Margrie, T., Collinson, L., & Schaefer, A. T. (2022). Functional and multiscale 3D structural investigation of brain tissue through correlative in vivo physiology, synchrotron microtomography and volume electron microscopy. Nature Communications, 13(1), 1–16. https://doi.org/10.1038/s41467-022-30199-6
Brown, E., Brunker, J., & Bohndiek, S. E. (2019). Photoacoustic imaging as a tool to probe the tumour microenvironment. Disease Models and Mechanisms, 12(7). https://doi.org/10.1242/DMM.039636
Walsh, C., Holroyd, N. A., Finnerty, E., Ryan, S. G., Sweeney, P. W., Shipley, R. J., & Walker-Samuel, S. (2021). Multifluorescence High-Resolution Episcopic Microscopy for 3D Imaging of Adult Murine Organs. Advanced Photonics Research, 2(10), 2100110. https://doi.org/10.1002/ADPR.202100110
Walsh, C., Holroyd, N., Shipley, R., & Walker-Samuel, S. (2020). Asymmetric Point Spread Function Estimation and Deconvolution for Serial-Sectioning Block-Face Imaging. Communications in Computer and Information Science, 1248 CCIS, 235–249. https://doi.org/10.1007/978-3-030-52791-4_19
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
The Voxel dataset is a constructed dataset of 3D shapes designed to present a unique problem for ML and NAS tools. Instead of a photo of a 3D object, we exploit ML's ability to work across an arbitrary number of 'colour' channels and use this dimension as the third spatial dimension of the images. This dataset is one of the three hidden datasets used by the 2024 NAS Unseen-Data Challenge. It comprises 70,000 generated 3D images of seven different shapes, created by placing a 20x20x20 grid of points in 3D space, randomly generating 3D shapes (see below) and recording which points each shape collided with, producing the voxel-like shapes in the dataset. The data has a shape of (n, 20, 20, 20), where n is the number of samples in the corresponding set (50,000 for training, 10,000 for validation, and 10,000 for testing). For each class (shape), we generated 10,000 samples, distributed across the three sets. The seven classes and corresponding numerical labels are as follows: Sphere: 0, Cube: 1, Cone: 2, Cylinder: 3, Ellipsoid: 4, Cuboid: 5, Pyramid: 6
NumPy (.npy) files can be opened with the NumPy Python library, using the numpy.load() function with the path to the file as its argument. The metadata file contains some basic information about the datasets and can be opened in most text editors, such as vim, nano, Notepad++, Notepad, etc.
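For example, a minimal sketch of loading one of the splits with NumPy; the file names used here (train_x.npy, train_y.npy) are placeholders and should be replaced with the actual names in the download:

import numpy as np

# Placeholder file names - substitute the actual .npy files from the dataset.
x_train = np.load("train_x.npy")   # expected shape: (50000, 20, 20, 20)
y_train = np.load("train_y.npy")   # integer labels 0-6 (Sphere ... Pyramid)

print(x_train.shape, y_train.shape, np.unique(y_train))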
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Three different datasets of vertebrae with corresponding computed tomography (CT) and ultrasound (US) images are presented. In the first dataset, lumbar vertebrae from three human patients are presented, and the US images are simulated from their CT images. The second dataset includes corresponding CT, US, and simulated US images of a phantom made from post-mortem canine cervical and thoracic vertebrae. The last dataset consists of CT, US, and simulated US images of a phantom made from post-mortem lamb lumbar vertebrae. For each of the two latter datasets, we also provide 15 landmark pairs of matching structures between the CT and US images, and we performed fiducial registration to acquire a silver standard for assessing image registration.
The datasets can be used to test CT-US image registration techniques and to validate techniques that simulate US from CT.
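As an illustration of how the landmark pairs might be used (a generic least-squares rigid fit, not necessarily the registration method used to produce the silver standard), one can estimate a rigid CT-to-US transform and a fiducial registration error with NumPy; the landmark file names below are hypothetical:

import numpy as np

def rigid_fit(src, dst):
    # Least-squares rigid transform (Kabsch) mapping src points onto dst points.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

ct_pts = np.loadtxt("ct_landmarks.txt")   # hypothetical 15 x 3 array of CT landmarks
us_pts = np.loadtxt("us_landmarks.txt")   # hypothetical 15 x 3 array of matching US landmarks
R, t = rigid_fit(ct_pts, us_pts)
fre = np.linalg.norm(ct_pts @ R.T + t - us_pts, axis=1).mean()
print("fiducial registration error:", fre)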
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
The dataset contains a collection of human models, representing a variety of individuals in a variety of poses. It includes both a dataset created from scans of real individuals, and a dataset of synthetically generated humans. The dataset has been used to benchmark non-rigid 3D shape retrieval algorithms. Results based upon these data are published at http://doi.org/10.1007/s11263-016-0903-8
License: Apache License, v2.0, https://www.apache.org/licenses/LICENSE-2.0
This dataset contains images of 3D printed parts, recorded while printing.
The dataset contains 4 classes and 34 shapes:
| class | GOOD | STRINGING | UNDEREXTRUSION | SPAGHETTI |
|---|---|---|---|---|
| images | 5069 | 2798 | 2962 | 134 |
| column | description |
|---|---|
| image | image file name |
| class | 0: Good, 1: Under-Extrusion, 2: Stringing, 4: Spaghetti |
| layer | layer of completion of the printed part |
| ex_mul | global extrusion multiplier during print |
| shape | identifier of the printed geometry (1-34) |
| recording | datetime coded name of the print/recording |
| printbed_color | color of the printbed (black, silver) |
The dataset was recorded in the context of this work: https://github.com/NilsHagenBeyer/FDM_error_detection
The images were recorded with an ELP-USB13MAFKV76 digital autofocus camera (Sony IMX214 sensor, 3264x2448 resolution) and were later downscaled to 256x256 px. All prints were carried out on a customized Creality Ender-3 Pro 3D printer.
The images were mainly recorded with a black printbed from camera position 1. For testing purposes, the dataset also contains a few images from camera position 2 (oblique camera, with significant motion blur) with a black printbed, and from camera position 1 with a silver printbed. The positions can be seen in the image below.
[Figure: exp_setup.png - experimental setup showing the camera positions]
├── general data
│   ├── all_images_no_filter.csv   # Full dataset, unfiltered
│   ├── all_images.csv             # Full dataset, no spaghetti error
│   └── black_bed_all.csv          # Full dataset, no silver bed
└── images
    ├── all_images
    │   └── ...                    # All images: full dataset + silver bed + oblique camera
    ├── test_images_silver265
    │   └── ...                    # Silver bed test images
    └── test_images_oblique256
        └── ...                    # Oblique camera test images
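A minimal sketch of loading one of the CSV index files with pandas; the column names follow the table above:

import pandas as pd

# Load the index of all images without the spaghetti class (see file listing above).
df = pd.read_csv("general data/all_images.csv")
print(df["class"].value_counts())                  # images per error class
pos1_black = df[df["printbed_color"] == "black"]   # e.g. keep black-printbed images only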
The aim of this dataset is to provide a simple way to get started with 3D computer vision problems such as 3D shape recognition.
Accurate 3D point clouds can nowadays be acquired easily and cheaply from different sources:
However, there is a lack of large 3D datasets (you can find a good one here, based on triangular meshes); it is especially hard to find datasets based on point clouds (which are the raw output of every 3D sensing device).
This dataset contains 3D point clouds generated from the original images of the MNIST dataset, to bring a familiar introduction to 3D to people used to working with 2D datasets (images).
In the 3D_from_2D notebook you can find the code used to generate the dataset.
You can use the code in the notebook to generate a bigger 3D dataset from the original.
The entire dataset is stored as 4096-D vectors obtained from the voxelization (x:16, y:16, z:16) of all the 3D point clouds.
In addition to the original point clouds, it contains randomly rotated copies with noise.
The full dataset is split into arrays:
Example python code reading the full dataset:
import h5py

with h5py.File("../input/train_point_clouds.h5", "r") as hf:
    X_train = hf["X_train"][:]   # training samples
    y_train = hf["y_train"][:]   # training labels
    X_test = hf["X_test"][:]     # test samples
    y_test = hf["y_test"][:]     # test labels
5000 (train) and 1000 (test) 3D point clouds, stored in HDF5 file format. The point clouds have zero mean and a maximum dimension range of 1.
Each file is divided into HDF5 groups. Each group is named after its corresponding array index in the original MNIST dataset and contains:
- x, y, z coordinates of each 3D point in the point cloud.
- nx, ny, nz components of the unit normal associated with each point.
Example python code reading 2 digits and storing some of the group content in tuples:
import h5py

with h5py.File("../input/train_point_clouds.h5", "r") as hf:
    a = hf["0"]   # first digit group
    b = hf["1"]   # second digit group
    # (image, point cloud, label) tuples
    digit_a = (a["img"][:], a["points"][:], a.attrs["label"])
    digit_b = (b["img"][:], b["points"][:], b.attrs["label"])
A simple Python class that generates a grid of voxels from the 3D point cloud. Check the kernel for usage.
A module with functions to plot point clouds and voxel grids inside a Jupyter notebook. You have to run this locally, because Kaggle notebooks do not support rendering IFrames; see the GitHub issue here.
Functions included:
array_to_color
Converts a 1D array to RGB values, for use as the color kwarg in plot_points()
plot_points(xyz, colors=None, size=0.1, axis=False)
plot_voxelgrid(v_grid, cmap="Oranges", axis=False)
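A short usage sketch of the plotting helpers; the module name plot3D is an assumption, so adjust the import to wherever the helper module lives in your copy of the dataset:

import h5py
from plot3D import array_to_color, plot_points   # module name assumed

with h5py.File("../input/train_point_clouds.h5", "r") as hf:
    points = hf["0"]["points"][:]                # (N, 3) point cloud of the first digit

colors = array_to_color(points[:, 2])            # colour each point by its z coordinate
plot_points(points, colors=colors, size=0.1, axis=False)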
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This dataset includes the following data reported in the PTI paper (link). These datasets can be read and processed using the provided notebooks (link) with the waveorder package (link). The zarr arrays (which live one level below Col_x in the zarr files) can also be visualized with the Python image viewer napari: install the ome-zarr plugin and drag the zarr array into the napari viewer.
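A minimal sketch of inspecting one of the zarr stores in Python and sending an array to napari; the group path inside the store is a placeholder and should be taken from the printed tree:

import zarr
import napari

root = zarr.open("Anisotropic_target_small_processed.zarr", mode="r")
print(root.tree())                  # inspect the hierarchy to find the arrays

arr = root["Pos0/array"][:]         # placeholder path - replace after inspecting the tree
viewer = napari.view_image(arr)     # axes follow the formats listed below
napari.run()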
Anisotropic_target_small_raw.zarr: array size in the format of (PolChannel, IllumChannel, Z, Y, X) = (4, 9, 96, 300, 300)
Anisotropic_target_small_processed.zarr:
(Pos0 - Stitched_f_tensor) array size in the format of (T, C, Z, Y, X) = (1, 9, 96, 300, 300)
(Pos1 - Stitched_physical) array size in the format of (T, C, Z, Y, X) = (1, 5, 96, 300, 300)
data: 9 x 96 (pattern x z-slices) raw intensity images (TIFF) of the target with size of (2048, 2448) -> 4 channels of (1024, 1224)
bg/data: 9 (pattern) raw intensity images (TIFF) of the background with size of (2048, 2448) -> 4 channels of (1024, 1224)
cali_images.pckl: pickle file that contains calibration curves of the polarization channels for this dataset
uPTI_stitched.zarr: (Stitched_f_tensor) array size in the format of (T, C, Z, Y, X) = (1, 9, 96, 1024, 1224)
uPTI_physical.zarr: (Stitched_physical) array size in the format of (T, C, Z, Y, X) = (1, 5, 96, 700, 700) (cropping the star target region)
data: 9 x 96 (pattern x z-slices) raw intensity images (TIFF) of the mouse brain section with size of (2048, 2448) -> 4 channels of (1024, 1224)
bg/data: 9 (pattern) raw intensity images (TIFF) of the background with size of (2048, 2448) -> 4 channels of (1024, 1224)
cali_images.pckl: pickle file that contains calibration curves of the polarization channels for this dataset
uPTI_stitched.zarr: (Stitched_f_tensor) array size in the format of (T, C, Z, Y, X) = (1, 9, 96, 1024, 1224)
uPTI_physical.zarr: (Stitched_physical) array size in the format of (T, C, Z, Y, X) = (1, 5, 96, 1024, 1224)
(Pos0) raw intensity images with the array size in the format of (PolChannel, IllumChannel, Z, Y, X) = (4, 9, 32, 1024, 1224)
(Pos1) background intensity images with the array size in the format of (PolChannel, IllumChannel, Z, Y, X) = (4, 9, 1, 1024, 1224)
uPTI_stitched.zarr: (Stitched_f_tensor) array size in the format of (T, C, Z, Y, X) = (1, 9, 32, 1024, 1224)
uPTI_physical.zarr: (Stitched_physical) array size in the format of (T, C, Z, Y, X) = (1, 5, 32, 1024, 1224)
data: 10 x 40 (pattern x z-slices) raw intensity images (TIFF) of the target with size of (2048, 2448) -> 4 channels of (1024, 1224); the last channel contains images acquired with the LCD turned off (light leakage that needs to be subtracted from the data)
bg/data: 10 (pattern) raw intensity images (TIFF) of the background with size of (2048, 2448) -> 4 channels of (1024, 1224)
cali_images.pckl: pickle file that contains calibration curves of the polarization channels for this dataset
fluor: 3 x 40 (RGB x z-slices) raw H&E intensity images (TIFF) of the sample with size of (2048, 2448)
fluor_bg: 3 (RGB) raw H&E intensity images (TIFF) of the background with size of (2048, 2448)
uPTI_stitched.zarr: (Stitched_f_tensor) array size in the format of (T, C, Z, Y, X) = (1, 9, 40, 1024, 1224)
uPTI_physical.zarr: (Stitched_physical) array size in the format of (T, C, Z, Y, X) = (1, 5, 40, 1024, 1224)
H_and_E.zarr: (H_and_E) array size in the format of (T, C, Z, Y, X) = (1, 3, 40, 1024, 1224)
License: Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/
This repository contains the Satellite Pose Estimation and 3D Reconstruction (SPE3R) dataset, comprising 64 unique spacecraft 3D models. The models are selectively acquired from the NASA 3D Resources and the ESA Science Satellite Fleet. Each model is normalized and made watertight, and is accompanied by 1,000 images, binary masks and corresponding pose labels, in order to support simultaneous 3D structure characterization and pose estimation. The images and binary masks are rendered using a custom high-fidelity synthetic scene constructed in Unreal Engine. The dataset is divided into training, validation and test sets, such that the validation set is used to evaluate an algorithm's generalization capability on unseen images of known targets (i.e., seen during training), whereas the test set helps evaluate it on images of unknown targets (i.e., unseen during training).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
## Overview
3d is a dataset for classification tasks - it contains 3d annotations for 2,695 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Overview: This dataset includes a set of panoramic images taken from a 3D reconstruction of an ant laboratory environment, together with the 3D environment itself. The dataset contains 1485 points, evenly spaced in a grid between the explorable bounds of the arena. The coordinates where each image was taken were recorded. Since all the images are panoramic, you can rotate the images to obtain a view in any direction at any point in the grid. A look-up table named "full_grid_views_meta_data.csv" provides the names of all the image files together with their corresponding coordinates. The simulation software used was Isaac Sim, and we include the USD file containing the 3D model of the reconstructed laboratory environment from which all the views were taken. The lab that was reconstructed is the Ant Navigation behaviour lab at the University of Sussex.
Media and plots folder:
"top down grid view.png" shows an image of the arena from the top down. It is useful as a base for plotting.
"top down grid view plot.png" shows the grid plotted on top of an image of the arena from the top down. It is useful as a base for plotting.
"pan cam animation.mp4" is a fly-through video of the virtual arena.
"plotting_grid_script.py" is a Python file for making the grid plot shown in "top down grid view.png".
Meta data file explanation:
Measurements are in metres, as used by Isaac Sim. Image files are stored in the zipped folder. 0 degrees is "north", looking towards the white arena entrance on the platform where the images were taken.
Route name - name of the data set.
img_name - name of the image file.
x_m - the x coordinate of where the picture was taken, in metres.
y_m - the y coordinate of where the picture was taken, in metres.
z_m - the z coordinate of where the picture was taken, in metres (the same for all images in this dataset).
Headings - heading perspective from which the image is taken (the same for all images in this dataset; because the images are panoramas, headings can be changed by rotating the images).
USD scene files:
To use the 3D scene yourself, "Collected_ant_view_scene\ant_view_scene.usd" contains the 3D model of the ant arena, with some virtual cameras for different simulated views of the scene. You can open and interact with the scene via Isaac Sim or other NVIDIA Omniverse software. To collect your own sets of 3D views, use and adapt the replicator script "full_arena_grid_infer_views.py" in the script editor. This script creates a set of coordinates and cycles through them, taking an image at each prescribed point. The 3D models alone, without any Isaac Sim layers, can be found at "Collected_ant_view_scene\old nucleus clone\photogram\3DModel.usd". This can be edited in 3D software of your choice.
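A minimal sketch of using the look-up table to find the panoramic image recorded nearest to a chosen (x, y) position; column names follow the metadata description above:

import pandas as pd

meta = pd.read_csv("full_grid_views_meta_data.csv")
x_query, y_query = 0.5, -0.25                    # example query position in metres
d2 = (meta["x_m"] - x_query) ** 2 + (meta["y_m"] - y_query) ** 2
nearest = meta.loc[d2.idxmin()]
print(nearest["img_name"], nearest["x_m"], nearest["y_m"])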
LiDAR 3D captured image dataset from a drone perspective, with only persons labeled.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
## Overview
3D EST is a dataset for classification tasks - it contains Good/Bad annotations for 750 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
Gene expression data measured by in situ hybridization and cellular resolution 3D imaging.
License: Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/
The Mermaid underwater dataset (sr202204_ldm-s) is a set of underwater images acquired at approximately 20 m depth at the La Sirène site (Lion-de-Mer) in Saint-Raphaël (France) during SubMeeting 2022, an underwater robotics workshop (https://submeeting2022.univ-tln.fr/). A micro geodesic network was established at the acquisition site to serve as ground truth. The covered area is approximately 150 m2, with a sub-millimetre GSD. It is composed of a statue of a mermaid, a sandy plain with stones and a rocky area. Sealife is distributed across these different spaces. Data was acquired from a single camera by divers under natural lighting and is provided without pre-processing. This dataset can, among other things, be used for work on underwater 3D reconstruction or underwater visual navigation.
The dataset is composed of the following data:
- a PDF file bringing together information relating to the micro geodesic network (acquisition methods, data processing method, GCPs, measurements and diagram) [sr202204_ldm-s_d00_groundtruth_report.pdf]
- an STL file representing the micro geodesic network on a relative scale [sr202204_ldm-s_d00_groundtruth_network.stl]
- a folder containing all the images [sr202204_ldm-s_d01]
- a PDF file bringing together information relating to the image data (acquisition method, data format, sensor information including calibration values, overview of the area under different views with mosaic, 3D clouds and textured mesh created from the images, overview of the trajectory) [sr202204_ldm-s_d01_readme.pdf]
- an XML file which brings together information relating to the pose of each image, estimated from a multi-view bundle adjustment [sr202204_ldm-s_c01_camera_poses]
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This dataset was created by Orvile
Released under Attribution 4.0 International (CC BY 4.0)
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This is an example dataset provided as part of the Supplementary Material for our manuscript "3D quantification of vascular-like structures in z-stack confocal images" in STAR Protocols. The dataset provides an example raw confocal image stack, demonstrates the data visualisation at major steps throughout the protocol, and shows the resulting output from WinFiber3D.
2D-3D-S
The 2D-3D-S dataset provides a variety of mutually registered modalities from 2D, 2.5D and 3D domains, with instance-level semantic and geometric annotations. It covers over 6,000 m2 and contains over 70,000 RGB images, along with the corresponding depths, surface normals, semantic annotations, global XYZ images (all in forms of both regular and 360° equirectangular images) as well as camera information. It also includes registered raw and semantically annotated 3D meshes and point clouds. In addition, the dataset contains the raw RGB and Depth imagery along with the corresponding camera information per scan location. The dataset enables development of joint and cross-modal learning models and potentially unsupervised approaches utilizing the regularities present in large-scale indoor spaces.
In more detail, the dataset was collected in 6 large-scale indoor areas that originate from 3 different buildings of mainly educational and office use. For each area, all modalities are registered in the same reference system, yielding pixel-to-pixel correspondences among them. In a nutshell, the presented dataset contains a total of 70,496 regular RGB and 1,413 equirectangular RGB images, along with their corresponding depths, surface normals, semantic annotations, global XYZ images (OpenEXR format) and camera metadata. It also contains the raw sensor data, which comprises 18 HDR RGB and depth images (6 looking forward, 6 towards the top, 6 towards the bottom) along with the corresponding camera metadata for each of the 1,413 scan locations, yielding a total of 25,434 raw RGBD images. In addition, we provide whole-building 3D reconstructions as textured meshes, as well as the corresponding 3D semantic meshes. The dataset also includes the colored 3D point cloud data of these areas, with a total of 695,878,620 points, which were previously presented in the Stanford large-scale 3D Indoor Spaces Dataset (S3DIS).
[Figures: 2D3DS_pano.png, 3Dmodal.png, equirect.png]
The Pix3D dataset is a dataset of pairs of natural images and CAD models.
This dataset contains: a zip file including several txt files that store the extracted nominal stress and Green-Lagrange strain data for Models 1ABC subjected to uniaxial tensile tests. The maximum principal logarithmic strain and maximum principal stress data for Models 1A and 1C were also written into two txt files, respectively. These data were extracted from .ODB files using a Python script.
Summary
This dataset contains all results from the paper "Joint 2D to 3D image registration workflow for comparing multiple slice photographs and CT scans of apple fruit with internal disorders". Most notably, this dataset contains the corresponding CT slices for slice photographs of 1347 'Kanzi' apples. This dataset also contains data of the results section, metadata required to make the registration code run, and segmentation masks of the apple slice photographs. The "raw" data that was used to produce these results can be found in another Zenodo dataset: https://zenodo.org/record/8167285.
Description
registered ct photo side-by-side view.zip is the easiest way to explore the registered CT-photo image pairs. For every apple slice it contains a .png image consisting of the slice photo, the registered CT slice and a combined view (photo = green, CT = purple) side by side. The resolution was reduced to keep the file size down.
registered ct slices.zip contains the full resolution CT slices as .tiff files. The matching slice photos can be found in slice_photos_crop.zip in https://zenodo.org/record/8167285.
photo metadata.zip contains all metadata files required to run the code on https://github.com/D1rk123/apple_photo_ct_workflow.
results.zip contains the IPCED annotations and per apple metrics that were used to calculate all the average metrics and tables in the results section of the paper.
subset experiment registered annotation slice.zip contains the full resolution CT slices of the annotation slice in the subset experiment as .tiff files.
segmentation masks.zip contains slice photo segmentation masks as .png images. There are subfolders for the training set, the test set and the masks used for the workflow in the paper.
Research group: This dataset was produced by the Computational Imaging group at Centrum Wiskunde & Informatica (CI-CWI) in Amsterdam, The Netherlands: https://www.cwi.nl/research/groups/computational-imaging
Contact details: dirk [dot] schut [at] cwi [dot] nl
Acknowledgments: This work was funded by the Dutch Research Council (NWO) through the UTOPIA project (ENWSS.2018.003).