License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Discover the Remote Sensing Object Segmentation Dataset, perfect for GIS, AI-driven environmental studies, and satellite image analysis.
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
The visuAAL Skin Segmentation Dataset contains 46,775 high-quality images divided into a training set of 45,623 images and a validation set of 1,152 images. Skin areas were obtained automatically from the FashionPedia garment dataset. The process used to extract the skin areas is explained in detail in the paper 'From Garment to Skin: The visuAAL Skin Segmentation Dataset'.
If you use the visuAAL Skin Segmentation Dataset, please cite:
How to use:
A sample of image data in the FashionPedia dataset is:
{'id': 12305,
'width': 680,
'height': 1024,
'file_name': '064c8022b32931e787260d81ed5aafe8.jpg',
'license': 4,
'time_captured': 'March-August, 2018',
'original_url': 'https://farm2.staticflickr.com/1936/8607950470_9d9d76ced7_o.jpg',
'isstatic': 1,
'kaggle_id': '064c8022b32931e787260d81ed5aafe8'}
NOTE: Not all images in the FashionPedia dataset have a corresponding skin mask in the visuAAL Skin Segmentation Dataset, because some images contain only garment parts and no people; these images were removed when creating the visuAAL Skin Segmentation Dataset. However, every instance in the visuAAL Skin Segmentation Dataset has a corresponding match in the FashionPedia dataset.
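Since every visuAAL instance has a match in FashionPedia (but not the other way around), pairing the two datasets reduces to filtering FashionPedia records by mask availability. A minimal sketch, assuming masks reuse the image's `file_name` (an illustrative convention, not confirmed by the dataset description):

```python
# Pair FashionPedia records with visuAAL skin masks, keyed on file_name.
# The mask-naming convention below is an assumption for illustration.

def pair_with_masks(fashionpedia_records, mask_names):
    """Keep only FashionPedia records that have a visuAAL skin mask."""
    masks = set(mask_names)
    return [r for r in fashionpedia_records if r["file_name"] in masks]

records = [
    {"id": 12305, "file_name": "064c8022b32931e787260d81ed5aafe8.jpg"},
    {"id": 99999, "file_name": "garment_only_image.jpg"},  # no person -> no mask
]
paired = pair_with_masks(records, {"064c8022b32931e787260d81ed5aafe8.jpg"})
# paired keeps only the record whose image has a skin mask
```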
License: MIT License (https://opensource.org/licenses/MIT)
All nine datasets used in the Hi-gMISnet paper, with the exact train, validation, and test splits. Paper link: https://iopscience.iop.org/article/10.1088/1361-6560/ad3cb3 GitHub repo: https://github.com/tushartalukder/Hi-gMISnet.git
Cite as: @article{showrav2024hi, title={Hi-gMISnet: generalized medical image segmentation using DWT based multilayer fusion and dual mode attention into high resolution pGAN}, author={Showrav, Tushar Talukder and Hasan, Md Kamrul}, journal={Physics in Medicine and Biology} }
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
## Overview
Needle Segmentation is a dataset for semantic segmentation tasks - it contains Gauge Needles annotations for 3,953 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
Waste disposal is a global challenge, especially in densely populated areas. Efficient waste segregation is critical for separating recyclable from non-recyclable materials. While developed countries have established and refined effective waste segmentation and recycling systems, our country still uses manual segregation to identify and process recyclable items. This study presents a dataset intended to improve automatic waste segmentation systems. The dataset consists of 784 images that have been manually annotated for waste classification. These images were primarily taken in and around Jadavpur University, including streets, parks, and lawns. Annotations were created with the Labelme program and are available in color annotation formats. The dataset includes 14 waste categories: plastic containers, plastic bottles, thermocol, metal bottles, plastic cardboard, glass, thermocol plates, plastic, paper, plastic cups, paper cups, aluminum foil, cloth, and nylon. In total, the dataset includes 2,350 object segments.
Other information:
Published in: Mendeley Data
License: http://creativecommons.org/licenses/by/4.0/
See the dataset on the publisher's website: https://data.mendeley.com/datasets/gr99ny6b8p/1
License: https://spdx.org/licenses/
The Alabama Buildings Segmentation dataset combines BingMap satellite images with building masks from Microsoft Maps. About 99% of the data comes from Alabama, US; the rest comes from Colombia. The dataset contains 10,200 satellite images and 10,200 masks, with a total size of about 17 GB. The satellite images have a resolution of 0.5 m/pixel, an image size of 1024x1024, and a file size of about 1.5 MB per image. The dataset contains only images in which the total building area in the mask is at least 1% of the image area, so every image contains at least one building.
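The "building area >= 1% of the image" filter can be sketched as a coverage check on a binary mask. This is a minimal illustration; the function and variable names are hypothetical, not from the dataset's tooling:

```python
def building_coverage(mask):
    """Fraction of mask pixels labeled as building (mask: rows of 0/1)."""
    total = sum(len(row) for row in mask)
    building = sum(sum(row) for row in mask)
    return building / total

def keep_image(mask, threshold=0.01):
    """Apply the dataset's filter: keep if building area >= 1% of pixels."""
    return building_coverage(mask) >= threshold

# 4x4 mask with one building pixel -> 1/16 = 6.25% coverage, so it is kept
mask = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
```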
License: Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0) (https://creativecommons.org/licenses/by-nc-nd/4.0/)
The HaN-Seg: Head and Neck Organ-at-Risk CT & MR Segmentation Dataset is a publicly available dataset of anonymized head and neck (HaN) images of 42 patients that underwent both CT and T1-weighted MR imaging for the purpose of image-guided radiotherapy planning. In addition, the dataset also contains reference segmentations of 30 organs-at-risk (OARs) for CT images in the form of binary segmentation masks, which were obtained by curating manual pixel-wise expert image annotations. A full description of the HaN-Seg dataset can be found in:
G. Podobnik, P. Strojan, P. Peterlin, B. Ibragimov, T. Vrtovec, "HaN-Seg: The head and neck organ-at-risk CT & MR segmentation dataset", Medical Physics, 2023. 10.1002/mp.16197,
and any research originating from its usage is required to cite this paper.
In parallel with the release of the dataset, the HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge was launched to promote the development of new, and the application of existing, state-of-the-art fully automated techniques for OAR segmentation in the HaN region from CT images that exploit information from multiple imaging modalities, in this case CT and MR images. The task of the HaN-Seg challenge is to automatically segment up to 30 OARs in the HaN region from CT images in the devised test set, which consists of 14 CT and MR images of the same patients, given the availability of the training set (i.e. the herein publicly available HaN-Seg dataset), which consists of 42 CT and MR images of the same patients with reference 3D OAR binary segmentation masks for the CT images.
Please find below a list of relevant publications that address: (1) the assessment of inter-observer and inter-modality variability in OAR contouring, (2) results of the HaN-Seg challenge, (3) development of our multimodal segmentation model, and (4) development of MR-to-CT image-to-image translation using diffusion models:
A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics.
License: Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0) (https://creativecommons.org/licenses/by-nc-nd/4.0/)
People Clothing Segmentation Dataset
The dataset comprises 14,358 high-quality photos of 7,179 people of diverse genders wearing bathing suits, each paired with detailed segmentation masks for precise body-part segmentation. Designed for semantic and instance segmentation tasks, this large-scale collection offers manually annotated labels, enabling robust training of deep learning models for human body analysis. By leveraging this dataset, researchers can train high-precision… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/swimsuit-human-segmentation-dataset.
License: MIT License (https://opensource.org/licenses/MIT)
## Overview
Fruits Segmentation is a dataset for instance segmentation tasks - it contains Fruits annotations for 590 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [MIT license](https://opensource.org/licenses/MIT).
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
## Overview
Tennis Court Segmentation is a dataset for semantic segmentation tasks - it contains Tennis Court annotations for 545 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: https://github.com/MIT-LCP/license-and-dua/tree/master/drafts
Chest X-ray (CXR) images are prominent among medical images and are commonly used in emergency diagnosis and treatment of cardiac and respiratory diseases. Though robust solutions are available for medical diagnosis, validation of artificial intelligence (AI) in radiology is still questionable. Segmentation is pivotal for chest radiographs and helps improve the existing AI-based medical diagnosis process. We provide the CXLSeg dataset: Chest X-ray with Lung Segmentation, a comparatively large dataset of segmented chest X-ray radiographs based on the MIMIC-CXR dataset, a popular CXR image dataset. The dataset contains segmentation results for 243,324 frontal-view images of the MIMIC-CXR dataset and the corresponding masks. Additionally, this dataset can be utilized for computer vision-related deep learning tasks such as medical image classification, semantic segmentation, and medical report generation. Models using segmented images yield better results, since only the features related to the important areas of the image are in focus. Thus, images from this dataset can be used in any visual feature extraction process associated with the original MIMIC-CXR dataset and can enhance the results of published or novel investigations. Furthermore, the masks provided by this dataset can be used to train segmentation models when combined with the MIMIC-CXR-JPG dataset. The SA-UNet model achieved a Dice similarity coefficient of 96.80% and an IoU of 91.97% for lung segmentation using CXLSeg.
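The reported Dice and IoU numbers are standard overlap metrics between a predicted and a reference mask. A minimal sketch over binary masks flattened to 0/1 lists (an illustration of the metrics, not the authors' evaluation code):

```python
def dice_and_iou(pred, ref):
    """Dice = 2|P∩R| / (|P|+|R|); IoU = |P∩R| / |P∪R| for binary masks."""
    inter = sum(p and r for p, r in zip(pred, ref))
    p_sum, r_sum = sum(pred), sum(ref)
    union = p_sum + r_sum - inter
    dice = 2 * inter / (p_sum + r_sum) if (p_sum + r_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred = [1, 1, 0, 0]
ref  = [1, 0, 1, 0]
# intersection = 1, |P| = |R| = 2 -> Dice = 2/4 = 0.5, IoU = 1/3
```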
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
We established a large-scale plant disease segmentation dataset named PlantSeg. PlantSeg comprises more than 11,400 images of 115 different plant diseases from various environments, each annotated with its corresponding segmentation label for diseased parts. To the best of our knowledge, PlantSeg is the largest plant disease segmentation dataset containing in-the-wild images. Our dataset enables researchers to evaluate their models and provides a valid foundation for the development and benchmarking of plant disease segmentation algorithms.
Please note that due to the image limitations of Roboflow, the dataset provided here is not complete.
Project page: https://github.com/tqwei05/PlantSeg
Paper: https://arxiv.org/abs/2409.04038
Complete dataset download: https://zenodo.org/records/13958858
Reference: @article{wei2024plantseg, title={PlantSeg: A Large-Scale In-the-wild Dataset for Plant Disease Segmentation}, author={Wei, Tianqi and Chen, Zhi and Yu, Xin and Chapman, Scott and Melloy, Paul and Huang, Zi}, journal={arXiv preprint arXiv:2409.04038}, year={2024} }
License: https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
Delve into the Pupils Segmentation Dataset, essential for ophthalmology tech, AI-driven vision studies, and advanced eye research.
License: https://spdx.org/licenses/
The authors of the India Driving Dataset (IDD): A Dataset for Exploring Problems of Autonomous Navigation in Unconstrained Environments highlight a notable gap in existing datasets, which primarily focus on structured driving environments with well-defined infrastructure, limited traffic categories, and adherence to traffic rules. To fill this void, the authors present IDD, a novel dataset tailored for road scene understanding in unstructured environments, specifically on Indian roads. The updated version of the dataset (acquired in October 2023) comprises 20k images, meticulously annotated with 41 classes, derived from 182 drive sequences.
License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
The Commonly Used Objects Segmentation Dataset serves the e-commerce and visual entertainment industries with a large number of images collected from the internet, with resolutions ranging from 800 × 600 to 4160 × 3120. The data covers a wide variety of everyday scenes and objects, including many people, animals, and furniture, for segmentation.
License: Open Data Commons Database Contents License (DbCL) v1.0 (http://opendatacommons.org/licenses/dbcl/1.0/)
Not my dataset. Check the original dataset: https://www.kaggle.com/datasets/mehradaria/leukemia/data
Credit: Paper: A Fast and Efficient CNN Model for B-ALL Diagnosis and its Subtypes Classification using Peripheral Blood Smear Images Source code: https://github.com/MehradAria/ALL-Subtype-Classification
Data Citation: Mehrad Aria, Mustafa Ghaderzadeh, Davood Bashash, Hassan Abolghasemi, Farkhondeh Asadi, and Azamossadat Hosseini, “Acute Lymphoblastic Leukemia (ALL) image dataset.” Kaggle, (2021). DOI: 10.34740/KAGGLE/DSV/2175623.
Publication Citation: Ghaderzadeh, M, Aria, M, Hosseini, A, Asadi, F, Bashash, D, Abolghasemi, H. A fast and efficient CNN model for B-ALL diagnosis and its subtypes classification using peripheral blood smear images. Int J Intell Syst. 2022; 37: 5113- 5133. doi:10.1002/int.22753
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
Mixed training and test images of S. aureus, E. coli and B. subtilis for cell segmentation using StarDist, as well as the trained StarDist model.
Additional information can be found on this GitHub wiki.
Data type: Paired bright field / fluorescence and segmented mask images
Microscopy data type: 2D widefield images; DIC and fluorescence for S. aureus, bright field images for E. coli, and fluorescence images for B. subtilis
Microscopes:
S. aureus:
GE HealthCare Deltavision OMX system (with temperature and humidity control, 37°C) equipped with an Olympus 60x 1.42NA Oil immersion objective and 2 PCO Edge 5.5 sCMOS cameras (one for DIC, one for fluorescence)
E. coli:
Nikon Eclipse Ti-E equipped with an Apo TIRF 1.49NA 100x oil immersion objective
B. subtilis:
Custom-built 100x inverted microscope bearing a 100x TIRF objective (Nikon CFI Apochromat TIRF 100XC Oil); images were captured on a Prime BSI sCMOS camera (Teledyne Photometrics)
Cell types: S. aureus strain JE2, E. coli MG1655 (CGSC #6300) and B. subtilis strain SH130; all grown under agarose pads
File format: .tif (8-bit and 16-bit)
Image size: 512 x 512 px² @ 80 nm pixel size (S. aureus); 1024 x 1024 px² @ 79 nm pixel size (E. coli); 1024 x 1024 px² @ 65 nm pixel size (B. subtilis)
Image preprocessing:
S. aureus:
Raw images were manually annotated by drawing ellipses in the NR fluorescence image and segmented images were created using the LOCI plugin (“ROI Map”). For training, images and masks were quartered into four 256 x 256 px² patches.
E. coli:
Raw images were recorded in 16-bit mode (image size 512x512 px² @ 158 nm/px). Images were upscaled with a factor of 2 (no interpolation) to enable generation of higher-quality segmentation masks.
B. subtilis:
Images were denoised using PureDenoise, and the resulting 32-bit images were converted into 8-bit images after normalizing to the 1% and 99.98% percentiles. Images were manually annotated using the Labkit Fiji plugin.
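The percentile normalization described for the B. subtilis images (clip to the 1% and 99.98% percentiles, then rescale to 8-bit) can be sketched as follows. This is an illustrative reimplementation with a simple nearest-rank percentile, not the original processing script:

```python
def percentile(values, q):
    """Nearest-rank percentile (q in [0, 100]) of a list of numbers."""
    s = sorted(values)
    idx = min(len(s) - 1, max(0, round(q / 100 * (len(s) - 1))))
    return s[idx]

def normalize_to_8bit(pixels, low_q=1.0, high_q=99.98):
    """Clip to [low, high] percentiles and rescale to 0..255 integers."""
    low, high = percentile(pixels, low_q), percentile(pixels, high_q)
    scale = (high - low) or 1.0
    return [round(255 * min(max((p - low) / scale, 0.0), 1.0)) for p in pixels]

pixels = [0.0, 0.25, 0.5, 0.75, 1.0]
out = normalize_to_8bit(pixels)  # monotone mapping into the 0..255 range
```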
StarDist model:
The StarDist 2D model was generated using the ZeroCostDL4Mic platform (Chamier et al., 2021). It was trained from scratch for 200 epochs (120 steps/epoch) on 155 paired image patches (image dimensions: (1024, 1024), patch size: (256, 256)) with a batch size of 4, 10% validation data, 64 rays on grid 2, a learning rate of 0.0003, and an mae loss function, using the StarDist 2D ZeroCostDL4Mic notebook (v 1.12.2). Key Python packages used include tensorflow (v 0.1.12), Keras (v 2.3.1), csbdeep (v 0.6.1), numpy (v 1.19.5), and cuda (v 11.0.221). Training was accelerated using a Tesla P100 GPU. The dataset was augmented by a factor of 3.
The model weights can be used in the ZeroCostDL4Mic StarDist 2D notebook, the StarDist Fiji plugin or the TrackMate Fiji plugin (v7+).
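For readers reproducing the run, the hyperparameters listed above can be collected into a single mapping. The key names below are an illustrative summary, not the StarDist or ZeroCostDL4Mic API:

```python
# Hypothetical summary of the StarDist 2D training run described above.
stardist_training_config = {
    "epochs": 200,
    "steps_per_epoch": 120,
    "n_training_patches": 155,
    "patch_size": (256, 256),
    "batch_size": 4,
    "validation_fraction": 0.10,
    "n_rays": 64,
    "grid": (2, 2),
    "learning_rate": 3e-4,
    "loss": "mae",
    "augmentation_factor": 3,
}

# Total gradient updates over the full run: 200 epochs * 120 steps = 24000
total_steps = (stardist_training_config["epochs"]
               * stardist_training_config["steps_per_epoch"])
```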
Author(s): Christoph Spahn1,2, Mike Heilemann1,3, Mia Conduit4, Séamus Holden4,5, Pedro Matos Pereira6,7, Mariana Pinho6,8
Contact email: christoph.spahn@mpi-marburg.mpg.de, Seamus.Holden@newcastle.ac.uk, pmatos@itqb.unl.pt and mgpinho@itqb.unl.pt
Affiliation(s):
1) Institute of Physical and Theoretical Chemistry, Max-von-Laue Str. 7, Goethe-University Frankfurt, 60439 Frankfurt, Germany
2) ORCID: 0000-0001-9886-2263
3) ORCID: 0000-0002-9821-3578
4) Centre for Bacterial Cell Biology, Biosciences Institute, Newcastle University, NE2 4AX UK
5) ORCID: 0000-0002-7169-907X
6) Bacterial Cell Biology, Instituto de Tecnologia Química e Biológica António Xavier, Universidade Nova de Lisboa, Oeiras, Portugal
7) ORCID: 0000-0002-1426-9540
8) ORCID: 0000-0002-7132-8842
License: Attribution-ShareAlike 4.0 (CC BY-SA 4.0) (https://creativecommons.org/licenses/by-sa/4.0/)
The Liver Tumor Segmentation Benchmark (LiTS) dataset contains 130 CT scans of patients with liver cancer. This dataset includes 2D slices from 3D CT scans with masks for liver, tumor, bone, arteries, and kidneys.
This dataset facilitates slice-based segmentation, which in most cases produces more accurate results than 3D segmentation.
Reference: https://doi.org/10.1016/j.media.2022.102680
This dataset contains the slices from the LiTS dataset in the format: Volume-{VolumeNumber}-{SliceNumber}.png.
Both the image and the mask files have the same naming convention.
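Because images and masks share the naming convention, a single parser can recover volume and slice indices for pairing. A minimal sketch (the helper name is hypothetical):

```python
import re

def parse_lits_name(filename):
    """Parse 'Volume-{VolumeNumber}-{SliceNumber}.png' into (volume, slice)."""
    m = re.fullmatch(r"Volume-(\d+)-(\d+)\.png", filename)
    if m is None:
        raise ValueError(f"unexpected LiTS slice name: {filename}")
    return int(m.group(1)), int(m.group(2))

volume, slice_idx = parse_lits_name("Volume-12-87.png")
# The same parser applies to the mask files, which use identical names.
```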