CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Discover the Remote Sensing Object Segmentation Dataset, perfect for GIS, AI-driven environmental studies, and satellite image analysis.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The visuAAL Skin Segmentation Dataset contains 46,775 high-quality images, divided into a training set of 45,623 images and a validation set of 1,152 images. Skin areas have been obtained automatically from the FashionPedia garment dataset. The process used to extract the skin areas is explained in detail in the paper 'From Garment to Skin: The visuAAL Skin Segmentation Dataset'.
If you use the visuAAL Skin Segmentation Dataset, please cite:
How to use:
A sample of image data in the FashionPedia dataset is:
{'id': 12305,
'width': 680,
'height': 1024,
'file_name': '064c8022b32931e787260d81ed5aafe8.jpg',
'license': 4,
'time_captured': 'March-August, 2018',
'original_url': 'https://farm2.staticflickr.com/1936/8607950470_9d9d76ced7_o.jpg',
'isstatic': 1,
'kaggle_id': '064c8022b32931e787260d81ed5aafe8'}
NOTE: Not all images in the FashionPedia dataset have a corresponding skin mask in the visuAAL Skin Segmentation Dataset, because some FashionPedia images contain only garment parts and no people; these images were removed when creating the visuAAL Skin Segmentation Dataset. However, every instance in the visuAAL Skin Segmentation Dataset has a corresponding match in the FashionPedia dataset.
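As a rough illustration of that correspondence, the sketch below pairs a FashionPedia image with its visuAAL skin mask by shared file name. The directory layout and the assumption that masks reuse the image file name are mine, not taken from the dataset documentation.

```python
# Minimal sketch (assumed layout): pair a FashionPedia image with its visuAAL
# skin mask via the shared file_name field shown in the sample record above.
from pathlib import Path
from PIL import Image

FASHIONPEDIA_IMAGES = Path("fashionpedia/train")           # assumed location
VISUAAL_MASKS = Path("visuaal_skin_segmentation/train")    # assumed location

def load_pair(file_name: str):
    """Return (image, mask); mask is None when the image has no skin mask
    (e.g. garment-only photos that were excluded from visuAAL)."""
    image = Image.open(FASHIONPEDIA_IMAGES / file_name).convert("RGB")
    mask_path = VISUAAL_MASKS / file_name   # assumption: masks keep the image file name
    mask = Image.open(mask_path).convert("L") if mask_path.exists() else None
    return image, mask

img, mask = load_pair("064c8022b32931e787260d81ed5aafe8.jpg")
```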
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Delve into the Pupils Segmentation Dataset, essential for ophthalmology technology, AI-driven vision studies, and advanced eye research.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We established a large-scale plant disease segmentation dataset named PlantSeg. PlantSeg comprises more than 11,400 images of 115 different plant diseases from various environments, each annotated with its corresponding segmentation label for diseased parts. To the best of our knowledge, PlantSeg is the largest plant disease segmentation dataset containing in-the-wild images. Our dataset enables researchers to evaluate their models and provides a valid foundation for the development and benchmarking of plant disease segmentation algorithms.
Please note that due to the image limitations of Roboflow, the dataset provided here is not complete.
Project page: https://github.com/tqwei05/PlantSeg
Paper: https://arxiv.org/abs/2409.04038
Complete dataset download: https://zenodo.org/records/13958858
Reference:
@article{wei2024plantseg,
  title={PlantSeg: A Large-Scale In-the-wild Dataset for Plant Disease Segmentation},
  author={Wei, Tianqi and Chen, Zhi and Yu, Xin and Chapman, Scott and Melloy, Paul and Huang, Zi},
  journal={arXiv preprint arXiv:2409.04038},
  year={2024}
}
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Waste disposal is a global challenge, especially in densely populated areas. Efficient waste segregation is critical for separating recyclable from non-recyclable materials. While developed countries have established and refined effective waste segmentation and recycling systems, our country still uses manual segregation to identify and process recyclable items. This study presents a dataset intended to improve automatic waste segmentation systems. The dataset consists of 784 images that have been manually annotated for waste classification. These images were primarily taken in and around Jadavpur University, including streets, parks, and lawns. Annotations were created with the Labelme program and are available in color annotation formats. The dataset includes 14 waste categories: plastic containers, plastic bottles, thermocol, metal bottles, plastic cardboard, glass, thermocol plates, plastic, paper, plastic cups, paper cups, aluminum foil, cloth, and nylon. The dataset includes a total of 2,350 object segments.
Other information:
Published in: Mendeley Data
License: http://creativecommons.org/licenses/by/4.0/
See dataset on publisher's website: https://data.mendeley.com/datasets/gr99ny6b8p/1
ATCS is a dataset designed to train deep learning models to volumetrically segment clouds from multi-angle satellite imagery. The dataset consists of spatiotemporally aligned patches of multi-angle polarimetry from the POLDER sensor aboard the PARASOL mission and vertical cloud profiles from the 2B-CLDCLASS product using the cloud profiling radar (CPR) aboard CloudSat.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
## Overview
Fruits Segmentation is a dataset for instance segmentation tasks - it contains Fruits annotations for 590 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
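A hedged sketch of the Roboflow download route follows; the workspace and project identifiers, version number, and export format are placeholders to be replaced with the values shown on the dataset's Roboflow page.

```python
# Sketch: download a Roboflow dataset via the roboflow Python package.
# "your-workspace", "fruits-segmentation", the version number, and the export
# format are assumptions -- use the values from the dataset page instead.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("fruits-segmentation")
dataset = project.version(1).download("coco")  # export format is an assumption
print(dataset.location)  # local folder containing images and annotations
```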
## License
This dataset is available under the [MIT license](https://opensource.org/licenses/MIT).
Data usage policy: https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
License: https://github.com/MIT-LCP/license-and-dua/tree/master/drafts
Chest X-ray (CXR) images are prominent among medical images and are commonly used in emergency diagnosis and treatment of cardiac and respiratory diseases. Though robust solutions are available for medical diagnosis, validation of artificial intelligence (AI) in radiology is still questionable. Segmentation is pivotal in chest radiographs and aids in improving the existing AI-based medical diagnosis process. We provide the CXLSeg dataset: Chest X-ray with Lung Segmentation, a comparatively large dataset of segmented chest X-ray radiographs based on the MIMIC-CXR dataset, a popular CXR image dataset. The dataset contains segmentation results for 243,324 frontal-view images of the MIMIC-CXR dataset and the corresponding masks. Additionally, this dataset can be utilized for computer-vision-related deep learning tasks such as medical image classification, semantic segmentation, and medical report generation. Models using segmented images yield better results, since only the features related to the important areas of the image are focused on. Thus, images from this dataset can be fed into any visual feature extraction process associated with the original MIMIC-CXR dataset and enhance the results of published or novel investigations. Furthermore, the masks provided by this dataset can be used to train segmentation models when combined with the MIMIC-CXR-JPG dataset. The SA-UNet model achieved a 96.80% Dice similarity coefficient and 91.97% IoU for lung segmentation using CXLSeg.
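For reference, a minimal sketch of the two reported metrics computed on binary lung masks; the function names and epsilon smoothing are my own choices, not the evaluation code used for CXLSeg.

```python
# Dice and IoU on binary masks (NumPy arrays of 0/1).
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |A ∩ B| / |A ∪ B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((intersection + eps) / (union + eps))
```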
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Commonly Used Objects Segmentation Dataset serves the e-commerce and visual entertainment industries with a diverse set of internet-collected images, with resolutions ranging from 800 × 600 to 4160 × 3120. The data covers a wide variety of everyday scenes and objects, including people, animals, and furniture, annotated for segmentation.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The Liver Tumor Segmentation Benchmark (LiTS) dataset contains 130 CT scans of patients with liver cancer. This dataset includes 2D slices from 3D CT scans with masks for liver, tumor, bone, arteries, and kidneys.
This dataset facilitates slice-based segmentation, which produces more accurate results (in most cases) than 3D segmentation.
Reference: https://doi.org/10.1016/j.media.2022.102680
This dataset contains the slices from the LiTS dataset, named in the format Volume-{VolumeNumber}-{SliceNumber}.png.
Both the image and the mask files have the same naming convention.
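A small sketch, under assumed folder names, of how image and mask files can be paired by that shared naming convention:

```python
# Pair LiTS slice images with their masks; the two folders are assumptions,
# but both use the "Volume-{VolumeNumber}-{SliceNumber}.png" naming scheme.
from pathlib import Path

IMAGES_DIR = Path("lits_slices/images")  # assumed layout
MASKS_DIR = Path("lits_slices/masks")    # assumed layout

pairs = []
for image_path in sorted(IMAGES_DIR.glob("Volume-*-*.png")):
    mask_path = MASKS_DIR / image_path.name  # same file name in both folders
    if mask_path.exists():
        pairs.append((image_path, mask_path))

print(f"Found {len(pairs)} image/mask pairs")
```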
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Road Lane Instance Segmentation Dataset contains high-resolution dashcam images with pixel-perfect annotations of lane markings such as solid lines, dotted lines, double lines, divider lines, and road sign lines. It is designed for autonomous driving, ADAS, and computer vision research.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Road Scene Semantic Segmentation Dataset is specifically designed for autonomous driving applications, featuring a collection of internet-collected images with a standard resolution of 1920 x 1080 pixels. This dataset is focused on semantic segmentation, aiming to accurately segment various elements of road scenes such as the sky, buildings, lane lines, pedestrians, and more, to support the development of advanced driver-assistance systems (ADAS) and autonomous vehicle technologies.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Explore the Walkway Segmentation Dataset, vital for urban planning AI, pedestrian analysis, and smart city infrastructure insights.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Drivable Area Segmentation Dataset is meticulously crafted to enhance the capabilities of AI in navigating autonomous vehicles through diverse driving environments. It features a wide array of high-resolution images, with resolutions ranging from 1600 x 1200 to 2592 x 1944 pixels, capturing various pavement types such as bitumen, concrete, gravel, earth, snow, and ice. This dataset is vital for training AI models to differentiate between drivable and non-drivable areas, a fundamental aspect of autonomous driving. By providing detailed semantic and binary segmentation, it aims to improve the safety and efficiency of autonomous vehicles, ensuring they can adapt to different road conditions and environments encountered in real-world scenarios.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
Hair Loss Segmentation Dataset - 1,080 images
The dataset comprises 1,080 images of 540 women with alopecia, featuring top-view scalp images paired with segmentation masks. Each image is annotated with precise segmentation masks, enabling analysis of hair follicles, hair density, and baldness patterns.
Dataset characteristics:
Description: Photos of women with varying degrees of hair loss for segmentation tasks
Data… See the full description on the dataset page: https://huggingface.co/datasets/ud-medical/hair-loss-segmentation-dataset.
This dataset is part of the Culicidaelab project, an open-source system for mosquito research and analysis, which includes the following components:
Data:
Base diversity dataset (46 species, 3,139 images) under a CC-BY-SA-4.0 license. Specialized derivatives: classification, detection, and segmentation datasets, also under CC-BY-SA-4.0 licenses.
Models:
Top-1 models (see reports), used as defaults by the culicidaelab library: classification (Apache 2.0), detection (AGPL-3.0), segmentation (Apache 2.0). Top-5… See the full description on the dataset page: https://huggingface.co/datasets/iloncka/mosquito-species-segmentation-dataset.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
The HaN-Seg: Head and Neck Organ-at-Risk CT & MR Segmentation Dataset is a publicly available dataset of anonymized head and neck (HaN) images of 42 patients that underwent both CT and T1-weighted MR imaging for the purpose of image-guided radiotherapy planning. In addition, the dataset also contains reference segmentations of 30 organs-at-risk (OARs) for CT images in the form of binary segmentation masks, which were obtained by curating manual pixel-wise expert image annotations. A full description of the HaN-Seg dataset can be found in:
G. Podobnik, P. Strojan, P. Peterlin, B. Ibragimov, T. Vrtovec, "HaN-Seg: The head and neck organ-at-risk CT & MR segmentation dataset", Medical Physics, 2023, doi: 10.1002/mp.16197,
and any research originating from its usage is required to cite this paper.
In parallel with the release of the dataset, the HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge was launched to promote the development of new, and the application of existing, state-of-the-art fully automated techniques for OAR segmentation in the HaN region from CT images that exploit the information of multiple imaging modalities, in this case CT and MR images. The task of the HaN-Seg challenge is to automatically segment up to 30 OARs in the HaN region from CT images in the devised test set, consisting of 14 CT and MR images of the same patients, given the availability of the training set (i.e. the herein publicly available HaN-Seg dataset), consisting of 42 CT and MR images of the same patients with reference 3D OAR binary segmentation masks for CT images.
Please find below a list of relevant publications that address: (1) the assessment of inter-observer and inter-modality variability in OAR contouring, (2) results of the HaN-Seg challenge, (3) development of our multimodal segmentation model, and (4) development of MR-to-CT image-to-image translation using diffusion models:
The dataset contains medical images and corresponding labels.
The data is a combination of DRIVE, HRF, CHASE DB1, and STARE. The data is divided into 3 folders and can be used for training a semantic segmentation model.
Training - contains the most images and masks from all the datasets, to be used for training.
Test - contains fewer images and masks from all the datasets, to be used for testing.
Unlabeled Test - contains only images, to be used for inference if needed; no ground truth is available.