License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Wine Label Segmentation is a dataset for instance segmentation tasks - it contains Wine Labels annotations for 4,010 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Coffee Segmentation Labels is a dataset for instance segmentation tasks - it contains Coffee Labels annotations for 1,159 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Introduction
This dataset explores object detection and segmentation with a specific focus on applications in agriculture. The primary objective is to employ YOLOv8 and SAM to develop robust models for detecting grape bunches.
Dataset Description
The dataset comprises four trained models built on the YOLOv8 architecture. It includes two single-class models, one using object detection and the other instance segmentation for grape detection, plus two multi-class models capable of detecting different grape varietals. All models were trained using the large model from the Ultralytics repository (https://github.com/ultralytics/ultralytics).
The dataset encompasses four grape varietals:
- Pinot Noir: 102 images and labels
- Chardonnay: 39 images and labels from the author, plus 47 from thsant
- Sauvignon Blanc: 42 images and labels
- Pinot Gris: 111 images and labels
Total used for training: 341
Note that the training of the segmentation models used a total of 20 images from each for a total of 100.
Datasets Used for Training
For the dataset (e.g., train/test/val folders) used to train the multi-class object detection model, see the following zip file and notebook:
To build a custom train-dataset please follow the instructions in the notebook: https://www.kaggle.com/code/nicolaasregnier/buildtraindataset/
The labels used for training the multi-class instance segmentation model are under the SAMPreds folder.
Data Sources
The dataset incorporates two primary data sources. The first is a collection of images captured using an iPad Air 2 RGB camera, with a resolution of 3226x2449 pixels (roughly 8 megapixels). The second is contributed by GitHub user thsant, whose project is available at https://github.com/thsant/wgisd/tree/master.
To label the data, a base model from a previous dataset was utilized, and the annotation process was carried out using LabelImg (https://github.com/heartexlabs/labelImg). It is important to note that some annotations from thsant's dataset required modifications for completeness.
Implementation Steps
Data preparation used classes and functions from the "my_SAM" (https://github.com/regs08/my_SAM) and "KaggleUtils" (https://github.com/regs08/KaggleUtils) repositories to build the training sets and apply SAM.
For model training, the YOLOv8 architecture with default hyperparameters was employed. The object detection models underwent 50 epochs of training, while the instance segmentation models were trained for 75 epochs.
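As a rough illustration of this setup, here is a minimal sketch using the Ultralytics Python API with the large YOLOv8 checkpoints and default hyperparameters; the dataset YAML file names are hypothetical placeholders, not files shipped with this dataset:

```python
# Minimal sketch: YOLOv8-large training as described above.
# "grapes_detect.yaml" and "grapes_seg.yaml" are hypothetical dataset configs.
from ultralytics import YOLO

# Object detection model, default hyperparameters, 50 epochs.
det_model = YOLO("yolov8l.pt")
det_model.train(data="grapes_detect.yaml", epochs=50)

# Instance segmentation model, 75 epochs.
seg_model = YOLO("yolov8l-seg.pt")
seg_model.train(data="grapes_seg.yaml", epochs=75)
```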
Segment Anything (SAM) from https://segment-anything.com/ was applied to the bbox-labeled data to generate images and corresponding masks for the instance segmentation models. No further editing of the images occurred after applying SAM.
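A minimal sketch of that SAM step, assuming the segment-anything package and a downloaded ViT-H checkpoint (file and image names are hypothetical):

```python
# Prompt SAM with an existing bounding-box label to obtain a pixel-level mask.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # hypothetical path
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("grapes.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One XYXY pixel-coordinate box per grape bunch, taken from the detection labels.
box = np.array([100, 150, 400, 480])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
# masks[0] is a boolean HxW array usable as an instance-segmentation label.
```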
Evaluation and Inference
The evaluation metric used was mean average precision (mAP). The following mAP values were obtained:
- Single-class object detection: mAP50 0.85449, mAP50-95 0.56177
- Multi-class object detection: mAP50 0.85336, mAP50-95 0.56316
- Single-class instance segmentation: mAP50 and mAP50-95 values not provided
- Multi-class instance segmentation: mAP50 0.89436, mAP50-95 0.62785
For more comprehensive metrics, please refer to the results folder corresponding to the model of interest.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Annotation Label is a dataset for instance segmentation tasks - it contains Gun annotations for 968 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
TimberVision is a dataset and framework for tree-trunk detection and tracking based on RGB images. It combines the advantages of oriented object detection and instance segmentation for optimizing robustness and efficiency, as described in the corresponding paper presented at WACV 2025. This repository contains images and annotations of the dataset as well as associated files. Source code, models, configuration files and further documentation can be found on our GitHub page.
Data Structure
The repository provides the following subdirectories:
images: all images included in the TimberVision dataset
labels: annotations corresponding to each image in YOLOv8 instance-segmentation format (see the format example after this list)
labels_eval: additional annotations
mot: ground-truth annotations for multi-object-tracking evaluation in custom format
timberseg: custom annotations for selected images from the TimberSeg dataset
videos: complete video files used for evaluating multi-object-tracking (annotated keyframes sampled from each file are included in the images and labels directories)
scene_parameters.csv: annotations of four scene parameters for each image describing trunk properties and context (see the paper for details)
train/val/test.txt: original split files used for training, validation and testing of oriented-object-detection and instance-segmentation models with YOLOv8
sources.md: references and licenses for images used in the open-source subset
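For reference, each line of a YOLOv8 instance-segmentation label file encodes one object as a class index followed by the normalized x/y coordinates of its polygon outline; the line below is a hypothetical single-trunk example, not taken from the dataset:

```
0 0.412 0.130 0.455 0.162 0.471 0.240 0.430 0.301 0.398 0.215
```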
Subsets
TimberVision consists of multiple subsets for different application scenarios. To identify them, file names of images and annotations include the following prefixes:
tvc: core dataset recorded in forests and other outdoor locations
tvh: images depicting harvesting scenarios in forests with visible machinery
tvl: images depicting loading scenarios in more structured environments with visible machinery
tvo: a small set of third-party open-source images for evaluating generalization
tvt: keyframes extracted from videos at 2 fps for tracking evaluation
Citing
If you use the TimberVision dataset for your research, please cite the original paper: Steininger, D., Simon, J., Trondl, A., Murschitz, M., 2025. TimberVision: A Multi-Task Dataset and Framework for Log-Component Segmentation and Tracking in Autonomous Forestry Operations. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV).
License: Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
AutoNaVIT is a meticulously developed dataset designed to accelerate research in autonomous navigation, semantic scene understanding, and object segmentation through deep learning. This release includes only the annotation labels in XML format, aligned with high-resolution frames extracted from a controlled driving sequence at Vellore Institute of Technology – Chennai Campus (VIT-C). The corresponding images will be included in Version 2 of the dataset.
Class Annotations
The dataset features carefully annotated bounding boxes for the following three essential classes relevant to real-time navigation and path planning in autonomous vehicles:
Kerb – 1,377 instances
Obstacle – 258 instances
Path – 532 instances
All annotations were produced using Roboflow with human-verified precision, ensuring consistent, high-quality data that supports robust model development for urban and semi-urban scenarios.
Data Capture Specifications
The source video was captured using a Sony IMX890 sensor under stable daylight lighting. Below are the capture parameters:
Sensor Size: 1/1.56", 50 MP
Lens: 6P optical configuration
Aperture: ƒ/1.8
Focal Length: 24mm equivalent
Pixel Size: 1.0 µm
Features: Optical Image Stabilization (OIS), PDAF autofocus
Video Duration: 4 minutes 11 seconds
Frame Rate: 2 FPS
Total Annotated Frames: 504
Format Compatibility and Model Support
AutoNaVIT annotations are provided in Pascal VOC-compatible XML format, making them directly usable with models that support the Pascal VOC standard. The dataset is immediately compatible with:
Pascal VOC
As XML is a structured, extensible format, these annotations can be easily adapted for use with additional object detection frameworks that support XML-based label schemas.
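For instance, a minimal Python sketch for reading one annotation file with the standard library, assuming the usual Pascal VOC tag layout (the file name is hypothetical):

```python
import xml.etree.ElementTree as ET

root = ET.parse("frame_0001.xml").getroot()  # hypothetical file name
for obj in root.iter("object"):
    name = obj.findtext("name")  # e.g. "Kerb", "Obstacle", or "Path"
    box = obj.find("bndbox")
    xmin = int(float(box.findtext("xmin")))
    ymin = int(float(box.findtext("ymin")))
    xmax = int(float(box.findtext("xmax")))
    ymax = int(float(box.findtext("ymax")))
    print(name, xmin, ymin, xmax, ymax)
```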
Benchmark Results
To assess dataset utility, a YOLOv8 segmentation model was trained on the full dataset (including images). The model achieved the following results:
Mean Average Precision (mAP): 96.5%
Precision: 92.2%
Recall: 94.4%
These metrics demonstrate the dataset’s effectiveness in training models for autonomous vehicle perception and obstacle detection.
Disclaimer and Attribution Requirement
By downloading or using this dataset, users agree to the terms outlined in the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0):
This dataset is available solely for academic and non-commercial research purposes.
Proper attribution must be provided as follows: “Dataset courtesy of Vellore Institute of Technology – Chennai Campus.” This citation must appear in all research papers, presentations, or any work derived from this dataset.
Redistribution, public hosting, commercial use, or modification is prohibited without prior written permission from VIT-C.
Use of this dataset implies acceptance of these terms. All rights not explicitly granted are retained by VIT-C.
This dataset contains 810 images of 12 different classes of food. It covers foods found generically across the globe, such as pizzas, burgers, and fries, along with items geographically specific to India, including idli, vada, and chapathi. So that a YOLO model can also recognize very generic items like fruits and common ingredients, the dataset includes apples, bananas, rice, tomatoes, and more. The dataset was created using Roboflow's dataset creator on the Roboflow website, and the data was augmented using Roboflow's augmentation methods, such as 90-degree flips and different saturation ranges. The dataset can be used with both YOLOv5 and YOLOv8.
License: MIT License, https://opensource.org/licenses/MIT
License information was derived automatically
## Overview
Auto Label is a dataset for instance segmentation tasks - it contains Buildings 4jY2 annotations for 7,839 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [MIT license](https://opensource.org/licenses/MIT).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
AutoNaVIT is a meticulously curated dataset developed to assist research in autonomous navigation, scene understanding, and deep learning-based object segmentation. This release contains only the annotation labels in TXT format corresponding to high-resolution frames extracted from a recorded driving sequence at Vellore Institute of Technology – Chennai Campus (VIT-C). The corresponding images will be made available in Version 2 of the dataset soon.
The dataset features manually annotated bounding boxes and labels for three essential classes critical for autonomous vehicle navigation:
Kerb – 1,377 instances
Obstacle – 258 instances
Path – 532 instances
All annotations were created using Roboflow, ensuring high fidelity and consistency, which is vital for real-world autonomous driving applications in both urban and semi-urban environments.
Data Capture Specifications
Source imagery was recorded using a Sony IMX890 sensor with the following specifications:
Sensor Size: 1/1.56", 50 MP
Lens: 6P, ƒ/1.8, 24mm equivalent, 1.0 µm pixels
Features: OIS (Optical Image Stabilization), PDAF autofocus
Video Duration: 4 min 11 sec
Frame Rate: 2 FPS
Total Annotated Frames: 504
Format Compatibility and Model Support
AutoNaVIT annotations are provided in standard TXT format, enabling direct compatibility with the following 13 models:
yolokeras
yolov4pytorch
darknet
yolov5-obb
yolov8-obb
imt-yolov6
yolov4scaled
yolov5pytorch
yolov7pytorch
yolov8
yolov9
yolov11
yolov12
As the dataset adheres to standard YOLO TXT annotations, it can easily be adapted for other models or frameworks that support TXT-based annotations.
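A minimal parsing sketch under the usual YOLO TXT convention, where each line holds "class x_center y_center width height" with values normalized to [0, 1]; the class-index order below is an assumption for illustration:

```python
from pathlib import Path

CLASSES = ["Kerb", "Obstacle", "Path"]  # assumed index order

def read_yolo_labels(txt_path, img_w, img_h):
    boxes = []
    for line in Path(txt_path).read_text().splitlines():
        cls, xc, yc, w, h = line.split()
        xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
        # Convert normalized center/size to pixel corner coordinates.
        x0, y0 = (xc - w / 2) * img_w, (yc - h / 2) * img_h
        x1, y1 = (xc + w / 2) * img_w, (yc + h / 2) * img_h
        boxes.append((CLASSES[int(cls)], x0, y0, x1, y1))
    return boxes
```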
Benchmark Results
To evaluate the dataset’s performance, a YOLOv8-based segmentation model was trained on the complete dataset (images + annotations). The model achieved:
Mean Average Precision (mAP): 96.5%
Precision: 92.2%
Recall: 94.4%
These results confirm the dataset's high utility and reliability in training segmentation models for autonomous vehicle perception systems.
Disclaimer and Attribution Requirement
By accessing or using this dataset, users agree to the terms outlined under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0):
Usage is permitted solely for non-commercial academic and research purposes.
Proper attribution must be given, stating: “Dataset courtesy of Vellore Institute of Technology – Chennai Campus.” This acknowledgment must be included in all forms of publication, presentation, or dissemination of work utilizing this dataset.
Redistribution, commercial use, modification, or public hosting of the dataset is prohibited without explicit written permission from VIT-C.
Use of this dataset implies acceptance of these terms. All rights not explicitly granted are reserved by VIT-C.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is derived from the [Cell Counting v5 dataset on Roboflow](https://universe.roboflow.com/cell-counting-hapu2/cell-counting-so7h7).
The original dataset was provided in YOLOv8 object detection format.
We created binary masks suitable for UNet-based semantic segmentation tasks.
Additionally, we generated augmented images to increase dataset variability.
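A minimal sketch of the mask-creation step, assuming the YOLO labels are detection bounding boxes (class x_center y_center width height, normalized) rasterized as filled rectangles; the exact procedure used for this dataset may differ:

```python
import numpy as np

def yolo_boxes_to_mask(label_path, img_h, img_w):
    """Rasterize YOLO-format boxes into a single binary (0/255) mask."""
    mask = np.zeros((img_h, img_w), dtype=np.uint8)
    with open(label_path) as f:
        for line in f:
            _, xc, yc, w, h = map(float, line.split())
            x0, x1 = int((xc - w / 2) * img_w), int((xc + w / 2) * img_w)
            y0, y1 = int((yc - h / 2) * img_h), int((yc + h / 2) * img_h)
            mask[max(y0, 0):y1, max(x0, 0):x1] = 255  # foreground = cell
    return mask
```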
Train/Valid/Test Splits
Each split contains:
- images/: source images
- labels/: YOLO annotation files (kept for reference)
- masks_binary/: binary masks for semantic segmentation

Augmented Images
aug_inference_only/images/: each of the 35 original images was augmented with 3 additional variations, resulting in 105 augmented images.
Augmentation methods include:
- Random rotation (−90° to 90°)
- Flipping (horizontal, vertical, both)
- Shifting and scaling
- Brightness/contrast adjustment
- Gaussian noise injection
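A sketch of an equivalent Albumentations pipeline covering the methods listed above; the exact parameters used to build the augmented set are not published, so these values are illustrative:

```python
import albumentations as A

augment = A.Compose([
    A.Rotate(limit=90, p=0.5),   # random rotation in [-90, 90] degrees
    A.HorizontalFlip(p=0.5),     # combined with the next line, this yields
    A.VerticalFlip(p=0.5),       # horizontal, vertical, or both flips
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.2, rotate_limit=0, p=0.5),
    A.RandomBrightnessContrast(p=0.5),
    A.GaussNoise(p=0.5),
])

# Apply jointly to an image and its mask so they stay aligned:
# out = augment(image=image, mask=mask)
# image_aug, mask_aug = out["image"], out["mask"]
```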
CC BY 4.0 – This dataset can be shared and adapted with appropriate attribution.
License: MIT License, https://opensource.org/licenses/MIT
License information was derived automatically
Precious Gemstone Identification
Description: This comprehensive dataset comprises annotated images of a diverse range of precious gemstones, meticulously curated for gemstone identification tasks. With 87 gemstone classes, including unique varieties such as Chalcedony Blue, Amber, Aventurine Yellow, Dumortierite, Pearl, Aventurine Green, and many others, this dataset serves as a valuable resource for training and evaluating machine learning models in gemstone recognition.
Gemstone Variety: The dataset encompasses a wide spectrum of precious gemstones, ranging from well-known varieties like Emerald, Ruby, Sapphire, and Diamond to lesser-known gems such as Benitoite, Larimar, and Sphene.
Dataset Split:
- Train Set: 92% (46,404 images)
- Validation Set: 4% (1,932 images)
- Test Set: 4% (1,932 images)
Preprocessing: Images in the dataset have been preprocessed to ensure consistency and quality.
Augmentations: To enhance model robustness and generalization, each training example has been augmented with various transformations.
File Formats Available:
Disclaimer:
The images included in this dataset were sourced from various online platforms, primarily the minerals.net and www.rasavgems.com websites, as well as other online datasets. We have curated and annotated these images for the purpose of gemstone identification and made them available in different formats. We do not claim ownership of the original images; any trademarks, logos, or copyrighted materials belong to their respective owners.
Researchers, enthusiasts and developers interested in gemstone identification, machine learning, and computer vision applications will find this dataset invaluable for training and benchmarking gemstone recognition algorithms.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Experimental data for the paper "Hierarchical Deep Learning Framework for Automated Marine Vegetation and Fauna Analysis Using ROV Video Data." This dataset supports the study by providing resources essential for reproducing and validating the research findings.

Dataset Contents and Structure:

Hierarchical Model Weights:
- .pth files containing trained weights for all alpha regularization values used in the hierarchical classification models.

MaskRCNN-Segmented Objects:
- .jpg files representing segmented objects detected by the MaskRCNN model.
- Accompanied by maskrcnn-segmented-objects-dataset.parquet, which includes metadata and classifications, with columns:
  - masked_image: path to the segmented image file
  - confidence: confidence score for the prediction
  - predicted_species: predicted species label
  - species: true species label

MaskRCNN Weights:
- Trained MaskRCNN model weights, including the hierarchical CNN models integrated with MaskRCNN in the processing pipeline.

Pre-Trained Models:
- .pt files for all object detectors trained on the Esefjorden Marine Vegetation Segmentation Dataset (EMVSD) in YOLO txt format.

Segmented Object Outputs:
- RT-DETR: rtdetr-segmented-objects/ with rtdetr-segmented-objects-dataset.parquet
- YOLO-SAG: yolosag-segmented-objects/ with yolosag-segmented-objects-dataset.parquet
- YOLOv11: yolov11-segmented-objects/ with yolov11-segmented-objects-dataset.parquet
- YOLOv8: yolov8-segmented-objects/ with yolov8-segmented-objects-dataset.parquet
- YOLOv9: yolov9-segmented-objects/ with yolov9-segmented-objects-dataset.parquet

Usage Instructions:
1. Download and extract the dataset.
2. Use the Python scripts provided in the associated GitHub repository for evaluation and inference: https://github.com/Ci2Lab/FjordVision

Reproducibility:
The dataset includes pre-trained weights, segmentation outputs, and experimental results to facilitate reproducibility. The .parquet files and segmented-object directories follow a standardized format to ensure consistency.

Licensing:
This dataset is released under the CC BY 4.0 license, permitting reuse with proper attribution.

Related Materials:
- GitHub Repository: https://github.com/Ci2Lab/FjordVision
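As a quick-start illustration, a minimal pandas sketch for inspecting one of the segmented-object parquet files, using the column names listed above (requires a parquet engine such as pyarrow):

```python
import pandas as pd

df = pd.read_parquet("maskrcnn-segmented-objects-dataset.parquet")
print(df[["masked_image", "confidence", "predicted_species", "species"]].head())

# Per-species accuracy of the predictions against the true labels.
accuracy = (df["predicted_species"] == df["species"]).groupby(df["species"]).mean()
print(accuracy)
```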
License: Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
AutoNaVIT is a carefully designed dataset intended to advance research in autonomous navigation, semantic scene understanding, and deep learning-based object segmentation. This release includes only the annotation labels in CSV format, corresponding to high-resolution frames extracted from a driving sequence recorded at Vellore Institute of Technology – Chennai Campus (VIT-C). The corresponding images will be provided in Version 2 of the dataset.
The dataset comprises manually annotated bounding boxes for three key classes that are critical for path planning and perception in autonomous vehicle systems:
Kerb – 1,377 instances
Obstacle – 258 instances
Path – 532 instances
All annotations were generated using Roboflow, with precise, human-verified labeling for consistent, high-quality data—essential for training robust models that generalize well to real-world urban and semi-urban driving scenarios.
Data Capture Specifications
The video footage used for annotation was recorded using a Sony IMX890 camera sensor under stable daylight conditions, with the following details:
Sensor Size: 1/1.56", 50 MP
Lens: 6P optical configuration
Aperture: ƒ/1.8
Focal Length: 24mm equivalent
Pixel Size: 1.0 µm
Features: Optical Image Stabilization (OIS), PDAF autofocus
Video Duration: 4 minutes 11 seconds
Frame Rate: 2 FPS
Total Annotated Frames: 504
Format Compatibility and Model Support
AutoNaVIT’s annotations are made available in standard CSV format, enabling direct compatibility with the following three models:
Multiclass
TensorFlow CSV
RetinaNet
Since CSV is a highly adaptable format, the annotations can be easily modified or reformatted to suit other deep learning models or pipelines that support CSV-based label structures.
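For example, a minimal pandas sketch assuming the common TensorFlow CSV column layout (filename, width, height, class, xmin, ymin, xmax, ymax); the file name and exact columns are assumptions, not verified against this release:

```python
import pandas as pd

ann = pd.read_csv("annotations.csv")  # hypothetical file name
print(ann["class"].value_counts())    # expect Kerb, Obstacle, Path

# Group the boxes per frame for downstream use.
for filename, rows in ann.groupby("filename"):
    boxes = rows[["xmin", "ymin", "xmax", "ymax"]].to_numpy()
    labels = rows["class"].tolist()
```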
Benchmark Results
To validate the dataset's effectiveness, a YOLOv8 segmentation model was trained with the full dataset (images + annotations). The resulting performance metrics were:
Mean Average Precision (mAP): 96.5%
Precision: 92.2%
Recall: 94.4%
These metrics confirm the dataset’s value in developing perception systems for autonomous vehicles, particularly for object detection and path segmentation tasks.
Disclaimer and Attribution Requirement
By accessing or using this dataset, users agree to the following terms under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0):
The dataset is available for non-commercial academic and research purposes only.
Proper attribution must be included as: “Dataset courtesy of Vellore Institute of Technology – Chennai Campus.” This citation must appear in all forms of publication, presentation, or dissemination using this dataset.
Redistribution, commercial usage, public hosting, or modification of the dataset is not permitted without explicit written consent from VIT-C.
Use of the dataset indicates acceptance of these conditions. All rights not explicitly granted are reserved by VIT-C.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Rust Labels is a dataset for instance segmentation tasks - it contains Objects annotations for 2,858 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Fig. 1: Diagram of the proposed blueberry fruit phenotyping workflow, involving four stages: data collection, dataset generation, model training, and phenotyping trait extraction. Our mobile platform, equipped with a multi-view imaging system (top, left, and right), scanned the blueberry plants by navigating over crop rows. Building on a fruit/cluster detection dataset, we leveraged a maturity classifier and a segmentation foundation model, SAM, to generate a semantic instance dataset for segmenting immature, semi-mature, and mature fruits. We proposed a lightweight improved YOLOv8 model for fruit cluster detection and blueberry segmentation, enabling plant-scale and cluster-scale phenotyping trait extraction, including yield, maturity, cluster number, and compactness.
Dataset generation:
Fig. 2: Illustration of the proposed automated pixel-wise label generation for immature, semi-mature, and mature blueberry fruits (genotype: Keecrisp). From left to right: (a) bounding-box labels of blueberries from our previous manual detection dataset [27]; (b) three-class box labels (immature: yellow, semi-mature: red, mature: blue) re-classified with a maturity classifier; (c) pixel-wise mask labels of blueberry fruits generated with the Segment Anything Model.
If you find this work or code useful, please cite:
@article{li2025-robotic-blueberry-phenotyping,
title={In-field blueberry fruit phenotyping with a MARS-PhenoBot and customized BerryNet},
author={Li, Zhengkun and Xu, Rui and Li, Changying and Munoz, Patricio and Takeda, Fumiomi and Leme, Bruno},
journal={Computers and Electronics in Agriculture},
volume={232},
pages={110057},
year={2025},
publisher={Elsevier}
}
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Page Number Segmentation is a dataset for instance segmentation tasks - it contains Page annotations for 711 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Labeling Test is a dataset for instance segmentation tasks - it contains Ear annotations for 402 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Compared with histological examination of lung cancer, cytology is less invasive and provides better preservation of complete morphology and detail. However, traditional cytological diagnosis requires an experienced pathologist to evaluate all sections individually under a microscope, which is a time-consuming process with low interobserver consistency. With the development of deep neural networks, the You Only Look Once (YOLO) object-detection model has been recognized for its impressive speed and accuracy. Thus, in this study, we developed a model for intraoperative cytological segmentation of pulmonary lesions based on the YOLOv8 algorithm, which labels each instance by segmenting the image at the pixel level. The model achieved a mean pixel accuracy and mean intersection over union of 0.80 and 0.70, respectively, on the test set. At the image level, the accuracy and area under the receiver operating characteristic curve values for malignant and benign (or normal) lesions were 91.0% and 0.90, respectively. In addition, the model was deemed suitable for diagnosing pleural fluid cytology and bronchoalveolar lavage fluid cytology images. The model predictions were strongly correlated with pathologist diagnoses and the gold standard, indicating the model’s ability to make clinical-level decisions during initial diagnosis. Thus, the proposed method is useful for rapidly localizing lung cancer cells based on microscopic images and outputting image interpretation results.
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
## Overview
Label Real Data is a dataset for instance segmentation tasks - it contains Bags annotations for 318 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC0 1.0 Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
## Overview
Label Data is a dataset for instance segmentation tasks - it contains Objects annotations for 1,955 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.