Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These figures are the graphical results of my Master 2 internship on automatic segmentation using SAM 2 (Segment Anything Model 2), an artificial-intelligence model. The red line represents the best cell line from which anatomical measurements were made.
## Overview
Seven Segment V2 is a dataset for object detection tasks - it contains Number annotations for 742 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Visual comparison of 100 human annotations (labels) compared with Segment Anything Model 2 (SAM2) segmentation.
## Overview
Document Segmentation V2 is a dataset for instance segmentation tasks - it contains Document K3N7 annotations for 1,763 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
https://researchdata.ntu.edu.sg/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.21979/N9/XIDXVT
Few-Shot Segmentation (FSS) aims to learn class-agnostic segmentation on a few classes in order to segment arbitrary classes, but at the risk of overfitting. To address this, some methods use the well-learned knowledge of foundation models (e.g., SAM) to simplify the learning process. Recently, SAM 2 has extended SAM by supporting video segmentation, whose class-agnostic matching ability is useful for FSS. A simple idea is to encode support foreground (FG) features as memory, with which query FG features are matched and fused. Unfortunately, the FG objects in different frames of SAM 2's video data always share the same identity, while those in FSS have different identities, i.e., the matching step is incompatible. Therefore, we design a Pseudo Prompt Generator to encode pseudo query memory, which matches with query features in a compatible way. However, these memories can never be as accurate as the real ones: they are likely to contain an incomplete query FG and some unexpected query background (BG) features, leading to wrong segmentation. Hence, we further design Iterative Memory Refinement to fuse more query FG features into the memory, and devise Support-Calibrated Memory Attention to suppress the unexpected query BG features in the memory. Extensive experiments on PASCAL-5i and COCO-20i validate the effectiveness of our design; e.g., our 1-shot mIoU is 4.2% better than the best baseline.
Doodleverse/Segmentation Zoo Res-UNet models for Aerial/planecam 2-class (water, nowater) segmentation of high-resolution 1024x768 RGB images
These Residual-UNet models have been created using Segmentation Gym*
Image size used by model: 1024 x 768 x 3 pixels
classes:
water
other
File descriptions
For each model, there are 5 files with the same root name:
'.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.
'.h5' weights file: this is the file that was created by the Segmentation Gym* function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function seg_images_in_folder.py. Models may be ensembled.
'_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the config file (described above), which contains the instructions for model training and implementation. The model card file is not used by the program, but it is important metadata, so it should be kept with the other files that collectively make up the model; as such, it is considered part of the model.
'_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py
'.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training, plotting a subset of the data inside the .npz file. It is created by the Segmentation Gym function train_model.py
Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU
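The '_model_history.npz' archive can be inspected with plain NumPy. The sketch below writes a tiny synthetic history file and reads it back the same way you would a real one; the array names 'loss', 'val_loss', and 'mean_iou' are assumptions for illustration, not necessarily the exact keys Segmentation Gym stores.

```python
import numpy as np

# Stand-in for a real '<root>_model_history.npz' produced by train_model.py;
# the array names below are assumptions based on the description above.
np.savez("example_model_history.npz",
         loss=np.array([0.9, 0.5, 0.3]),
         val_loss=np.array([0.95, 0.6, 0.4]),
         mean_iou=np.array([0.4, 0.7, 0.8]))

history = np.load("example_model_history.npz")
print(sorted(history.files))  # names of the arrays stored in the archive

# Epoch with the lowest validation loss (the basis for BEST_MODEL.txt)
best_epoch = int(np.argmin(history["val_loss"]))
```

A real history file will contain per-epoch arrays of whatever losses and metrics were configured for training.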
References
*Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym
Doodleverse/Segmentation Gym Res-UNet models for 2-class (water, other) segmentation of CoastCam runup timestack imagery
This model release is part of the Doodleverse: https://github.com/Doodleverse
These Residual-UNet model data are based on RGB (red, green, and blue) images of coasts and associated labels.
Models have been created using Segmentation Gym* using an as-yet unpublished dataset of images and associated label images. See https://github.com/Doodleverse for more information about how this model was trained, and how to use it for inference
Classes: {0=other, 1=water}
File descriptions
There are two models; v7 has been trained from scratch, and v8 has been fine-tuned using hyperparameter adjustment. For each model, there are 5 files with the same root name:
'.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.
'.h5' weights file: this is the file that was created by the Segmentation Gym* function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function seg_images_in_folder.py. Models may be ensembled.
'_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the config file (described above), which contains the instructions for model training and implementation. The model card file is not used by the program, but it is important metadata, so it should be kept with the other files that collectively make up the model; as such, it is considered part of the model.
'_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py
'.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training, plotting a subset of the data inside the .npz file. It is created by the Segmentation Gym function train_model.py
Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU
References
*Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym
stodoran/elwha-segmentation-v2 dataset hosted on Hugging Face and contributed by the HF Datasets community
## Overview
Bags Segmentation V2 is a dataset for instance segmentation tasks - it contains Bag Vf25 annotations for 300 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Doodleverse/Segmentation Zoo Res-UNet models for 2-class (water, other) segmentation of Sentinel-2 and Landsat-7/8 3-band (RGB) images of coasts.
These Residual-UNet model data are based on RGB (red, green, and blue) images of coasts and associated labels.
Models have been created using Segmentation Gym* using the following dataset**: https://doi.org/10.5281/zenodo.7384242
Classes: {0=other, 1=water}
File descriptions
For each model, there are 5 files with the same root name:
'.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.
'.h5' weights file: this is the file that was created by the Segmentation Gym* function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function seg_images_in_folder.py. Models may be ensembled.
'_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the config file (described above), which contains the instructions for model training and implementation. The model card file is not used by the program, but it is important metadata, so it should be kept with the other files that collectively make up the model; as such, it is considered part of the model.
'_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py
'.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training, plotting a subset of the data inside the .npz file. It is created by the Segmentation Gym function train_model.py
Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU
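Because the '_modelcard.json' file is ordinary JSON, it can be read with the standard library alone. The field names in this sketch are invented stand-ins for illustration; the actual Segmentation Gym model card schema will differ.

```python
import json

# Invented stand-in for a real '<root>_modelcard.json'; field names are
# assumptions, not the actual Segmentation Gym model card schema.
card = {
    "MODEL": {"architecture": "resunet", "classes": ["other", "water"]},
    "DATASET": {"source": "https://doi.org/10.5281/zenodo.7384242"},
}
with open("example_modelcard.json", "w") as f:
    json.dump(card, f, indent=2)

# Reading the metadata back, as you would for a downloaded model card
with open("example_modelcard.json") as f:
    meta = json.load(f)
print(meta["MODEL"]["classes"])
```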
References
*Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym
** Buscombe, D. (2022). Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts (water, other) (v1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7384242
The TotalSegmentator-V2 dataset is a publicly available dataset for 3D medical image segmentation. It contains 1,228 CT scans with annotations for 117 major anatomical structures in whole-body CT (WBCT) images.
Reference field boundaries dataset generated in the paper "FieldSeg: A scalable agricultural field extraction framework based on the Segment Anything Model and 10-m Sentinel-2 imagery".
A hand-annotated field-boundary dataset (2022) covering eight 10x10 km areas across the world is made available. The study areas are located in Argentina, Australia, Brazil, China, South Africa, Spain, USA-California, and USA-Iowa.
This dataset contains two files:
More information on how this dataset was prepared is available in the paper "FieldSeg: A scalable agricultural field extraction framework based on the Segment Anything Model and 10-m Sentinel-2 imagery".
https://creativecommons.org/publicdomain/zero/1.0/
This dataset was created by Kumar Shubham
Released under CC0: Public Domain
Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat RGB, NIR, and SWIR satellite images of coasts (water, other)
Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat 5-band (R+G+B+NIR+SWIR) satellite images of coasts (water, other)
Description
3649 images and 3649 associated labels for semantic segmentation of Sentinel-2 and Landsat 5-band (R+G+B+NIR+SWIR) satellite images of coasts. The 2 classes are 1=water, 0=other. The imagery is a mixture of 10-m Sentinel-2 and 15-m pansharpened Landsat 7, 8, and 9 imagery of various sizes; red, green, blue, near-infrared, and short-wave infrared bands only.
These images and labels could be used within numerous Machine Learning frameworks for image segmentation, but have specifically been made for use with the Doodleverse software package, Segmentation Gym**.
Two data sources have been combined
Dataset 1
Dataset 2
3070 image-label pairs from the Sentinel-2 Water Edges Dataset (SWED)*****, https://openmldata.ukho.gov.uk/, described by Seale et al. (2022)******
A subset of the original SWED imagery (256 x 256 x 12) and labels (256 x 256 x 1) was chosen, based on the criterion that more than 2.5% of the pixels represent water
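The selection criterion above is straightforward to apply to a 1-band label image. A minimal sketch using a synthetic label (assuming 0=other, 1=water, as in the class definitions; the real SWED labels are 256 x 256 x 1 rasters):

```python
import numpy as np

# Synthetic 256x256 1-band label standing in for a real SWED label raster;
# roughly 5% of pixels are set to class 1 (water).
rng = np.random.default_rng(0)
label = (rng.random((256, 256)) < 0.05).astype(np.uint8)

water_fraction = label.mean()   # fraction of pixels labeled water
keep = water_fraction > 0.025   # the >2.5% selection criterion from the text
```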
File descriptions
classes.txt, a file containing the class names
images.zip, a zipped folder containing the 3-band RGB images of varying sizes and extents
labels.zip, a zipped folder containing the 1-band label images
nir.zip, a zipped folder containing the 1-band near-infrared (NIR) images
swir.zip, a zipped folder containing the 1-band shortwave infrared (SWIR) images
overlays.zip, a zipped folder containing a semi-transparent overlay of the color-coded label on the image (red=1=water, blue=0=other)
resized_images.zip, RGB images resized to 512x512x3 pixels
resized_labels.zip, label images resized to 512x512x1 pixels
resized_nir.zip, NIR images resized to 512x512x1 pixels
resized_swir.zip, SWIR images resized to 512x512x1 pixels
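When resizing label images such as those in resized_labels.zip, nearest-neighbor sampling is the natural choice because it preserves integer class IDs (interpolation would invent non-class values along boundaries). A minimal NumPy sketch of that idea; the actual method used to produce these archives is not stated here, so this is an assumption:

```python
import numpy as np

def resize_nearest(label, out_h, out_w):
    """Nearest-neighbor resize of a 2-D integer label image.

    Preserves class IDs exactly, unlike bilinear interpolation."""
    in_h, in_w = label.shape
    rows = np.arange(out_h) * in_h // out_h   # source row for each output row
    cols = np.arange(out_w) * in_w // out_w   # source col for each output col
    return label[np.ix_(rows, cols)]

# Toy 256x256 label (0=other, 1=water) resized to 512x512
label = np.zeros((256, 256), dtype=np.uint8)
label[:, 128:] = 1                            # right half is water
resized = resize_nearest(label, 512, 512)
```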
References
*Doodler: Buscombe, D., Goldstein, E.B., Sherwood, C.R., Bodine, C., Brown, J.A., Favela, J., Fitzpatrick, S., Kranenburg, C.J., Over, J.R., Ritchie, A.C. and Warrick, J.A., 2021. Human-in-the-Loop Segmentation of Earth Surface Imagery. Earth and Space Science, p.e2021EA002085. https://doi.org/10.1029/2021EA002085. See https://github.com/Doodleverse/dash_doodler.
**Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym
***Coast Train data release: Wernette, P.A., Buscombe, D.D., Favela, J., Fitzpatrick, S., and Goldstein E., 2022, Coast Train--Labeled imagery for training and evaluation of data-driven models for image segmentation: U.S. Geological Survey data release, https://doi.org/10.5066/P91NP87I. See https://coasttrain.github.io/CoastTrain/ for more information
****Buscombe, Daniel. (2022). Images and 4-class labels for semantic segmentation of Sentinel-2 and Landsat RGB, NIR, and SWIR satellite images of coasts (water, whitewater, sediment, other) (v1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7344571
*****Seale, C., Redfern, T., Chatfield, P. 2022. Sentinel-2 Water Edges Dataset (SWED) https://openmldata.ukho.gov.uk/
******Seale, C., Redfern, T., Chatfield, P., Luo, C. and Dempsey, K., 2022. Coastline detection in satellite imagery: A deep learning approach on new benchmark data. Remote Sensing of Environment, 278, p.113044.
## Overview
Skittle Segmentation V2 is a dataset for instance segmentation tasks - it contains Objects annotations for 815 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Doodleverse/Segmentation Zoo Res-UNet models for identifying water in Sentinel-2 RGB images of coasts.
Based on SWED*** data
https://openmldata.ukho.gov.uk/
These Residual-UNet model data are based on images of coasts and associated labels. Models have been fitted to the following types of data
1. RGB (3 band): red, green, blue
Classes are: {0: null, 1: water}.
These files are used in conjunction with Segmentation Zoo*
For each model, there are 3 files with the same root name:
1. '.json' config file: this is the file that was used by Segmentation Gym** to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.
2. '.h5' weights file: this is the file that was created by the Segmentation Gym** function `train_model.py`. It contains the trained model's parameter weights. It can be called by the Segmentation Gym** function `seg_images_in_folder.py` or the Segmentation Zoo* function `select_model_and_batch_process_folder.py` to segment a folder of images
3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the `config` file (described above), which contains the instructions for model training and implementation. The model card file is not used by the program, but it is important metadata, so it should be kept with the other files that collectively make up the model; as such, it is considered part of the model
References
* https://github.com/Doodleverse/segmentation_zoo
** https://github.com/Doodleverse/segmentation_gym
*** https://www.sciencedirect.com/science/article/abs/pii/S0034425722001584
We release MarsData-V2
The stream geomorphic assessment (SGA) is a physical assessment completed by geomorphologists to determine the condition and sensitivity of a stream. The SGA Phase 2 Segment Breaks are points that indicate where a Phase 1 SGA reach was "segmented" into smaller Phase 2 segments. These segments are determined in the field and are based on changes in topography, slope, and valley setting that were not found in Phase 1, and on changes in condition found in the field. Wherever a significant change in any of the above was found in the field, a segment break was created.
Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts (water, other)
Description
4088 images and 4088 associated labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts. The 2 classes are 1=water, 0=other. The imagery is a mixture of 10-m Sentinel-2 and 15-m pansharpened Landsat 7, 8, and 9 visible-band imagery of various sizes; red, green, and blue bands only.
These images and labels could be used within numerous Machine Learning frameworks for image segmentation, but have specifically been made for use with the Doodleverse software package, Segmentation Gym**.
Two data sources have been combined
Dataset 1
Dataset 2
File descriptions
References
*Doodler: Buscombe, D., Goldstein, E.B., Sherwood, C.R., Bodine, C., Brown, J.A., Favela, J., Fitzpatrick, S., Kranenburg, C.J., Over, J.R., Ritchie, A.C. and Warrick, J.A., 2021. Human-in-the-Loop Segmentation of Earth Surface Imagery. Earth and Space Science, p.e2021EA002085. https://doi.org/10.1029/2021EA002085. See https://github.com/Doodleverse/dash_doodler.
**Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym
***Coast Train data release: Wernette, P.A., Buscombe, D.D., Favela, J., Fitzpatrick, S., and Goldstein E., 2022, Coast Train--Labeled imagery for training and evaluation of data-driven models for image segmentation: U.S. Geological Survey data release, https://doi.org/10.5066/P91NP87I. See https://coasttrain.github.io/CoastTrain/ for more information
****Buscombe, Daniel, Goldstein, Evan, Bernier, Julie, Bosse, Stephen, Colacicco, Rosa, Corak, Nick, Fitzpatrick, Sharon, del Jesús González Guillén, Anais, Ku, Venus, Paprocki, Julie, Platt, Lindsay, Steele, Bethel, Wright, Kyle, & Yasin, Brandon. (2022). Images and 4-class labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts (water, whitewater, sediment, other) (v1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7335647
*****Seale, C., Redfern, T., Chatfield, P. 2022. Sentinel-2 Water Edges Dataset (SWED) https://openmldata.ukho.gov.uk/
******Seale, C., Redfern, T., Chatfield, P., Luo, C. and Dempsey, K., 2022. Coastline detection in satellite imagery: A deep learning approach on new benchmark data. Remote Sensing of Environment, 278, p.113044.
The Imaging Data Commons (IDC)(https://imaging.datacommons.cancer.gov/) [1] connects researchers with publicly available cancer imaging data, often linked with other types of cancer data. Many of the collections have limited annotations due to the expense and effort required to create these manually. The increased capabilities of AI analysis of radiology images provide an opportunity to augment existing IDC collections with new annotation data. To further this goal, we trained several nnUNet [2] based models for a variety of radiology segmentation tasks from public datasets and used them to generate segmentations for IDC collections.
To validate model performance, roughly 10% of the AI predictions were assigned to a validation set. For this set, a board-certified radiologist graded the quality of each AI prediction on a Likert scale. If they did not 'strongly agree' with the AI output, the reviewer corrected the segmentation.
This record provides the AI segmentations, manually corrected segmentations, and manual scores for the inspected IDC collection images.
Only 10% of the AI-derived annotations provided in this dataset were verified by expert radiologists. More details on model training and annotation are provided in the associated manuscript to ensure transparency and reproducibility.
This work was done in two stages. Versions 1.x of this record are from the first stage; Versions 2.x added additional records. In the Version 1.x collections, a medical student (non-expert) reviewed all the AI predictions and rated them on a 5-point Likert scale; for any AI predictions in the validation set that they did not 'strongly agree' with, the non-expert provided corrected segmentations. No non-expert was involved in the Version 2.x additional records.
Likert Score Definition:
Guidelines for reviewers to grade the quality of AI segmentations.
5 Strongly Agree - Use-as-is (i.e., clinically acceptable, and could be used for treatment without change)
4 Agree - Minor edits that are not necessary. Stylistic differences, but not clinically important. The current segmentation is acceptable
3 Neither agree nor disagree - Minor edits that are necessary. Minor edits are those that the reviewer judges can be made in less time than starting from scratch, or that are expected to have minimal effect on treatment outcome
2 Disagree - Major edits. This category indicates that edits are required to ensure correctness, and are sufficiently significant that the user would prefer to start from scratch
1 Strongly disagree - Unusable. This category indicates that the quality of the automatic annotations is so bad that they are unusable.
Zip File Folder Structure
Each zip file in the collection correlates to a specific segmentation task. The common folder structure is
ai-segmentations-dcm This directory contains the AI model predictions in DICOM-SEG format for all analyzed IDC collection files
qa-segmentations-dcm This directory contains manually corrected segmentation files, based on the AI predictions, in DICOM-SEG format. Only a fraction, ~10%, of the AI predictions were corrected. Corrections were performed by radiologists (rad*) and non-experts (ne*)
qa-results.csv CSV file linking the study/series UIDs with the AI segmentation file, the radiologist-corrected segmentation file, and the radiologist's ratings of AI performance.
qa-results.csv Columns
The qa-results.csv file contains metadata about the segmentations, their related IDC case image, as well as the Likert ratings and comments by the reviewers.
- Collection: the name of the IDC collection for this case.
- PatientID: the PatientID in the DICOM metadata of the scan; also called Case ID in the IDC.
- StudyInstanceUID: the StudyInstanceUID in the DICOM metadata of the scan.
- SeriesInstanceUID: the SeriesInstanceUID in the DICOM metadata of the scan.
- Validation: true/false, whether this scan was manually reviewed.
- Reviewer: coded ID of the reviewer. Radiologist IDs start with 'rad'; non-expert IDs start with 'ne'.
- AimiProjectYear: 2023 or 2024. This work was split over two years; the main methodological difference is that in 2023 a non-expert also reviewed the AI output, while no non-expert was involved in 2024.
- AISegmentation: the filename of the AI prediction file in DICOM-SEG format. This file is in the ai-segmentations-dcm folder.
- CorrectedSegmentation: the filename of the reviewer-corrected prediction file in DICOM-SEG format. This file is in the qa-segmentations-dcm folder. If the reviewer strongly agreed with the AI for all segments, they did not provide any correction file.
- "Was the AI predicted ROIs accurate?": appears once per task for images from AimiProjectYear 2023. The reviewer rates segmentation quality on a Likert scale; in tasks that have multiple labels in the output, a single rating covers them all.
- "Was the AI predicted {SEGMENT_NAME} label accurate?": appears once for each segment in the task for images from AimiProjectYear 2024. The reviewer rates each segment's quality on a Likert scale.
- "Do you have any comments about the AI predicted ROIs?": open-ended question for the reviewer.
- "Do you have any comments about the findings from the study scans?": open-ended question for the reviewer.
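A qa-results.csv file following the column layout described above can be filtered with the standard csv module to recover, for example, only the manually reviewed validation set. The rows below are invented stand-ins for illustration; real files will contain actual UIDs and filenames.

```python
import csv
import io

# Invented two-row stand-in for qa-results.csv; the column names follow the
# description above, but the values are fabricated for illustration only.
raw = """Collection,PatientID,StudyInstanceUID,SeriesInstanceUID,Validation,Reviewer,AimiProjectYear,AISegmentation,CorrectedSegmentation
TCGA-LIHC,TCGA-XX-0001,1.2.3,1.2.3.4,true,rad1,2024,seg_a.dcm,
TCGA-LIHC,TCGA-XX-0002,1.2.5,1.2.5.6,false,,2024,seg_b.dcm,
"""
rows = list(csv.DictReader(io.StringIO(raw)))

# Keep only scans that were manually reviewed (the ~10% validation set)
validated = [r for r in rows if r["Validation"] == "true"]
```

For a file on disk, replace `io.StringIO(raw)` with an open file handle.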
File Overview
brain-mr.zip
Segment Description: brain tumor regions: necrosis, edema, enhancing
IDC Collection: UPENN-GBM
Links: model weights, github
breast-fdg-pet-ct.zip
Segment Description: FDG-avid lesions in the breast from FDG PET/CT scans
IDC Collection: QIN-Breast
Links: model weights, github
breast-mr.zip
Segment Description: Breast, Fibroglandular tissue, structural tumor
IDC Collection: duke-breast-cancer-mri
Links: model weights, github
kidney-ct.zip
Segment Description: Kidney, Tumor, and Cysts from contrast enhanced CT scans
IDC Collections: TCGA-KIRC, TCGA-KIRP, TCGA-KICH, CPTAC-CCRCC
Links: model weights, github
liver-ct.zip
Segment Description: Liver from CT scans
IDC Collection: TCGA-LIHC
Links: model weights, github
liver2-ct.zip
Segment Description: Liver and Lesions from CT scans
IDC Collection: HCC-TACE-SEG, COLORECTAL-LIVER-METASTASES
Links: model weights, github
liver-mr.zip
Segment Description: Liver from T1 MRI scans
IDC Collection: TCGA-LIHC
Links: model weights, github
lung-ct.zip
Segment Description: Lung and Nodules (3mm-30mm) from CT scans
IDC Collections:
Anti-PD-1-Lung
LUNG-PET-CT-Dx
NSCLC Radiogenomics
RIDER Lung PET-CT
TCGA-LUAD
TCGA-LUSC
Links: model weights 1, model weights 2, github
lung2-ct.zip
Improved model version
Segment Description: Lung and Nodules (3mm-30mm) from CT scans
IDC Collections:
QIN-LUNG-CT, SPIE-AAPM Lung CT Challenge
Links: model weights, github
lung-fdg-pet-ct.zip
Segment Description: Lungs and FDG-avid lesions in the lung from FDG PET/CT scans
IDC Collections:
ACRIN-NSCLC-FDG-PET
Anti-PD-1-Lung
LUNG-PET-CT-Dx
NSCLC Radiogenomics
RIDER Lung PET-CT
TCGA-LUAD
TCGA-LUSC
Links: model weights, github
prostate-mr.zip
Segment Description: Prostate from T2 MRI scans
IDC Collection: ProstateX, Prostate-MRI-US-Biopsy
Links: model weights, github
Changelog
2.0.2 - Fix the brain-mr segmentations to be transformed correctly
2.0.1 - added AIMI 2024 radiologist comments to qa-results.csv
2.0.0 - added AIMI 2024 segmentations
1.X - AIMI 2023 segmentations and reviewer scores