100+ datasets found
  1. Results of AI segmentations and cell files research Part.2

    • figshare.com
    png
    Updated May 21, 2025
    Cite
    Killian Verlingue (2025). Results of AI segmentations and cell files research Part.2 [Dataset]. http://doi.org/10.6084/m9.figshare.29118605.v1
    Explore at:
    png (available download formats)
    Dataset updated
    May 21, 2025
    Dataset provided by
    figshare (http://figshare.com/)
    Authors
    Killian Verlingue
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These figures are the graphical results of my Master 2 internship on automatic segmentation with SAM2 (Segment Anything Model 2), an artificial-intelligence model. The red line represents the best cell file, from which anatomical measurements were made.
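    For readers who want to reproduce this kind of segmentation, below is a minimal sketch of prompting SAM2 on a single image with the facebookresearch/sam2 package (an illustration only; the checkpoint path, config name, image filename, and point prompt are placeholder assumptions, not details from this dataset):

```python
# Minimal SAM2 point-prompt sketch (assumes the facebookresearch/sam2
# package and a downloaded checkpoint; all paths and the prompt
# coordinates are placeholders).
import numpy as np
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

checkpoint = "checkpoints/sam2_hiera_large.pt"  # placeholder path
model_cfg = "sam2_hiera_l.yaml"                 # placeholder config

predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))
image = np.array(Image.open("section.png").convert("RGB"))  # placeholder image
predictor.set_image(image)

# One foreground click (label 1 = foreground); SAM2 returns candidate masks.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 384]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # keep the highest-scoring candidate
print(best_mask.shape, scores)
```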

  2. Seven Segment V2 Dataset

    • universe.roboflow.com
    zip
    Updated Jul 26, 2024
    + more versions
    Cite
    E (2024). Seven Segment V2 Dataset [Dataset]. https://universe.roboflow.com/e-ug7c7/seven-segment-v2
    Explore at:
    zip (available download formats)
    Dataset updated
    Jul 26, 2024
    Dataset authored and provided by
    E
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Number Bounding Boxes
    Description

    Seven Segment V2

    ## Overview
    
    Seven Segment V2 is a dataset for object detection tasks - it contains Number annotations for 742 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
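    One possible programmatic route, sketched below, uses the roboflow Python package to fetch this Universe dataset (the API key is a placeholder, and the version number and export format are assumptions; adjust to what the project page offers):

```python
# Sketch: download the dataset with the roboflow package.
# The API key is a placeholder; version number and export format may differ.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("e-ug7c7").project("seven-segment-v2")
dataset = project.version(1).download("yolov8")  # assumed version/format
print(dataset.location)  # local folder with images and annotations
```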
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  3. SAM2 segmentation test and comparison with manual segmentation

    • figshare.com
    png
    Updated May 23, 2025
    Cite
    Killian Verlingue (2025). SAM2 segmentation test and comparison with manual segmentation [Dataset]. http://doi.org/10.6084/m9.figshare.29136194.v1
    Explore at:
    png (available download formats)
    Dataset updated
    May 23, 2025
    Dataset provided by
    figshare
    Authors
    Killian Verlingue
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Visual comparison of 100 human annotations (labels) with Segment Anything Model 2 (SAM2) segmentations.

  4. Document Segmentation V2 Dataset

    • universe.roboflow.com
    zip
    Updated Mar 12, 2024
    Cite
    Lung (2024). Document Segmentation V2 Dataset [Dataset]. https://universe.roboflow.com/lung-x8el1/document-segmentation-v2-gt86h
    Explore at:
    zip (available download formats)
    Dataset updated
    Mar 12, 2024
    Dataset authored and provided by
    Lung
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Document K3N7 Polygons
    Description

    Document Segmentation V2

    ## Overview
    
    Document Segmentation V2 is a dataset for instance segmentation tasks - it contains Document K3N7 annotations for 1,763 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  5. Data from: Unlocking the Power of SAM 2 for Few-Shot Segmentation

    • researchdata.ntu.edu.sg
    Updated May 22, 2025
    Cite
    DR-NTU (Data) (2025). Unlocking the Power of SAM 2 for Few-Shot Segmentation [Dataset]. http://doi.org/10.21979/N9/XIDXVT
    Explore at:
    Dataset updated
    May 22, 2025
    Dataset provided by
    DR-NTU (Data)
    License

    https://researchdata.ntu.edu.sg/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.21979/N9/XIDXVT

    Dataset funded by
    RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative
    Description

    Few-Shot Segmentation (FSS) aims to learn class-agnostic segmentation from a few classes in order to segment arbitrary classes, but at the risk of overfitting. To address this, some methods use the well-learned knowledge of foundation models (e.g., SAM) to simplify the learning process. Recently, SAM 2 has extended SAM with support for video segmentation, and its class-agnostic matching ability is useful for FSS. A simple idea is to encode support foreground (FG) features as memory, with which query FG features are matched and fused. Unfortunately, the FG objects in different frames of SAM 2's video data always share the same identity, while those in FSS have different identities, i.e., the matching step is incompatible. Therefore, we design a Pseudo Prompt Generator to encode pseudo query memory, which matches with query features in a compatible way. However, these memories can never be as accurate as the real ones, i.e., they are likely to contain incomplete query FG as well as some unexpected query background (BG) features, leading to wrong segmentation. Hence, we further design Iterative Memory Refinement to fuse more query FG features into the memory, and devise Support-Calibrated Memory Attention to suppress the unexpected query BG features in memory. Extensive experiments have been conducted on PASCAL-5i and COCO-20i to validate the effectiveness of our design; e.g., the 1-shot mIoU can be 4.2% better than the best baseline.

  6. Doodleverse/Segmentation Zoo Res-UNet models for Aerial/planecam/2-class...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 12, 2024
    + more versions
    Cite
    Buscombe, Daniel (2024). Doodleverse/Segmentation Zoo Res-UNet models for Aerial/planecam/2-class (water, nowater) segmentation of RGB 1024x768 high-res. images [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7604074
    Explore at:
    Dataset updated
    Jul 12, 2024
    Dataset authored and provided by
    Buscombe, Daniel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Doodleverse/Segmentation Zoo Res-UNet models for Aerial/planecam/2-class (water, nowater) segmentation of RGB 1024x768 high-res. images

    These Residual-UNet models have been created using Segmentation Gym*

    Image size used by model: 1024 x 768 x 3 pixels

    Classes: water, other

    File descriptions

    For each model, there are 5 files with the same root name:

    1. '.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.

    2. '.h5' weights file: this is the file that was created by the Segmentation Gym* function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function seg_images_in_folder.py. Models may be ensembled.

    3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the config file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program, but it is important metadata, so it should be kept with the other files that collectively make up the model; as such, it is considered part of the model.

    4. '_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py

    5. '.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training. A subset of data inside the .npz file. It is created by the Segmentation Gym function train_model.py

    Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU
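    Because the '_model_history.npz' file is a standard NumPy archive, the training curves can be inspected without any Doodleverse tooling. A minimal sketch (the archive filename is a placeholder, and the array names inside vary between Gym versions, so they are discovered rather than assumed):

```python
# Sketch: plot training curves from a Segmentation Gym '_model_history.npz'.
# The filename is a placeholder; array names are discovered, not assumed.
import numpy as np
import matplotlib.pyplot as plt

history = np.load("example_model_history.npz")  # placeholder filename
print(list(history.keys()))  # e.g. train/validation losses and metrics

fig, ax = plt.subplots()
for key in history.keys():
    series = history[key]
    if series.ndim == 1:  # plot any per-epoch 1-D series
        ax.plot(series, label=key)
ax.set_xlabel("epoch")
ax.legend()
fig.savefig("training_history_replot.png", dpi=150)
```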

    References

    *Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

  7. Doodleverse/Segmentation Gym Res-UNet models for 2-class (water, other)...

    • data.niaid.nih.gov
    Updated Jul 12, 2024
    Cite
    Buscombe, Daniel (2024). Doodleverse/Segmentation Gym Res-UNet models for 2-class (water, other) segmentation of CoastCam runup timestack imagery [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7921970
    Explore at:
    Dataset updated
    Jul 12, 2024
    Dataset authored and provided by
    Buscombe, Daniel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Doodleverse/Segmentation Gym Res-UNet models for 2-class (water, other) segmentation of CoastCam runup timestack imagery

    This model release is part of the Doodleverse: https://github.com/Doodleverse

    These Residual-UNet model data are based on RGB (red, green, and blue) images of coasts and associated labels.

    Models have been created using Segmentation Gym* with an as-yet unpublished dataset of images and associated label images. See https://github.com/Doodleverse for more information about how this model was trained and how to use it for inference.

    Classes: {0=other, 1=water}

    File descriptions

    There are two models; v7 has been trained from scratch, and v8 has been fine-tuned using hyperparameter adjustment. For each model, there are 5 files with the same root name:

    1. '.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.

    2. '.h5' weights file: this is the file that was created by the Segmentation Gym* function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function seg_images_in_folder.py. Models may be ensembled.

    3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the config file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program, but it is important metadata, so it should be kept with the other files that collectively make up the model; as such, it is considered part of the model.

    4. '_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py

    5. '.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training. A subset of data inside the .npz file. It is created by the Segmentation Gym function train_model.py

    Additionally,

    1. BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU
    2. sample_images.zip contains a few example input files, for model testing

    References

    *Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

  8. elwha-segmentation-v2

    • huggingface.co
    Updated May 20, 2024
    + more versions
    Cite
    Stefan Todoran (2024). elwha-segmentation-v2 [Dataset]. https://huggingface.co/datasets/stodoran/elwha-segmentation-v2
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    May 20, 2024
    Authors
    Stefan Todoran
    Description

    stodoran/elwha-segmentation-v2 dataset hosted on Hugging Face and contributed by the HF Datasets community
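    A minimal sketch of loading this dataset with the Hugging Face datasets library (the split and column names are not documented here, so the sketch prints them rather than assuming a schema):

```python
# Sketch: load the dataset from the Hugging Face Hub and inspect it.
from datasets import load_dataset

ds = load_dataset("stodoran/elwha-segmentation-v2")
print(ds)  # splits and row counts
split_name = next(iter(ds))
print(ds[split_name].features)  # column names and types
```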

  9. Bags Segmentation V2 Dataset

    • universe.roboflow.com
    zip
    Updated Dec 21, 2023
    + more versions
    Cite
    Bruno (2023). Bags Segmentation V2 Dataset [Dataset]. https://universe.roboflow.com/bruno-iujyp/bags-segmentation-v2
    Explore at:
    zip (available download formats)
    Dataset updated
    Dec 21, 2023
    Dataset authored and provided by
    Bruno
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Bag Vf25 Polygons
    Description

    Bags Segmentation V2

    ## Overview
    
    Bags Segmentation V2 is a dataset for instance segmentation tasks - it contains Bag Vf25 annotations for 300 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  10. Doodleverse/Segmentation Zoo Res-UNet models for 2-class (water, other)...

    • data.niaid.nih.gov
    Updated Jul 15, 2024
    + more versions
    Cite
    Buscombe, Daniel (2024). Doodleverse/Segmentation Zoo Res-UNet models for 2-class (water, other) segmentation of Sentinel-2 and Landsat-7/8 3-band (RGB) images of coasts. [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7384254
    Explore at:
    Dataset updated
    Jul 15, 2024
    Dataset authored and provided by
    Buscombe, Daniel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Doodleverse/Segmentation Zoo Res-UNet models for 2-class (water, other) segmentation of Sentinel-2 and Landsat-7/8 3-band (RGB) images of coasts.

    These Residual-UNet model data are based on RGB (red, green, and blue) images of coasts and associated labels.

    Models have been created using Segmentation Gym* using the following dataset**: https://doi.org/10.5281/zenodo.7384242

    Classes: {0=other, 1=water}

    File descriptions

    For each model, there are 5 files with the same root name:

    1. '.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.

    2. '.h5' weights file: this is the file that was created by the Segmentation Gym* function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function seg_images_in_folder.py. Models may be ensembled.

    3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the config file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program, but it is important metadata, so it should be kept with the other files that collectively make up the model; as such, it is considered part of the model.

    4. '_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py

    5. '.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training. A subset of data inside the .npz file. It is created by the Segmentation Gym function train_model.py

    Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU

    References

    *Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

    ** Buscombe, D. (2022). Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts (water, other) (v1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7384242

  11. TotalSegmentator-V2 - Dataset - LDM

    • service.tib.eu
    Updated Dec 16, 2024
    Cite
    (2024). TotalSegmentator-V2 - Dataset - LDM [Dataset]. https://service.tib.eu/ldmservice/dataset/totalsegmentator-v2
    Explore at:
    Dataset updated
    Dec 16, 2024
    Description

    The TotalSegmentator-V2 dataset is a publicly available dataset for 3D medical image segmentation. It contains 1,228 CT scans with annotations for 117 major anatomical structures in whole-body CT (WBCT) images.
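    For orientation, the dataset is associated with the TotalSegmentator tool, whose Python API can produce comparable multi-structure CT segmentations. A minimal sketch, assuming the totalsegmentator package is installed and using placeholder paths:

```python
# Sketch: run the TotalSegmentator tool on a CT volume (placeholder paths;
# assumes `pip install totalsegmentator`). Writes one mask per structure.
from totalsegmentator.python_api import totalsegmentator

totalsegmentator("ct_scan.nii.gz", "segmentations/")
```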

  12. Reference field boundaries (paper: FieldSeg: A scalable agricultural field...

    • zenodo.org
    bin
    Updated Mar 10, 2025
    Cite
    Lucas Borges Ferreira; Vitor Souza Martins; Lucas Borges Ferreira; Vitor Souza Martins (2025). Reference field boundaries (paper: FieldSeg: A scalable agricultural field extraction framework based on the Segment Anything Model and 10-m Sentinel-2 imagery) [Dataset]. http://doi.org/10.5281/zenodo.14630397
    Explore at:
    bin (available download formats)
    Dataset updated
    Mar 10, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Lucas Borges Ferreira; Vitor Souza Martins; Lucas Borges Ferreira; Vitor Souza Martins
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Reference field boundaries dataset generated in the paper "FieldSeg: A scalable agricultural field extraction framework based on the Segment Anything Model and 10-m Sentinel-2 imagery".

    A hand-annotated field boundary dataset (2022) covering eight 10 x 10 km areas across the world is made available. The study areas are located in Argentina, Australia, Brazil, China, South Africa, Spain, USA-California, and USA-Iowa.

    This dataset contains two files:

    • reference_field_boundaries.gpkg: hand-annotated dataset, with polygons defining the field boundaries.
    • study_areas.gpkg: polygons defining the limits of the study areas and additional metadata about each area.

    More information on how this dataset was prepared is available in the paper "FieldSeg: A scalable agricultural field extraction framework based on the Segment Anything Model and 10-m Sentinel-2 imagery".
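    A minimal sketch of reading the two GeoPackage files with geopandas (the spatial-join summary is illustrative, and attribute column names are assumptions to be checked against the actual files):

```python
# Sketch: read the GeoPackage files and count fields per study area.
# Attribute columns are not documented here, so inspect .columns first.
import geopandas as gpd

fields = gpd.read_file("reference_field_boundaries.gpkg")
areas = gpd.read_file("study_areas.gpkg")
print(fields.columns.tolist(), areas.columns.tolist())

# Spatial join: assign each field polygon to the study area containing it.
joined = gpd.sjoin(fields, areas, how="inner", predicate="within")
print(joined.groupby("index_right").size())  # fields per study area
```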

  13. tfgbr-segmentation-v2-2

    • kaggle.com
    Updated Jan 18, 2022
    Cite
    KS (2022). tfgbr-segmentation-v2-2 [Dataset]. https://www.kaggle.com/datasets/ks2019/tfgbr-segmentation-v2-2
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jan 18, 2022
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    KS
    License

    CC0 1.0 (Public Domain): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Dataset

    This dataset was created by Kumar Shubham

    Released under CC0: Public Domain


  14. Images and 2-class labels for semantic segmentation of Sentinel-2 and...

    • data.niaid.nih.gov
    Updated Dec 2, 2022
    + more versions
    Cite
    Buscombe, Daniel (2022). Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat RGB, NIR, and SWIR satellite images of coasts (water, other) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7384262
    Explore at:
    Dataset updated
    Dec 2, 2022
    Dataset authored and provided by
    Buscombe, Daniel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat 5-band (R+G+B+NIR+SWIR) satellite images of coasts (water, other)

    Description

    3649 images and 3649 associated labels for semantic segmentation of Sentinel-2 and Landsat 5-band (R+G+B+NIR+SWIR) satellite images of coasts. The 2 classes are 1=water, 0=other. Imagery is a mixture of 10-m Sentinel-2 and 15-m pansharpened Landsat 7, 8, and 9 imagery of various sizes. Red, green, blue, near-infrared, and short-wave infrared bands only.

    These images and labels could be used within numerous Machine Learning frameworks for image segmentation, but have specifically been made for use with the Doodleverse software package, Segmentation Gym**.

    Two data sources have been combined

    Dataset 1

    • 579 image-label pairs from the following data release**** https://doi.org/10.5281/zenodo.7344571
    • Labels have been reclassified from 4 classes to 2 classes.
    • Some (422) of these images and labels were originally included in the Coast Train*** data release, and have been modified from their original by reclassifying from the original classes to the present 2 classes.
    • These images and labels have been made using the Doodleverse software package, Doodler*.

    Dataset 2

    3070 image-label pairs from the Sentinel-2 Water Edges Dataset (SWED)***** dataset, https://openmldata.ukho.gov.uk/, described by Seale et al. (2022)******

    A subset of the original SWED imagery (256 x 256 x 12) and labels (256 x 256 x 1) has been chosen, based on the criterion that more than 2.5% of the pixels represent water.

    File descriptions

    • classes.txt, a file containing the class names
    • images.zip, a zipped folder containing the 3-band RGB images of varying sizes and extents
    • labels.zip, a zipped folder containing the 1-band label images
    • nir.zip, a zipped folder containing the 1-band near-infrared (NIR) images
    • swir.zip, a zipped folder containing the 1-band shortwave infrared (SWIR) images
    • overlays.zip, a zipped folder containing a semi-transparent overlay of the color-coded label on the image (red=1=water, blue=0=other)
    • resized_images.zip, RGB images resized to 512x512x3 pixels
    • resized_labels.zip, label images resized to 512x512x1 pixels
    • resized_nir.zip, NIR images resized to 512x512x1 pixels
    • resized_swir.zip, SWIR images resized to 512x512x1 pixels
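    A minimal sketch of assembling one 5-band (R+G+B+NIR+SWIR) array from the resized folders (the shared tile filename, and the assumption that the four folders use matching names, are illustrative, not guaranteed by this release):

```python
# Sketch: build one 5-band (R+G+B+NIR+SWIR) array from the resized folders.
# Assumes the folders share tile filenames; 'tile.png' is a placeholder.
import numpy as np
from PIL import Image

name = "tile.png"  # placeholder tile filename
rgb = np.array(Image.open(f"resized_images/{name}"))    # (512, 512, 3)
nir = np.array(Image.open(f"resized_nir/{name}"))       # (512, 512)
swir = np.array(Image.open(f"resized_swir/{name}"))     # (512, 512)
label = np.array(Image.open(f"resized_labels/{name}"))  # (512, 512), 1=water

stack = np.dstack([rgb, nir, swir])  # 2-D bands are promoted to (512, 512, 1)
print(stack.shape, "water fraction:", (label == 1).mean())
```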
    

    References

    *Doodler: Buscombe, D., Goldstein, E.B., Sherwood, C.R., Bodine, C., Brown, J.A., Favela, J., Fitzpatrick, S., Kranenburg, C.J., Over, J.R., Ritchie, A.C. and Warrick, J.A., 2021. Human‐in‐the‐Loop Segmentation of Earth Surface Imagery. Earth and Space Science, p. e2021EA002085. https://doi.org/10.1029/2021EA002085. See https://github.com/Doodleverse/dash_doodler.

    **Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

    ***Coast Train data release: Wernette, P.A., Buscombe, D.D., Favela, J., Fitzpatrick, S., and Goldstein E., 2022, Coast Train--Labeled imagery for training and evaluation of data-driven models for image segmentation: U.S. Geological Survey data release, https://doi.org/10.5066/P91NP87I. See https://coasttrain.github.io/CoastTrain/ for more information

    ****Buscombe, Daniel. (2022). Images and 4-class labels for semantic segmentation of Sentinel-2 and Landsat RGB, NIR, and SWIR satellite images of coasts (water, whitewater, sediment, other) (v1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7344571

    *****Seale, C., Redfern, T., Chatfield, P. 2022. Sentinel-2 Water Edges Dataset (SWED) https://openmldata.ukho.gov.uk/

    ******Seale, C., Redfern, T., Chatfield, P., Luo, C. and Dempsey, K., 2022. Coastline detection in satellite imagery: A deep learning approach on new benchmark data. Remote Sensing of Environment, 278, p.113044.

  15. Skittle Segmentation V2 Dataset

    • universe.roboflow.com
    zip
    Updated May 10, 2025
    Cite
    AI Expo 2025 (2025). Skittle Segmentation V2 Dataset [Dataset]. https://universe.roboflow.com/ai-expo-2025/skittle-segmentation-v2
    Explore at:
    zip (available download formats)
    Dataset updated
    May 10, 2025
    Dataset authored and provided by
    AI Expo 2025
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Objects Polygons
    Description

    Skittle Segmentation V2

    ## Overview
    
    Skittle Segmentation V2 is a dataset for instance segmentation tasks - it contains Objects annotations for 815 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  16. Doodleverse/Segmentation Zoo Res-UNet models for identifying water in...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Nov 23, 2022
    + more versions
    Cite
    Daniel Buscombe; Daniel Buscombe (2022). Doodleverse/Segmentation Zoo Res-UNet models for identifying water in Sentinel-2 RGB images of coasts. [Dataset]. http://doi.org/10.5281/zenodo.6824280
    Explore at:
    zip (available download formats)
    Dataset updated
    Nov 23, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Daniel Buscombe; Daniel Buscombe
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Doodleverse/Segmentation Zoo Res-UNet models for identifying water in Sentinel-2 RGB images of coasts.

    Based on SWED*** data

    https://openmldata.ukho.gov.uk/

    These Residual-UNet model data are based on images of coasts and associated labels. Models have been fitted to the following types of data

    1. RGB (3 band): red, green, blue

    Classes are: {0: null, 1: water}.

    These files are used in conjunction with Segmentation Zoo*

    For each model, there are 3 files with the same root name:

    1. '.json' config file: this is the file that was used by Segmentation Gym** to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.

    2. '.h5' weights file: this is the file that was created by the Segmentation Gym** function `train_model.py`. It contains the trained model's parameter weights. It can be called by the Segmentation Gym** function `seg_images_in_folder.py` or the Segmentation Zoo* function `select_model_and_batch_process_folder.py` to segment a folder of images.

    3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the `config` file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program, but it is important metadata, so it should be kept with the other files that collectively make up the model; as such, it is considered part of the model.

    References

    * https://github.com/Doodleverse/segmentation_zoo

    ** https://github.com/Doodleverse/segmentation_gym

    *** https://www.sciencedirect.com/science/article/abs/pii/S0034425722001584

  17. MarsData-V2

    • ieee-dataport.org
    Updated Aug 10, 2023
    Cite
    Meibao Yao (2023). MarsData-V2 [Dataset]. https://ieee-dataport.org/documents/marsdata-v2-rock-segmentation-dataset-real-martian-scenes
    Explore at:
    Dataset updated
    Aug 10, 2023
    Authors
    Meibao Yao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We release MarsData-V2, a rock segmentation dataset of real Martian scenes.

  18. SGA Phase 2 Reach Segment Breaks

    • catalog.data.gov
    • anrgeodata.vermont.gov
    • +7 more
    Updated Dec 13, 2024
    + more versions
    Cite
    Vermont Agency of Natural Resources, DEC, Rivers (2024). SGA Phase 2 Reach Segment Breaks [Dataset]. https://catalog.data.gov/dataset/sga-phase-2-reach-segment-breaks-b0799
    Explore at:
    Dataset updated
    Dec 13, 2024
    Dataset provided by
    Vermont Agency of Natural Resources (http://www.anr.state.vt.us/)
    Description

    The stream geomorphic assessment (SGA) is a physical assessment completed by geomorphologists to determine the condition and sensitivity of a stream. The SGA Phase 2 Segment Breaks are points that indicate where the Phase 1 SGA reach was "segmented" into smaller Phase 2 segments. These segments are determined in the field and are based on changes in topography, slope, and valley setting that were not found in Phase 1, and on changes in condition found in the field. Where a significant change in any of the above is found in the field, a segment break is created.

  19. Images and 2-class labels for semantic segmentation of Sentinel-2 and...

    • zenodo.org
    • data.niaid.nih.gov
    txt, zip
    Updated Dec 2, 2022
    Cite
    Daniel Buscombe; Daniel Buscombe (2022). Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts (water, other) [Dataset]. http://doi.org/10.5281/zenodo.7384242
    Explore at:
    zip, txt (available download formats)
    Dataset updated
    Dec 2, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Daniel Buscombe; Daniel Buscombe
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts (water, other)

    Description

    4088 images and 4088 associated labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts. The 2 classes are 1=water, 0=other. Imagery is a mixture of 10-m Sentinel-2 and 15-m pansharpened Landsat 7, 8, and 9 visible-band imagery of various sizes. Red, green, and blue bands only.

    These images and labels could be used within numerous Machine Learning frameworks for image segmentation, but have specifically been made for use with the Doodleverse software package, Segmentation Gym**.

    Two data sources have been combined

    Dataset 1

    • 1018 image-label pairs from the following data release**** https://doi.org/10.5281/zenodo.7335647
    • Labels have been reclassified from 4 classes to 2 classes.
    • Some (422) of these images and labels were originally included in the Coast Train*** data release, and have been modified from their original by reclassifying from the original classes to the present 2 classes.
    • These images and labels have been made using the Doodleverse software package, Doodler*.

    Dataset 2

    • 3070 image-label pairs from the Sentinel-2 Water Edges Dataset (SWED)***** dataset, https://openmldata.ukho.gov.uk/, described by Seale et al. (2022)******
    • A subset of the original SWED imagery (256 x 256 x 12) and labels (256 x 256 x 1) has been chosen, based on the criterion that more than 2.5% of the pixels represent water

    File descriptions

    • classes.txt, a file containing the class names
    • images.zip, a zipped folder containing the 3-band RGB images of varying sizes and extents
    • labels.zip, a zipped folder containing the 1-band label images
    • overlays.zip, a zipped folder containing a semi-transparent overlay of the color-coded label on the image (red=1=water, blue=0=other)
    • resized_images.zip, RGB images resized to 512x512x3 pixels
    • resized_labels.zip, label images resized to 512x512x1 pixels

    References

    *Doodler: Buscombe, D., Goldstein, E.B., Sherwood, C.R., Bodine, C., Brown, J.A., Favela, J., Fitzpatrick, S., Kranenburg, C.J., Over, J.R., Ritchie, A.C. and Warrick, J.A., 2021. Human‐in‐the‐Loop Segmentation of Earth Surface Imagery. Earth and Space Science, p. e2021EA002085. https://doi.org/10.1029/2021EA002085. See https://github.com/Doodleverse/dash_doodler.

    **Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

    ***Coast Train data release: Wernette, P.A., Buscombe, D.D., Favela, J., Fitzpatrick, S., and Goldstein E., 2022, Coast Train--Labeled imagery for training and evaluation of data-driven models for image segmentation: U.S. Geological Survey data release, https://doi.org/10.5066/P91NP87I. See https://coasttrain.github.io/CoastTrain/ for more information

    ****Buscombe, Daniel, Goldstein, Evan, Bernier, Julie, Bosse, Stephen, Colacicco, Rosa, Corak, Nick, Fitzpatrick, Sharon, del Jesús González Guillén, Anais, Ku, Venus, Paprocki, Julie, Platt, Lindsay, Steele, Bethel, Wright, Kyle, & Yasin, Brandon. (2022). Images and 4-class labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts (water, whitewater, sediment, other) (v1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7335647

    *****Seale, C., Redfern, T., Chatfield, P. 2022. Sentinel-2 Water Edges Dataset (SWED) https://openmldata.ukho.gov.uk/

    ******Seale, C., Redfern, T., Chatfield, P., Luo, C. and Dempsey, K., 2022. Coastline detection in satellite imagery: A deep learning approach on new benchmark data. Remote Sensing of Environment, 278, p.113044.

  20. Data from: Image segmentations produced by BAMF under the AIMI Annotations...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Sep 27, 2024
    + more versions
    Cite
    McCrumb, Diana (2024). Image segmentations produced by BAMF under the AIMI Annotations initiative [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8345959
    Explore at:
    Dataset updated
    Sep 27, 2024
    Dataset provided by
    Murugesan, Gowtham Krishnan
    McCrumb, Diana
    Soni, Rahul
    Van Oss, Jeff
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Imaging Data Commons (IDC)(https://imaging.datacommons.cancer.gov/) [1] connects researchers with publicly available cancer imaging data, often linked with other types of cancer data. Many of the collections have limited annotations due to the expense and effort required to create these manually. The increased capabilities of AI analysis of radiology images provide an opportunity to augment existing IDC collections with new annotation data. To further this goal, we trained several nnUNet [2] based models for a variety of radiology segmentation tasks from public datasets and used them to generate segmentations for IDC collections.

    To validate the model's performance, roughly 10% of the AI predictions were assigned to a validation set. For this set, a board-certified radiologist graded the quality of AI predictions on a Likert scale. If they did not 'strongly agree' with the AI output, the reviewer corrected the segmentation.

    This record provides the AI segmentations, the manually corrected segmentations, and the manual scores for the inspected IDC Collection images.

    Only 10% of the AI-derived annotations provided in this dataset are verified by expert radiologists. More details on model training and annotations are provided in the associated manuscript to ensure transparency and reproducibility.

    This work was done in two stages. Versions 1.x of this record were from the first stage; Versions 2.x added additional records. In the Version 1.x collections, a medical student (non-expert) reviewed all the AI predictions and rated them on a 5-point Likert scale; for any AI predictions in the validation set that they did not 'strongly agree' with, the non-expert provided corrected segmentations. This non-expert was not utilized for the Version 2.x additional records.

    Likert Score Definition:

    Guidelines for reviewers to grade the quality of AI segmentations.

    5 Strongly Agree - Use-as-is (i.e., clinically acceptable, and could be used for treatment without change)

    4 Agree - Minor edits that are not necessary. Stylistic differences, but not clinically important. The current segmentation is acceptable

    3 Neither agree nor disagree - Minor edits that are necessary. Minor edits are those that the reviewer judges can be made in less time than starting from scratch, or are expected to have minimal effect on treatment outcome

    2 Disagree - Major edits. This category indicates that the necessary edits are required to ensure correctness, and are sufficiently significant that the user would prefer to start from scratch

    1 Strongly disagree - Unusable. This category indicates that the quality of the automatic annotations is so bad that they are unusable.

    Zip File Folder Structure

    Each zip file in the collection correlates to a specific segmentation task. The common folder structure is

    • ai-segmentations-dcm: this directory contains the AI model predictions in DICOM-SEG format for all analyzed IDC collection files

    • qa-segmentations-dcm: this directory contains manually corrected segmentation files, based on the AI predictions, in DICOM-SEG format. Only a fraction, ~10%, of the AI predictions were corrected. Corrections were performed by radiologists (rad*) and non-experts (ne*)

    • qa-results.csv: a CSV file linking the study/series UIDs with the AI segmentation file, the reviewer-corrected segmentation file, and the reviewer ratings of AI performance

    qa-results.csv Columns

    The qa-results.csv file contains metadata about the segmentations, their related IDC case image, as well as the Likert ratings and comments by the reviewers.

    • Collection: the name of the IDC collection for this case
    • PatientID: PatientID in the DICOM metadata of the scan; also called Case ID in the IDC
    • StudyInstanceUID: StudyInstanceUID in the DICOM metadata of the scan
    • SeriesInstanceUID: SeriesInstanceUID in the DICOM metadata of the scan
    • Validation: true/false, indicating whether this scan was manually reviewed
    • Reviewer: coded ID of the reviewer; radiologist IDs start with 'rad', non-expert IDs start with 'ne'
    • AimiProjectYear: 2023 or 2024. This work was split over two years; the main methodological difference is that in 2023 a non-expert also reviewed the AI output, whereas no non-expert was utilized in 2024
    • AISegmentation: the filename of the AI prediction file in DICOM-SEG format, located in the ai-segmentations-dcm folder
    • CorrectedSegmentation: the filename of the reviewer-corrected prediction file in DICOM-SEG format, located in the qa-segmentations-dcm folder. If the reviewer strongly agreed with the AI for all segments, they did not provide any correction file
    • "Was the AI predicted ROIs accurate?": appears once for each segment in the task for images from AimiProjectYear 2023. The reviewer rates segmentation quality on a Likert scale; in tasks that have multiple labels in the output, there is only one rating to cover them all
    • "Was the AI predicted {SEGMENT_NAME} label accurate?": appears once for each segment in the task for images from AimiProjectYear 2024. The reviewer rates each segment's quality on a Likert scale
    • "Do you have any comments about the AI predicted ROIs?": open-ended question for the reviewer
    • "Do you have any comments about the findings from the study scans?": open-ended question for the reviewer

    File Overview

    brain-mr.zip

    Segment Description: brain tumor regions: necrosis, edema, enhancing

    IDC Collection: UPENN-GBM

    Links: model weights, github

    breast-fdg-pet-ct.zip

    Segment Description: FDG-avid lesions in breast from FDG PET/CT scans

    IDC Collection: QIN-Breast

    Links: model weights, github

    breast-mr.zip

    Segment Description: Breast, Fibroglandular tissue, structural tumor

    IDC Collection: duke-breast-cancer-mri

    Links: model weights, github

    kidney-ct.zip

    Segment Description: Kidney, Tumor, and Cysts from contrast enhanced CT scans

    IDC Collections: TCGA-KIRC, TCGA-KIRP, TCGA-KICH, CPTAC-CCRCC

    Links: model weights, github

    liver-ct.zip

    Segment Description: Liver from CT scans

    IDC Collection: TCGA-LIHC

    Links: model weights, github

    liver2-ct.zip

    Segment Description: Liver and Lesions from CT scans

    IDC Collection: HCC-TACE-SEG, COLORECTAL-LIVER-METASTASES

    Links: model weights, github

    liver-mr.zip

    Segment Description: Liver from T1 MRI scans

    IDC Collection: TCGA-LIHC

    Links: model weights, github

    lung-ct.zip

    Segment Description: Lung and Nodules (3mm-30mm) from CT scans

    IDC Collections:

    Anti-PD-1-Lung

    LUNG-PET-CT-Dx

    NSCLC Radiogenomics

    RIDER Lung PET-CT

    TCGA-LUAD

    TCGA-LUSC

    Links: model weights 1, model weights 2, github

    lung2-ct.zip

    Improved model version

    Segment Description: Lung and Nodules (3mm-30mm) from CT scans

    IDC Collections:

    QIN-LUNG-CT, SPIE-AAPM Lung CT Challenge

    Links: model weights, github

    lung-fdg-pet-ct.zip

    Segment Description: Lungs and FDG-avid lesions in the lung from FDG PET/CT scans

    IDC Collections:

    ACRIN-NSCLC-FDG-PET

    Anti-PD-1-Lung

    LUNG-PET-CT-Dx

    NSCLC Radiogenomics

    RIDER Lung PET-CT

    TCGA-LUAD

    TCGA-LUSC

    Links: model weights, github

    prostate-mr.zip

    Segment Description: Prostate from T2 MRI scans

    IDC Collection: ProstateX, Prostate-MRI-US-Biopsy

    Links: model weights, github

    Changelog

    2.0.2 - Fix the brain-mr segmentations to be transformed correctly

    2.0.1 - added AIMI 2024 radiologist comments to qa-results.csv

    2.0.0 - added AIMI 2024 segmentations

    1.X - AIMI 2023 segmentations and reviewer scores
