100+ datasets found
  1. Data from: Segment Anything Model (SAM)

    • morocco.africageoportal.com
    • uneca.africageoportal.com
+ 2 more
    Updated Apr 17, 2023
    Cite
    Esri (2023). Segment Anything Model (SAM) [Dataset]. https://morocco.africageoportal.com/content/9b67b441f29f4ce6810979f5f0667ebe
    Dataset updated
    Apr 17, 2023
    Dataset authored and provided by
Esri (http://esri.com/)
    Description

Segmentation models perform pixel-wise classification, assigning each pixel in an image to a class; the classified pixels correspond to different objects or regions. These models have a wide variety of use cases across multiple domains. When used with satellite and aerial imagery, they can help identify features such as building footprints, roads, water bodies, crop fields, etc. Generally, every segmentation model needs to be trained from scratch on a dataset labeled with the objects of interest, which can be an arduous and time-consuming task. Meta's Segment Anything Model (SAM) is aimed at creating a foundational model that can segment (as the name suggests) anything using zero-shot learning and generalize across domains without additional training. SAM is trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks. This makes the model highly robust at identifying object boundaries and differentiating between various objects across domains, even ones it has never seen before. Use this model to extract masks of various objects in any image.

Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Fine-tuning the model: This model can be fine-tuned using the SamLoRA architecture in ArcGIS. Follow the guide and refer to the sample notebook to fine-tune this model.

Input: 8-bit, 3-band imagery.

Output: Feature class containing masks of various objects in the image.

Applicable geographies: The model is expected to work globally.

Model architecture: This model is based on the open-source Segment Anything Model (SAM) by Meta.

Training data: This model has been trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks.

Sample results: Here are a few results from the model.
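Outside ArcGIS, the same open-source model can be exercised directly with Meta's segment-anything package. A minimal sketch (the checkpoint file, model type, and image path are assumptions; adjust to your setup):

```python
# pip install segment-anything opencv-python torch
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a ViT-H SAM checkpoint downloaded from the segment-anything repo
# (checkpoint path and model type are assumptions).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects an RGB uint8 array; OpenCV loads BGR, so convert.
image = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts: 'segmentation', 'area', 'bbox', ...
print(f"{len(masks)} masks; largest covers {max(m['area'] for m in masks)} px")
```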

  2. segment-anything-2

    • kaggle.com
    Updated Jul 31, 2024
    Cite
    Somesh88 (2024). segment-anything-2 [Dataset]. https://www.kaggle.com/datasets/somesh88/segment-anything-2/discussion
Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 31, 2024
    Dataset provided by
Kaggle (http://kaggle.com/)
    Authors
    Somesh88
    License

Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Dataset

    This dataset was created by Somesh88

    Released under Apache 2.0
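To fetch the files locally, the kagglehub client can download the dataset by the slug shown in the citation URL above; a minimal sketch, assuming kagglehub is installed and Kaggle credentials are configured:

```python
# pip install kagglehub
import kagglehub

# Download the latest version of the dataset by its Kaggle slug
# (slug taken from the dataset URL above).
path = kagglehub.dataset_download("somesh88/segment-anything-2")
print("Files downloaded to:", path)
```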


3. SAM2 segmentation test and comparison with manual segmentation

    • figshare.com
    png
    Updated May 23, 2025
    Cite
    Killian Verlingue (2025). SAM2 segmentation test and comparison with manual segmentation [Dataset]. http://doi.org/10.6084/m9.figshare.29136194.v1
Available download formats: png
    Dataset updated
    May 23, 2025
    Dataset provided by
    figshare
    Authors
    Killian Verlingue
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Visual comparison of 100 human annotations (labels) with Segment Anything Model 2 (SAM2) segmentations.

4. Tomato Segment 2 Dataset

    • universe.roboflow.com
    zip
    Updated Jun 24, 2024
    Cite
    rksoft (2024). Tomato Segment 2 Dataset [Dataset]. https://universe.roboflow.com/rksoft-cjxgh/tomato-segment-2
Available download formats: zip
    Dataset updated
    Jun 24, 2024
    Dataset authored and provided by
    rksoft
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Tomato G0Ua 2Rcn Polygons
    Description

    Tomato Segment 2

    ## Overview
    
    Tomato Segment 2 is a dataset for instance segmentation tasks - it contains Tomato G0Ua 2Rcn annotations for 3,439 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
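Beyond the web download, the Roboflow Python client can pull an export programmatically. A minimal sketch (the API key, version number, and export format are placeholders; the workspace/project slugs come from the dataset URL above):

```python
# pip install roboflow
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")  # placeholder key
project = rf.workspace("rksoft-cjxgh").project("tomato-segment-2")
dataset = project.version(1).download("coco-segmentation")  # version/format assumed
print("Extracted to:", dataset.location)
```

The same pattern applies to the other Roboflow Universe datasets in this list; only the workspace and project slugs change.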
    
      ## License
    
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
5. Tools Segmentation 2 Dataset

    • universe.roboflow.com
    zip
    Updated Feb 14, 2024
    Cite
    Reshetnev University (2024). Tools Segmentation 2 Dataset [Dataset]. https://universe.roboflow.com/reshetnev-university-7yeg6/tools-segmentation-2
Available download formats: zip
    Dataset updated
    Feb 14, 2024
    Dataset authored and provided by
    Reshetnev University
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Search For Tools Polygons
    Description

    Tools Segmentation 2

    ## Overview
    
    Tools Segmentation 2 is a dataset for instance segmentation tasks - it contains Search For Tools annotations for 1,153 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
6. SA-1B (Segment Anything)

    • opendatalab.com
    zip
    Updated May 1, 2023
    + more versions
    Cite
    Meta AI Research (2023). SA-1B(segment anything) [Dataset]. https://opendatalab.com/OpenDataLab/SA-1B
Available download formats: zip
    Dataset updated
    May 1, 2023
    Dataset provided by
    Meta AI Research
    License

https://ai.facebook.com/datasets/segment-anything-downloads/

    Description

    Segment Anything 1 Billion (SA-1B) is a dataset designed for training general-purpose object segmentation models from open world images.

SA-1B consists of 11M diverse, high-resolution, privacy-protecting images and 1.1B high-quality segmentation masks that were collected with our data engine. It is intended to be used for computer vision research for the purposes permitted under our Data License.

    The images are licensed from a large photo company. The 1.1B masks were produced using our data engine, all of which were generated fully automatically by the Segment Anything Model (SAM).
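SA-1B ships one JSON annotation file per image, with each mask stored in COCO run-length encoding (RLE). A minimal decoding sketch with pycocotools (the file name is an assumption; field names follow the published SA-1B format):

```python
# pip install pycocotools numpy
import json
from pycocotools import mask as mask_utils

with open("sa_223750.json") as f:  # per-image annotation file (name assumed)
    record = json.load(f)

for ann in record["annotations"][:3]:
    binary_mask = mask_utils.decode(ann["segmentation"])  # HxW uint8 array
    print(ann["id"], "area:", int(binary_mask.sum()))
```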

7. Segmentation 2 Dataset

    • universe.roboflow.com
    zip
    Updated Dec 8, 2024
    Cite
    Janissy (2024). Segmentation 2 Dataset [Dataset]. https://universe.roboflow.com/janissy/segmentation-2-i7p26
Available download formats: zip
    Dataset updated
    Dec 8, 2024
    Dataset authored and provided by
    Janissy
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Transportation 86VA Polygons
    Description

    Segmentation 2

    ## Overview
    
    Segmentation 2 is a dataset for instance segmentation tasks - it contains Transportation 86VA annotations for 1,557 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
8. Data from: Unlocking the Power of SAM 2 for Few-Shot Segmentation

    • researchdata.ntu.edu.sg
    Updated May 22, 2025
    Cite
    DR-NTU (Data) (2025). Unlocking the Power of SAM 2 for Few-Shot Segmentation [Dataset]. http://doi.org/10.21979/N9/XIDXVT
    Dataset updated
    May 22, 2025
    Dataset provided by
    DR-NTU (Data)
    License

Custom license: https://researchdata.ntu.edu.sg/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.21979/N9/XIDXVT

    Dataset funded by
    RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative
    Description

Few-Shot Segmentation (FSS) aims to learn class-agnostic segmentation on a few classes in order to segment arbitrary classes, but at the risk of overfitting. To address this, some methods use the well-learned knowledge of foundation models (e.g., SAM) to simplify the learning process. Recently, SAM 2 has extended SAM by supporting video segmentation, whose class-agnostic matching ability is useful to FSS. A simple idea is to encode support foreground (FG) features as memory, with which query FG features are matched and fused. Unfortunately, the FG objects in different frames of SAM 2's video data are always the same identity, while those in FSS are different identities, i.e., the matching step is incompatible. Therefore, we design a Pseudo Prompt Generator to encode pseudo query memory, matching with query features in a compatible way. However, the memories can never be as accurate as the real ones, i.e., they are likely to contain incomplete query FG and some unexpected query background (BG) features, leading to wrong segmentation. Hence, we further design Iterative Memory Refinement to fuse more query FG features into the memory, and devise a Support-Calibrated Memory Attention to suppress the unexpected query BG features in memory. Extensive experiments have been conducted on PASCAL-5i and COCO-20i to validate the effectiveness of our design, e.g., the 1-shot mIoU can be 4.2% better than the best baseline.

  9. Results of AI segmentations and cell files research Part.2

    • figshare.com
    png
    Updated May 21, 2025
    + more versions
    Cite
    Killian Verlingue (2025). Results of AI segmentations and cell files research Part.2 [Dataset]. http://doi.org/10.6084/m9.figshare.29118605.v1
Available download formats: png
    Dataset updated
    May 21, 2025
    Dataset provided by
Figshare (http://figshare.com/)
    Authors
    Killian Verlingue
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

These figures are the graphical results of my Master 2 internship on automatic segmentation using SAM2 (Segment Anything Model 2), an artificial-intelligence model. The red line represents the best cell file, from which anatomical measurements were made.

10. Doodleverse/Segmentation Gym Res-UNet models for 2-class (water, other) segmentation of CoastCam runup timestack imagery

    • data.niaid.nih.gov
    Updated Jul 12, 2024
    Cite
    Buscombe, Daniel (2024). Doodleverse/Segmentation Gym Res-UNet models for 2-class (water, other) segmentation of CoastCam runup timestack imagery [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7921970
    Dataset updated
    Jul 12, 2024
    Dataset authored and provided by
    Buscombe, Daniel
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Doodleverse/Segmentation Gym Res-UNet models for 2-class (water, other) segmentation of CoastCam runup timestack imagery

    This model release is part of the Doodleverse: https://github.com/Doodleverse

    These Residual-UNet model data are based on RGB (red, green, and blue) images of coasts and associated labels.

Models have been created using Segmentation Gym* with an as-yet unpublished dataset of images and associated label images. See https://github.com/Doodleverse for more information about how this model was trained, and how to use it for inference.

    Classes: {0=other, 1=water}

    File descriptions

    There are two models; v7 has been trained from scratch, and v8 has been fine-tuned using hyperparameter adjustment. For each model, there are 5 files with the same root name:

    1. '.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.

2. '.h5' weights file: this is the file that was created by the Segmentation Gym* function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function seg_images_in_folder.py. Models may be ensembled.

3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the config file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program, but it is important metadata, so it should be kept with the other files that collectively make up the model; as such, it is considered part of the model.

4. '_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py (see the loading sketch below).

5. '.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training (a subset of the data inside the .npz file). It is created by the Segmentation Gym function train_model.py.

    Additionally,

    1. BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU
    2. sample_images.zip contains a few example input files, for model testing
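As an example of inspecting the training history, the '_model_history.npz' archive can be read back with numpy; a minimal sketch (the file name and array names are assumptions; substitute the actual model root name):

```python
# pip install numpy matplotlib
import numpy as np
import matplotlib.pyplot as plt

history = np.load("resunet_v7_model_history.npz")  # root name assumed
print("Arrays:", history.files)  # e.g. training/validation losses and metrics

for key in history.files:
    plt.plot(history[key], label=key)  # one curve per stored array
plt.xlabel("epoch")
plt.legend()
plt.show()
```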

    References

    *Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

11. Field Segmentation 2 Dataset

    • universe.roboflow.com
    zip
    Updated Apr 20, 2025
    Cite
    Prototype Models (2025). Field Segmentation 2 Dataset [Dataset]. https://universe.roboflow.com/prototype-models/field-segmentation-2-yz2xv
Available download formats: zip
    Dataset updated
    Apr 20, 2025
    Dataset authored and provided by
    Prototype Models
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Field UOaR Polygons
    Description

    Field Segmentation 2

    ## Overview
    
    Field Segmentation 2 is a dataset for instance segmentation tasks - it contains Field UOaR annotations for 754 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  12. PointPrompt: A Visual Prompting Dataset based on the Segment Anything Model

    • zenodo.org
    Updated Aug 4, 2024
    Cite
Jorge Quesada; Zoe Fowler; Mohammad Alotaibi; Mohit Prabhushankar; Ghassan AlRegib (2024). PointPrompt: A Visual Prompting Dataset based on the Segment Anything Model [Dataset]. http://doi.org/10.5281/zenodo.11580815
    Dataset updated
    Aug 4, 2024
    Dataset provided by
Zenodo (http://zenodo.org/)
    Authors
Jorge Quesada; Zoe Fowler; Mohammad Alotaibi; Mohit Prabhushankar; Ghassan AlRegib
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Each folder in 'Prompting data.zip' corresponds to a single category (Bird, Cat, Bus, etc.), and each of these contains folders corresponding to a single participant (st1, st2, etc.). Each participant folder should contain 5 subfolders:

• 'masks' contains the binary masks produced for each image, in the format a_b_mask.png, where 'a' is the image number (0 to 399) and 'b' indexes timestamps in the prompting process
• 'points' contains the inclusion and exclusion points, formatted as a_green.npy and a_red.npy respectively, where 'a' is the image number. Each of these files is a list of lists of the prompted points at each timestep: the outer list is of size (t,), where 't' is the number of timesteps for that image, and each inner list is of size (n,2), where 'n' is the number of points at a given timestep
• 'scores' contains the scores (mIoU) at each timestep for every image
• 'sorts' contains sorted timestamp indexes, going from max to min based on the score
• 'eachround' indicates which timesteps belong to each of the two rounds (if they exist). Each file contains a list of length t (the number of timestamps), where a value of 0 corresponds to timestamps from the first round and a value of 1 to timestamps from the second round

    Quick usage:

- To get the best (highest-score) mask for a given image: masks[sorts[0]]
- To get the best set of prompts for that image: green[sorts[0]] and red[sorts[0]]
- To get which round produced the highest score in that image: eachround[sorts[0]]
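A minimal loading sketch tying these together (the category/participant folder names and the exact file names under 'sorts' are assumptions based on the layout described above):

```python
# pip install numpy pillow
import numpy as np
from PIL import Image

base = "Prompting data/Bird/st1"  # category/participant (assumed)
img_id = 0

# Timestep indices sorted best-first; file name is an assumption.
sorts = np.load(f"{base}/sorts/{img_id}.npy")
green = np.load(f"{base}/points/{img_id}_green.npy", allow_pickle=True)
red = np.load(f"{base}/points/{img_id}_red.npy", allow_pickle=True)

best_t = int(sorts[0])
best_mask = np.array(Image.open(f"{base}/masks/{img_id}_{best_t}_mask.png"))
print("best timestep:", best_t,
      "prompts:", len(green[best_t]), "green /", len(red[best_t]), "red")
```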

The codebase associated with this work can be found on the associated GitHub repository.

Please refer to our lab-wide GitHub for more information regarding the code associated with our other papers.

13. Vehicle Segmentation 2 Dataset

    • universe.roboflow.com
    zip
    Updated Jan 8, 2025
    Cite
    zibot (2025). Vehicle Segmentation 2 Dataset [Dataset]. https://universe.roboflow.com/zibot/vehicle-segmentation-2
Available download formats: zip
    Dataset updated
    Jan 8, 2025
    Dataset authored and provided by
    zibot
    License

MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Variables measured
    Vehicle Annotation SwXm Polygons
    Description

    Vehicle Segmentation 2

    ## Overview
    
    Vehicle Segmentation 2 is a dataset for instance segmentation tasks - it contains Vehicle Annotation SwXm annotations for 787 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
This dataset is available under the [MIT license](https://opensource.org/licenses/MIT).
    
14. Graffiti Segmentation 2 Dataset

    • universe.roboflow.com
    zip
    Updated Oct 1, 2023
    + more versions
    Cite
    Sumanthrao369 (2023). Graffiti Segmentation 2 Dataset [Dataset]. https://universe.roboflow.com/sumanthrao369/graffiti-segmentation-2
Available download formats: zip
    Dataset updated
    Oct 1, 2023
    Dataset authored and provided by
    Sumanthrao369
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Graffiti Polygons
    Description

    Graffiti Segmentation 2

    ## Overview
    
    Graffiti Segmentation 2 is a dataset for instance segmentation tasks - it contains Graffiti annotations for 1,293 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
This dataset is available under the [CC0 1.0 Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
    
15. Replication Data for: "A Topic-based Segmentation Model for Identifying Segment-Level Drivers of Star Ratings from Unstructured Text Reviews"

    • dataverse.harvard.edu
    • search.dataone.org
    Updated May 7, 2024
    Cite
    Sunghoon Kim; Sanghak Lee; Robert McCulloch (2024). Replication Data for: "A Topic-based Segmentation Model for Identifying Segment-Level Drivers of Star Ratings from Unstructured Text Reviews" [Dataset]. http://doi.org/10.7910/DVN/EE3DE2
Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    May 7, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Sunghoon Kim; Sanghak Lee; Robert McCulloch
    License

Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

We provide instructions, code, and datasets for replicating the article by Kim, Lee, and McCulloch (2024), "A Topic-based Segmentation Model for Identifying Segment-Level Drivers of Star Ratings from Unstructured Text Reviews." This repository provides a user-friendly R package for researchers or practitioners to apply the topic-based segmentation model with unstructured texts (latent class regression with group variable selection) to their own datasets. First, we provide R code to replicate the illustrative simulation study (see file 1). Second, we provide the user-friendly R package with a very simple example (see file 2, Package_MixtureRegression_GroupVariableSelection.R and Dendrogram.R). Third, we provide a set of codes and instructions to replicate the empirical studies of customer-level segmentation and restaurant-level segmentation with Yelp reviews data (see files 3-a, 3-b, 4-a, 4-b). Note: due to Yelp's dataset terms of use and data-size restrictions, we instead provide the link to download the same Yelp datasets (https://www.kaggle.com/datasets/yelp-dataset/yelp-dataset/versions/6). Fourth, we provide a set of codes and datasets to replicate the empirical study with professor ratings reviews data (see file 5). Please see more details in the description text and comments of each file.

[A guide on how to use the code to reproduce each study in the paper]

1. Full codes for replicating Illustrative simulation study.txt -- [see Table 2 and Figure 2 in main text]: R source code to replicate the illustrative simulation study. Please run it from beginning to end in R. In addition to estimated coefficients (posterior means of coefficients), indicators of variable selection, and segment memberships, you will get dendrograms of selected groups of variables (Figure 2). Computing time is approximately 20 to 30 minutes.

3-a. Preprocessing raw Yelp Reviews for Customer-level Segmentation.txt: Code for preprocessing the downloaded unstructured Yelp review data and preparing the DV and IV matrices for the customer-level segmentation study.

3-b. Instruction for replicating Customer-level Segmentation analysis.txt -- [see Table 10 in main text; Tables F-1, F-2, and F-3 and Figure F-1 in Web Appendix]: Code for replicating the customer-level segmentation study with Yelp data. You will get estimated coefficients (posterior means of coefficients), indicators of variable selection, and segment memberships. Computing time is approximately 3 to 4 hours.

4-a. Preprocessing raw Yelp reviews_Restaruant Segmentation (1).txt: R code for preprocessing the downloaded unstructured Yelp data and preparing the DV and IV matrices for the restaurant-level segmentation study.

4-b. Instructions for replicating restaurant-level segmentation analysis.txt -- [see Tables 5, 6 and 7 in main text; Tables E-4 and E-5 and Figure H-1 in Web Appendix]: Code for replicating the restaurant-level segmentation study with Yelp data. You will get estimated coefficients (posterior means of coefficients), indicators of variable selection, and segment memberships. Computing time is approximately 10 to 12 hours.

[Guidelines for running benchmark models in Table 6]

• Unsupervised topic model: 'topicmodels' package in R. After determining the number of topics (e.g., with the 'ldatuning' R package), run the 'LDA' function in the 'topicmodels' package. Then compute topic probabilities per restaurant (with the 'posterior' function in the package), which can be used as predictors; then conduct prediction with regression.
• Hierarchical topic model (HDP): 'gensimr' R package, using the 'model_hdp' function for identifying topics (see https://radimrehurek.com/gensim/models/hdpmodel.html or https://gensimr.news-r.org/).
• Supervised topic model: 'lda' R package, using 'slda.em' for training and 'slda.predict' for prediction.
• Aggregate regression: the default 'lm' function in R.
• Latent class regression without variable selection: 'flexmix' function in the 'flexmix' R package. Run flexmix with a chosen number of segments (e.g., 3 segments in this study). Then, with estimated coefficients and memberships, predict the dependent variable per segment.
• Latent class regression with variable selection: 'Unconstraind_Bayes_Mixture' function in Kim, Fong and DeSarbo (2012)'s package. Run the model with a chosen number of segments (e.g., 3 segments in this study). Then, with estimated coefficients and memberships, predict the dependent variable per segment. The same R package ('KimFongDeSarbo2012.zip') can be downloaded at: https://sites.google.com/scarletmail.rutgers.edu/r-code-packages/home

5. Instructions for replicating Professor ratings review study.txt -- [see Tables G-1, G-2, G-4 and G-5, and Figures G-1 and H-2 in Web Appendix]: Code to replicate the professor ratings reviews study. Computing time is approximately 10 hours.

[A list of the versions of R, packages, and computer...

16. Segmentation Living Image data

    • figshare.com
    zip
    Updated Oct 9, 2023
    Cite
    qianxiang yao (2023). Segmentation Living Image data [Dataset]. http://doi.org/10.6084/m9.figshare.24270154.v3
Available download formats: zip
    Dataset updated
    Oct 9, 2023
    Dataset provided by
    figshare
    Authors
    qianxiang yao
    License

Apache License 2.0: https://www.apache.org/licenses/LICENSE-2.0.html

    Description

This study introduces the concept of "structural beauty" as an objective computational approach for evaluating the aesthetic appeal of images. Using the Segment Anything Model (SAM), we propose a method that leverages recursive segmentation to extract finer-grained substructures. Additionally, by reconstructing the hierarchical structure, we obtain a more accurate representation of substructure quantity and hierarchy. This approach reproduces and extends our previous research, allowing for the simultaneous assessment of Livingness in full-color images without the need for grayscale conversion or separate computations for foreground and background Livingness. Furthermore, the application of our method to the Scenic or Not dataset, a repository of subjective scenic ratings, demonstrates a high degree of consistency with subjective ratings in the 0-6 score range. This underscores that structural beauty is not solely a subjective perception, but a quantifiable attribute accessible through objective computation. Through our case studies, we have arrived at three significant conclusions: 1) our method demonstrates the capability to accurately segment meaningful objects, including trees, buildings, and windows, as well as abstract substructures within paintings; 2) we observed that the clarity of an image impacts our computational results, with clearer images tending to yield higher Livingness scores, although for equally blurry images Livingness does not exhibit a significant reduction, aligning with human visual perception; 3) our approach fundamentally differs from methods employing Convolutional Neural Networks (CNNs) to predict image scores, in that it not only provides computational results but also offers transparency and interpretability, positioning it as a novel avenue in the realm of Explainable AI (XAI).

17. Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat RGB, NIR, and SWIR satellite images of coasts (water, other)

    • zenodo.org
    • data.niaid.nih.gov
    txt, zip
    Updated Dec 2, 2022
    + more versions
    Cite
    Daniel Buscombe; Daniel Buscombe (2022). Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat RGB, NIR, and SWIR satellite images of coasts (water, other) [Dataset]. http://doi.org/10.5281/zenodo.7384263
Available download formats: zip, txt
    Dataset updated
    Dec 2, 2022
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Daniel Buscombe; Daniel Buscombe
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat 5-band (R+G+B+NIR+SWIR) satellite images of coasts (water, other)

    Description

3649 images and 3649 associated labels for semantic segmentation of Sentinel-2 and Landsat 5-band (R+G+B+NIR+SWIR) satellite images of coasts. The 2 classes are 1=water, 0=other. The imagery is a mixture of 10-m Sentinel-2 and 15-m pansharpened Landsat 7, 8, and 9 visible-band imagery of various sizes; red, green, blue, near-infrared, and short-wave infrared bands only.

    These images and labels could be used within numerous Machine Learning frameworks for image segmentation, but have specifically been made for use with the Doodleverse software package, Segmentation Gym**.

    Two data sources have been combined

    Dataset 1

    * 579 image-label pairs from the following data release**** https://doi.org/10.5281/zenodo.7344571
    * Labels have been reclassified from 4 classes to 2 classes.
    * Some (422) of these images and labels were originally included in the Coast Train*** data release, and have been modified from their original by reclassifying from the original classes to the present 2 classes.
    * These images and labels have been made using the Doodleverse software package, Doodler*.

    Dataset 2

    • 3070 image-label pairs from the Sentinel-2 Water Edges Dataset (SWED)***** dataset, https://openmldata.ukho.gov.uk/, described by Seale et al. (2022)******
• A subset of the original SWED imagery (256 x 256 x 12) and labels (256 x 256 x 1) was chosen, based on the criterion that more than 2.5% of the pixels represent water

    File descriptions

    • classes.txt, a file containing the class names
    • images.zip, a zipped folder containing the 3-band RGB images of varying sizes and extents
    • labels.zip, a zipped folder containing the 1-band label images
    • nir.zip, a zipped folder containing the 1-band near-infrared (NIR) images
• swir.zip, a zipped folder containing the 1-band shortwave infrared (SWIR) images
    • overlays.zip, a zipped folder containing a semi-transparent overlay of the color-coded label on the image (red=1=water, blue=0=other)
    • resized_images.zip, RGB images resized to 512x512x3 pixels
    • resized_labels.zip, label images resized to 512x512x1 pixels
    • resized_nir.zip, NIR images resized to 512x512x1 pixels
    • resized_swir.zip, SWIR images resized to 512x512x1 pixels
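For example, pairing a resized image with its label for use in a segmentation framework (the concrete file names inside the zips are assumptions; image and label pairs share a root name):

```python
# pip install numpy pillow
import numpy as np
from PIL import Image

name = "example_chip"  # shared root name (assumed)
image = np.array(Image.open(f"resized_images/{name}.jpg"))  # 512x512x3 RGB
label = np.array(Image.open(f"resized_labels/{name}.png"))  # 512x512, 1=water, 0=other

water_fraction = (label == 1).mean()
print(f"{name}: image {image.shape}, water covers {water_fraction:.1%} of pixels")
```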

    References

    *Doodler: Buscombe, D., Goldstein, E.B., Sherwood, C.R., Bodine, C., Brown, J.A., Favela, J., Fitzpatrick, S., Kranenburg, C.J., Over, J.R., Ritchie, A.C. and Warrick, J.A., 2021. Human‐in‐the‐Loop Segmentation of Earth Surface Imagery. Earth and Space Science, p.e2021EA002085https://doi.org/10.1029/2021EA002085. See https://github.com/Doodleverse/dash_doodler.

    **Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

    ***Coast Train data release: Wernette, P.A., Buscombe, D.D., Favela, J., Fitzpatrick, S., and Goldstein E., 2022, Coast Train--Labeled imagery for training and evaluation of data-driven models for image segmentation: U.S. Geological Survey data release, https://doi.org/10.5066/P91NP87I. See https://coasttrain.github.io/CoastTrain/ for more information

    ****Buscombe, Daniel. (2022). Images and 4-class labels for semantic segmentation of Sentinel-2 and Landsat RGB, NIR, and SWIR satellite images of coasts (water, whitewater, sediment, other) (v1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7344571

    *****Seale, C., Redfern, T., Chatfield, P. 2022. Sentinel-2 Water Edges Dataset (SWED) https://openmldata.ukho.gov.uk/

    ******Seale, C., Redfern, T., Chatfield, P., Luo, C. and Dempsey, K., 2022. Coastline detection in satellite imagery: A deep learning approach on new benchmark data. Remote Sensing of Environment, 278, p.113044.

18. Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts (water, other)

    • zenodo.org
    • data.niaid.nih.gov
    txt, zip
    Updated Dec 2, 2022
    Cite
    Daniel Buscombe; Daniel Buscombe (2022). Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts (water, other) [Dataset]. http://doi.org/10.5281/zenodo.7384242
Available download formats: zip, txt
    Dataset updated
    Dec 2, 2022
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Daniel Buscombe; Daniel Buscombe
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Images and 2-class labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts (water, other)

    Description

4088 images and 4088 associated labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts. The 2 classes are 1=water, 0=other. The imagery is a mixture of 10-m Sentinel-2 and 15-m pansharpened Landsat 7, 8, and 9 visible-band imagery of various sizes; red, green, and blue bands only.

    These images and labels could be used within numerous Machine Learning frameworks for image segmentation, but have specifically been made for use with the Doodleverse software package, Segmentation Gym**.

    Two data sources have been combined

    Dataset 1

    • 1018 image-label pairs from the following data release**** https://doi.org/10.5281/zenodo.7335647
    • Labels have been reclassified from 4 classes to 2 classes.
    • Some (422) of these images and labels were originally included in the Coast Train*** data release, and have been modified from their original by reclassifying from the original classes to the present 2 classes.
    • These images and labels have been made using the Doodleverse software package, Doodler*.

    Dataset 2

    • 3070 image-label pairs from the Sentinel-2 Water Edges Dataset (SWED)***** dataset, https://openmldata.ukho.gov.uk/, described by Seale et al. (2022)******
• A subset of the original SWED imagery (256 x 256 x 12) and labels (256 x 256 x 1) was chosen, based on the criterion that more than 2.5% of the pixels represent water

    File descriptions

    • classes.txt, a file containing the class names
    • images.zip, a zipped folder containing the 3-band RGB images of varying sizes and extents
    • labels.zip, a zipped folder containing the 1-band label images
• overlays.zip, a zipped folder containing a semi-transparent overlay of the color-coded label on the image (red=1=water, blue=0=other)
    • resized_images.zip, RGB images resized to 512x512x3 pixels
    • resized_labels.zip, label images resized to 512x512x1 pixels

    References

    *Doodler: Buscombe, D., Goldstein, E.B., Sherwood, C.R., Bodine, C., Brown, J.A., Favela, J., Fitzpatrick, S., Kranenburg, C.J., Over, J.R., Ritchie, A.C. and Warrick, J.A., 2021. Human‐in‐the‐Loop Segmentation of Earth Surface Imagery. Earth and Space Science, p.e2021EA002085https://doi.org/10.1029/2021EA002085. See https://github.com/Doodleverse/dash_doodler.

    **Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym

    ***Coast Train data release: Wernette, P.A., Buscombe, D.D., Favela, J., Fitzpatrick, S., and Goldstein E., 2022, Coast Train--Labeled imagery for training and evaluation of data-driven models for image segmentation: U.S. Geological Survey data release, https://doi.org/10.5066/P91NP87I. See https://coasttrain.github.io/CoastTrain/ for more information

    ****Buscombe, Daniel, Goldstein, Evan, Bernier, Julie, Bosse, Stephen, Colacicco, Rosa, Corak, Nick, Fitzpatrick, Sharon, del Jesús González Guillén, Anais, Ku, Venus, Paprocki, Julie, Platt, Lindsay, Steele, Bethel, Wright, Kyle, & Yasin, Brandon. (2022). Images and 4-class labels for semantic segmentation of Sentinel-2 and Landsat RGB satellite images of coasts (water, whitewater, sediment, other) (v1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7335647

    *****Seale, C., Redfern, T., Chatfield, P. 2022. Sentinel-2 Water Edges Dataset (SWED) https://openmldata.ukho.gov.uk/

    ******Seale, C., Redfern, T., Chatfield, P., Luo, C. and Dempsey, K., 2022. Coastline detection in satellite imagery: A deep learning approach on new benchmark data. Remote Sensing of Environment, 278, p.113044.

19. Asap Segmentation 2 Dataset

    • universe.roboflow.com
    zip
    Updated Feb 24, 2025
    Cite
    ASAP Detection (2025). Asap Segmentation 2 Dataset [Dataset]. https://universe.roboflow.com/asap-detection/asap-segmentation-2
Available download formats: zip
    Dataset updated
    Feb 24, 2025
    Dataset authored and provided by
    ASAP Detection
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    ASAP Detection OCpk YuXG Polygons
    Description

    ASAP Segmentation 2

    ## Overview
    
    ASAP Segmentation 2 is a dataset for instance segmentation tasks - it contains ASAP Detection OCpk YuXG annotations for 1,706 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
20. Indoor Multiple Person & Object Segmentation Dataset

    • shaip.com
    json
    Updated Nov 26, 2024
    Cite
    Shaip (2024). Indoor Multiple Person & Object Segmentation Dataset [Dataset]. https://www.shaip.com/offerings/human-animal-segmentation-datasets/
Available download formats: json
    Dataset updated
    Nov 26, 2024
    Dataset authored and provided by
    Shaip
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Indoor Multiple Person & Object Segmentation Dataset is designed for the internet and media & entertainment sectors, featuring a collection of drama images set in indoor living scenarios. This dataset, with an average of 5 to 6 persons per picture, spans Asian, American, and English contexts. It supports detailed semantic segmentation tasks for human body areas, clothing and accessories, and indoor objects.
