100+ datasets found
  1. Data from: Segment Anything Model (SAM)

    • morocco-geoportal-powered-by-esri-africa.hub.arcgis.com
    • uneca.africageoportal.com
    • +1more
    Updated Apr 17, 2023
    Cite
    Esri (2023). Segment Anything Model (SAM) [Dataset]. https://morocco-geoportal-powered-by-esri-africa.hub.arcgis.com/datasets/esri::segment-anything-model-sam
    Explore at:
    Dataset updated
    Apr 17, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    Segmentation models perform pixel-wise classification, assigning each pixel in an image to a class; the classified pixels correspond to different objects or regions. These models have a wide variety of use cases across multiple domains. When used with satellite and aerial imagery, they can help identify features such as building footprints, roads, water bodies, and crop fields. Generally, every segmentation model needs to be trained from scratch on a dataset labeled with the objects of interest, which can be an arduous and time-consuming task. Meta's Segment Anything Model (SAM) aims to be a foundation model that can segment (as the name suggests) anything using zero-shot learning, generalizing across domains without additional training. SAM is trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks. This makes the model highly robust at identifying object boundaries and differentiating between objects across domains, even ones it has never seen before. Use this model to extract masks of various objects in any image.
    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
    Fine-tuning the model: This model can be fine-tuned using the SamLoRA architecture in ArcGIS. Follow the guide and refer to this sample notebook to fine-tune this model.
    Input: 8-bit, 3-band imagery.
    Output: Feature class containing masks of various objects in the image.
    Applicable geographies: The model is expected to work globally.
    Model architecture: This model is based on the open-source Segment Anything Model (SAM) by Meta.
    Training data: This model has been trained on the Segment Anything 1-Billion mask dataset (SA-1B), comprising a diverse set of 11 million images and over 1 billion masks.
    Sample results: Here are a few results from the model.
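The output described above is a feature class of per-object masks. As a toy illustration of the kind of post-processing involved (not Esri's implementation; `mask_to_bboxes` is a hypothetical helper), connected-component labeling splits a binary mask into objects and records each one's bounding box:

```python
def mask_to_bboxes(mask):
    """Label 4-connected regions of 1s in a binary mask and return their
    bounding boxes as (min_row, min_col, max_row, max_col) tuples."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # flood-fill one object, tracking its extent
                stack = [(r, c)]
                seen[r][c] = True
                lo_r, lo_c, hi_r, hi_c = r, c, r, c
                while stack:
                    y, x = stack.pop()
                    lo_r, hi_r = min(lo_r, y), max(hi_r, y)
                    lo_c, hi_c = min(lo_c, x), max(hi_c, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((lo_r, lo_c, hi_r, hi_c))
    return boxes

mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
]
print(mask_to_bboxes(mask))  # [(0, 0, 1, 1), (2, 3, 3, 3)]
```

In ArcGIS the raster-to-feature conversion is handled by the tooling itself; this sketch only shows the underlying idea of separating a mask into distinct objects.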

  2. sam-dataset

    • huggingface.co
    Updated Jun 27, 2022
    Cite
    Sketch AI (2022). sam-dataset [Dataset]. https://huggingface.co/datasets/sketchai/sam-dataset
    Explore at:
    Dataset updated
    Jun 27, 2022
    Dataset authored and provided by
    Sketch AI
    License

    https://choosealicense.com/licenses/lgpl-3.0/

    Description

    Dataset Card for Sketch Data Model Dataset

      Dataset Summary
    

    This dataset contains over 6M 2D CAD sketches extracted from Onshape. Sketches are stored as Python objects in the custom SAM format. SAM leverages the Sketchgraphs dataset for industrial needs and allows for easier transfer learning on other CAD software.

      Supported Tasks and Leaderboards
    

    Tasks: Automatic Sketch Generation, Auto Constraint

      Dataset Structure
    
    
    
    
    
      Data Instances
    

    The… See the full description on the dataset page: https://huggingface.co/datasets/sketchai/sam-dataset.

  3. Sam Dataset

    • universe.roboflow.com
    zip
    Updated Oct 14, 2023
    + more versions
    Cite
    SAm (2023). Sam Dataset [Dataset]. https://universe.roboflow.com/sam-glt4b/sam-oow6j/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 14, 2023
    Dataset authored and provided by
    SAm
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    All Bounding Boxes
    Description

    Sam

    ## Overview
    
    Sam is a dataset for object detection tasks - it contains All annotations for 940 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  4. Tree Segmentation

    • uneca.africageoportal.com
    • hub.arcgis.com
    Updated May 18, 2023
    Cite
    Esri (2023). Tree Segmentation [Dataset]. https://uneca.africageoportal.com/content/6d910b29ff38406986da0abf1ce50836
    Explore at:
    Dataset updated
    May 18, 2023
    Dataset authored and provided by
    Esri
    Description

    This deep learning model detects and segments trees in high-resolution drone or aerial imagery. Tree detection can be used for applications such as vegetation management, forestry, and urban planning. High-resolution aerial and drone imagery is well suited to tree detection due to its high spatio-temporal coverage. The model is based on DeepForest, has been trained on data from the National Ecological Observatory Network (NEON), and also uses the Segment Anything Model (SAM) by Meta.
    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
    Fine-tuning the model: This model cannot be fine-tuned using ArcGIS tools.
    Input: 8-bit, 3-band high-resolution (10-25 cm) imagery.
    Output: Feature class containing separate masks for each tree.
    Applicable geographies: The model is expected to work well in the United States.
    Model architecture: This model is based on the DeepForest Python package, which uses the RetinaNet model architecture implemented in torchvision, and the open-source Segment Anything Model (SAM) by Meta.
    Accuracy metrics: This model has a precision score of 0.66 and a recall of 0.79.
    Training data: This model has been trained on the NEON Tree Benchmark dataset, provided by the Weecology Lab at the University of Florida. It also uses the Segment Anything Model (SAM) by Meta, trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks.
    Sample results: Here are a few results from the model.
    Citations: Weinstein, B.G.; Marconi, S.; Bohlman, S.; Zare, A.; White, E. Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks. Remote Sens. 2019, 11, 1309. Weinstein, B.; Marconi, S.; Bohlman, S.; Zare, A.; White, E.P. Geographic Generalization in Airborne RGB Deep Learning Tree Detection. bioRxiv 790071; doi: https://doi.org/10.1101/790071.
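The reported precision (0.66) and recall (0.79) imply an F1 score of about 0.72. The dataset page does not state this; it follows from the standard harmonic-mean formula:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# reported metrics for the tree-segmentation model
print(round(f1_score(0.66, 0.79), 2))  # 0.72
```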

  5. MaskSAM: Towards Auto-prompt SAM with Mask Classification for Medical Image...

    • service.tib.eu
    Updated Dec 16, 2024
    Cite
    (2024). MaskSAM: Towards Auto-prompt SAM with Mask Classification for Medical Image Segmentation - Dataset - LDM [Dataset]. https://service.tib.eu/ldmservice/dataset/masksam--towards-auto-prompt-sam-with-mask-classification-for-medical-image-segmentation
    Explore at:
    Dataset updated
    Dec 16, 2024
    Description

    Segment Anything Model (SAM) is a prompt-driven foundation model for natural image segmentation, which is trained on the large-scale SA-1B dataset of 1B masks and 11M images.

  6. Dut Sam Dataset

    • universe.roboflow.com
    zip
    Updated Aug 7, 2024
    Cite
    dutsam (2024). Dut Sam Dataset [Dataset]. https://universe.roboflow.com/dutsam/dut-sam/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 7, 2024
    Dataset authored and provided by
    dutsam
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Fire Polygons
    Description

    Dut Sam

    ## Overview
    
    Dut Sam is a dataset for instance segmentation tasks - it contains Fire annotations for 2,950 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  7. Sam 2.1 Dataset

    • universe.roboflow.com
    zip
    Updated Mar 22, 2025
    Cite
    Fall (2025). Sam 2.1 Dataset [Dataset]. https://universe.roboflow.com/fall-hjxzu/sam-2.1/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 22, 2025
    Dataset authored and provided by
    Fall
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Anything Polygons
    Description

    SAM 2.1

    ## Overview
    
    SAM 2.1 is a dataset for instance segmentation tasks - it contains Anything annotations for 3,110 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  8. Data from: Unlocking the Power of SAM 2 for Few-Shot Segmentation

    • researchdata.ntu.edu.sg
    Updated May 22, 2025
    Cite
    DR-NTU (Data) (2025). Unlocking the Power of SAM 2 for Few-Shot Segmentation [Dataset]. http://doi.org/10.21979/N9/XIDXVT
    Explore at:
    Dataset updated
    May 22, 2025
    Dataset provided by
    DR-NTU (Data)
    License

    https://researchdata.ntu.edu.sg/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.21979/N9/XIDXVT

    Dataset funded by
    RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative
    Description

    Few-Shot Segmentation (FSS) aims to learn class-agnostic segmentation on few classes to segment arbitrary classes, but at the risk of overfitting. To address this, some methods use the well-learned knowledge of foundation models (e.g., SAM) to simplify the learning process. Recently, SAM 2 has extended SAM by supporting video segmentation, whose class-agnostic matching ability is useful to FSS. A simple idea is to encode support foreground (FG) features as memory, with which query FG features are matched and fused. Unfortunately, the FG objects in different frames of SAM 2's video data are always the same identity, while those in FSS are different identities, i.e., the matching step is incompatible. Therefore, we design Pseudo Prompt Generator to encode pseudo query memory, matching with query features in a compatible way. However, the memories can never be as accurate as the real ones, i.e., they are likely to contain incomplete query FG, but some unexpected query background (BG) features, leading to wrong segmentation. Hence, we further design Iterative Memory Refinement to fuse more query FG features into the memory, and devise a Support-Calibrated Memory Attention to suppress the unexpected query BG features in memory. Extensive experiments have been conducted on PASCAL-5i and COCO-20i to validate the effectiveness of our design, e.g., the 1-shot mIoU can be 4.2% better than the best baseline.
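The abstract reports its gains in mIoU. As a reminder of what that metric is (a standard definition, not the authors' code), mean intersection-over-union averages per-class IoU over flat label arrays:

```python
def miou(pred, gt, classes):
    """Mean intersection-over-union across classes, computed from
    flat (per-pixel) predicted and ground-truth label sequences."""
    ious = []
    for c in classes:
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:  # skip classes absent from both prediction and truth
            ious.append(inter / union)
    return sum(ious) / len(ious)

pred = [0, 0, 1, 1, 1]
gt = [0, 1, 1, 1, 0]
print(round(miou(pred, gt, [0, 1]), 4))  # 0.4167
```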

  9. GoodSAM: Bridging Domain and Capacity Gaps via Segment Anything Model -...

    • service.tib.eu
    Updated Dec 2, 2024
    Cite
    (2024). GoodSAM: Bridging Domain and Capacity Gaps via Segment Anything Model - Dataset - LDM [Dataset]. https://service.tib.eu/ldmservice/dataset/goodsam--bridging-domain-and-capacity-gaps-via-segment-anything-model
    Explore at:
    Dataset updated
    Dec 2, 2024
    Description

    This paper tackles a novel problem: how to transfer knowledge from the emerging Segment Anything Model (SAM) to learn a compact panoramic semantic segmentation model, i.e., student, without requiring any labeled data.

  10. Yolo11 And Sam Dataset

    • universe.roboflow.com
    zip
    Updated Jul 29, 2025
    Cite
    Tension crack detection (2025). Yolo11 And Sam Dataset [Dataset]. https://universe.roboflow.com/tension-crack-detection/yolo11-and-sam/model/3
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 29, 2025
    Dataset authored and provided by
    Tension crack detection
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    KAVA Polygons
    Description

    Yolo11 And Sam

    ## Overview
    
    Yolo11 And Sam is a dataset for instance segmentation tasks - it contains KAVA annotations for 1,000 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  11. Sam Semantic Segmentation Dataset

    • universe.roboflow.com
    zip
    Updated Jun 5, 2023
    Cite
    masters (2023). Sam Semantic Segmentation Dataset [Dataset]. https://universe.roboflow.com/masters-1u0oz/sam-semantic-segmentation/dataset/8
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 5, 2023
    Dataset authored and provided by
    masters
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Septoria StripeRust Healthy Masks
    Description

    Sam Semantic Segmentation

    ## Overview
    
    Sam Semantic Segmentation is a dataset for semantic segmentation tasks - it contains Septoria StripeRust Healthy annotations for 405 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  12. SIIM Effnet SAM model

    • kaggle.com
    Updated Jun 29, 2021
    Cite
    v1nor1 (2021). SIIM Effnet SAM model [Dataset]. https://www.kaggle.com/datasets/v1olet1nor1/siim-effnet-sam-model/data
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jun 29, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    v1nor1
    Description

    Dataset

    This dataset was created by v1nor1


  13. Data from: Neural and cognitive dynamics leading to the formation of strong...

    • neurovault.org
    zip
    Updated Mar 28, 2025
    Cite
    (2025). Neural and cognitive dynamics leading to the formation of strong memories: A meta-analysis and the SAM model [Dataset]. http://identifiers.org/neurovault.collection:13389
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 28, 2025
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    A collection of 4 brain maps. Each brain map is a 3D array of values representing properties of the brain at different locations.

    Collection description

    In this paper, a meta-analytic approach is used to compare the effects of two types of contrast: (1) strong-SM (strongly remembered > forgotten) and (2) general-SM (remembered > forgotten).

  14. Experimental results with the Grid prompt method.

    • figshare.com
    xls
    Updated Jul 15, 2025
    Cite
    Qitong Wang; Emmanuel Chinkaka; Romain Richaud; Mehrnaz Haghdadi; Coryn Wolk; Kopo V. Oromeng; Kyle Frankel Davis; Federica B. Bianco; Xi Peng; Julie Michelle Klinger (2025). Experimental results with the Grid prompt method. [Dataset]. http://doi.org/10.1371/journal.pstr.0000182.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jul 15, 2025
    Dataset provided by
    PLOS Sustainability and Transformation
    Authors
    Qitong Wang; Emmanuel Chinkaka; Romain Richaud; Mehrnaz Haghdadi; Coryn Wolk; Kopo V. Oromeng; Kyle Frankel Davis; Federica B. Bianco; Xi Peng; Julie Michelle Klinger
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The purpose of this paper is to leverage the growth of AI-enabled tools to support the democratization of mine observation (MO) research. Mining is essential to meet projected demand for renewable energy technologies crucial to global climate mitigation objectives, but all mining activities pose local and regional challenges to environmental sustainability. Such challenges can be mitigated by good governance, but unequal access among stakeholders to accurately interpreted satellite imagery can weaken good governance. Using readily available software—QGIS, and Segment Anything Model (SAM)—this paper develops and tests the reliability of MO-SAM, a new method to identify and delineate features within the spatially-explicit mine extent at a high level of detail. It focuses on dry tailings, waste dumps, and stockpiles in above-ground mining areas. While we intend for MO-SAM to be used generally, this study tested it on mining areas for energy-critical materials: lithium (Li), cobalt (Co), rare earth elements (REE), and platinum group elements (PGE), selected for their importance to the global transition to renewable energy. MO-SAM demonstrates generalizability through prompt engineering, but performance limitations were observed in imagery with complex mining landscape scenarios, including spatial variations in image morphology and boundary sharpness. Our analysis provides data-driven insights to support advances in the use of MO-SAM for analyzing and monitoring large-scale mining activities with greater speed than methods that rely on manual delineation, and with greater precision than practices that focus primarily on changes in the spatially-explicit mine extent. It also provides insights into the importance of multidisciplinary human expertise in designing processes for and assessing the accuracy of AI-assisted remote sensing image segmentation as well as in evaluating the significance of the land use and land cover changes identified. 
This has widespread potential to advance the multidisciplinary application of AI for scientific and public interest, particularly in research on global scale human-environment interactions such as industrial mining activities. This is methodologically significant because the potential and limitations of using large pre-trained image segmentation models such as SAM for analyzing remote sensing data is an emergent and underexplored issue. The results can help advance the utilization of large pre-trained segmentation models for remote sensing imagery analysis to support sustainability research and policy.
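The Grid prompt method named in this table's title seeds SAM with evenly spaced point prompts across the image. A toy sketch of such a generator (an illustration of the general idea, not the authors' code; `step` is a hypothetical spacing parameter):

```python
def grid_prompts(width, height, step):
    """Evenly spaced (x, y) point prompts covering a width x height image,
    offset by half a step so points sit at grid-cell centers."""
    return [(x, y)
            for y in range(step // 2, height, step)
            for x in range(step // 2, width, step)]

print(grid_prompts(100, 100, 50))  # [(25, 25), (75, 25), (25, 75), (75, 75)]
```

Each generated point would then be passed to SAM as a foreground point prompt.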

  15. Airbus Pleiades Neo (PNEO) data information.

    • plos.figshare.com
    xls
    Updated Jul 15, 2025
    Cite
    Qitong Wang; Emmanuel Chinkaka; Romain Richaud; Mehrnaz Haghdadi; Coryn Wolk; Kopo V. Oromeng; Kyle Frankel Davis; Federica B. Bianco; Xi Peng; Julie Michelle Klinger (2025). Airbus Pleiades Neo (PNEO) data information. [Dataset]. http://doi.org/10.1371/journal.pstr.0000182.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jul 15, 2025
    Dataset provided by
    PLOS Sustainability and Transformation
    Authors
    Qitong Wang; Emmanuel Chinkaka; Romain Richaud; Mehrnaz Haghdadi; Coryn Wolk; Kopo V. Oromeng; Kyle Frankel Davis; Federica B. Bianco; Xi Peng; Julie Michelle Klinger
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This table, from the same MO-SAM study described under item 14, documents the Airbus Pleiades Neo (PNEO) satellite imagery used to identify and delineate mine features (dry tailings, waste dumps, and stockpiles) with QGIS and the Segment Anything Model (SAM); see item 14 for the full paper abstract.

  16. Clinical Utility of Foundation Segmentation Models in Musculoskeletal MRI –...

    • figshare.com
    xlsx
    Updated Jul 24, 2025
    Cite
    Gabrielle Hoyer; Michelle W. Tong; Rupsa Bhattacharjee; Valentina Pedoia; Sharmila Majumdar (2025). Clinical Utility of Foundation Segmentation Models in Musculoskeletal MRI – Supplementary Tables and Study Data [Dataset]. http://doi.org/10.6084/m9.figshare.29633207.v1
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jul 24, 2025
    Dataset provided by
    figshare
    Authors
    Gabrielle Hoyer; Michelle W. Tong; Rupsa Bhattacharjee; Valentina Pedoia; Sharmila Majumdar
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This item contains all supplementary tables (S0–S25) and study-generated data tables (D1–D54) referenced in Clinical Utility of Foundation Segmentation Models in Musculoskeletal MRI: Biomarker Fidelity and Predictive Outcomes. Supplementary tables include summary statistics, segmentation evaluations, statistical tests, and validation metrics presented in the manuscript and supplementary PDF. Data tables include subject-level and slice-level segmentation scores, biomarker measurements, mixed-effects model outputs, and MRI imaging parameters. A clickable table of contents is provided on the first sheet to enable navigation across tables.

  17. Data from: System Advisor Model (SAM)

    • data.amerigeoss.org
    • datadiscoverystudio.org
    • +1more
    html
    Updated Jul 29, 2019
    Cite
    United States (2019). System Advisor Model (SAM) [Dataset]. https://data.amerigeoss.org/dataset/groups/system-advisor-model-sam-2ac84
    Explore at:
    Available download formats: html
    Dataset updated
    Jul 29, 2019
    Dataset provided by
    United States
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The System Advisor Model (SAM) is a performance and financial model designed to facilitate decision making for people involved in the renewable energy industry. SAM makes performance predictions and cost of energy estimates for grid-connected power projects based on installation and operating costs and system design parameters that you specify as inputs to the model. Projects can be either on the customer side of the utility meter, buying and selling electricity at retail rates, or on the utility side of the meter, selling electricity at a price negotiated through a power purchase agreement (PPA).
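SAM's actual performance and financial models are far more detailed, but the core idea of levelizing installation costs against annual energy output can be sketched as follows (the `simple_lcoe` helper and all figures are illustrative assumptions, not SAM's formulas):

```python
def simple_lcoe(capital_cost, fixed_om_per_year, annual_kwh,
                discount_rate, years):
    """Toy levelized cost of energy ($/kWh): annualize capital with a
    capital recovery factor, add fixed O&M, divide by annual generation."""
    crf = (discount_rate * (1 + discount_rate) ** years
           / ((1 + discount_rate) ** years - 1))
    return (capital_cost * crf + fixed_om_per_year) / annual_kwh

# e.g. a $1M plant, $20k/yr O&M, 2 GWh/yr output, 5% rate over 25 years
print(round(simple_lcoe(1_000_000, 20_000, 2_000_000, 0.05, 25), 3))  # ~0.045
```

A PPA price negotiated above this figure would make the hypothetical project cash-flow positive; SAM performs this comparison with full cash-flow modeling rather than a single ratio.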

  18. Sam Mla Dataset

    • universe.roboflow.com
    zip
    Updated Mar 19, 2024
    Cite
    haoyuwa (2024). Sam Mla Dataset [Dataset]. https://universe.roboflow.com/haoyuwa/sam-mla-unjsj
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 19, 2024
    Dataset authored and provided by
    haoyuwa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Particles Polygons
    Description

    SAM MLA

    ## Overview
    
    SAM MLA is a dataset for instance segmentation tasks - it contains Particles annotations for 200 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  19. Pearson correlation between MO-SAM performance and image...

    • plos.figshare.com
    xls
    Updated Jul 15, 2025
    Cite
    Qitong Wang; Emmanuel Chinkaka; Romain Richaud; Mehrnaz Haghdadi; Coryn Wolk; Kopo V. Oromeng; Kyle Frankel Davis; Federica B. Bianco; Xi Peng; Julie Michelle Klinger (2025). Pearson correlation between MO-SAM performance and image foreground-background contrast. [Dataset]. http://doi.org/10.1371/journal.pstr.0000182.t005
    Explore at:
    Available download formats: xls
    Dataset updated
    Jul 15, 2025
    Dataset provided by
    PLOS Sustainability and Transformation
    Authors
    Qitong Wang; Emmanuel Chinkaka; Romain Richaud; Mehrnaz Haghdadi; Coryn Wolk; Kopo V. Oromeng; Kyle Frankel Davis; Federica B. Bianco; Xi Peng; Julie Michelle Klinger
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Pearson correlation between MO-SAM performance and image foreground-background contrast.
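The Pearson coefficient reported in this table is a standard statistic; a stdlib-only sketch of its computation (the contrast and score values below are made-up placeholders, not the study's data):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# made-up example: segmentation score vs. foreground-background contrast
contrast = [0.2, 0.4, 0.6, 0.8]
score = [0.50, 0.55, 0.70, 0.90]
print(round(pearson(contrast, score), 2))  # 0.97
```

Python 3.10+ also ships this as `statistics.correlation`.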

  20. Experimental results for the Random prompt generation method for each image...

    • plos.figshare.com
    xls
    Updated Jul 15, 2025
    Cite
    Qitong Wang; Emmanuel Chinkaka; Romain Richaud; Mehrnaz Haghdadi; Coryn Wolk; Kopo V. Oromeng; Kyle Frankel Davis; Federica B. Bianco; Xi Peng; Julie Michelle Klinger (2025). Experimental results for the Random prompt generation method for each image in our dataset. [Dataset]. http://doi.org/10.1371/journal.pstr.0000182.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    Jul 15, 2025
    Dataset provided by
    PLOS Sustainability and Transformation
    Authors
    Qitong Wang; Emmanuel Chinkaka; Romain Richaud; Mehrnaz Haghdadi; Coryn Wolk; Kopo V. Oromeng; Kyle Frankel Davis; Federica B. Bianco; Xi Peng; Julie Michelle Klinger
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Experimental results for the Random prompt generation method for each image in our dataset.
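Unlike the Grid prompt method of item 14, Random prompt generation samples point prompts uniformly over the image. A toy, seeded sketch of the general idea (an assumption for illustration, not the authors' code):

```python
import random

def random_prompts(width, height, n, seed=0):
    """n uniformly random (x, y) point prompts inside a width x height
    image; seeding makes the sample reproducible."""
    rng = random.Random(seed)
    return [(rng.randrange(width), rng.randrange(height)) for _ in range(n)]

pts = random_prompts(640, 480, 5)
print(pts)  # identical on every run because the generator is seeded
```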
