62 datasets found
  1. Data from: Segment Anything Model (SAM)

    • hub.arcgis.com
    • uneca.africageoportal.com
    Updated Apr 17, 2023
    Cite
    Esri (2023). Segment Anything Model (SAM) [Dataset]. https://hub.arcgis.com/content/9b67b441f29f4ce6810979f5f0667ebe
    Explore at:
    Dataset updated
    Apr 17, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    Segmentation models perform pixel-wise classification, assigning each pixel in an image to a class; the classified pixels correspond to different objects or regions. These models have a wide variety of use cases across multiple domains. When used with satellite and aerial imagery, they can help identify features such as building footprints, roads, water bodies, and crop fields.

    Generally, every segmentation model needs to be trained from scratch on a dataset labeled with the objects of interest, which can be an arduous and time-consuming task. Meta's Segment Anything Model (SAM) aims to be a foundation model that can segment (as the name suggests) anything using zero-shot learning and generalize across domains without additional training. SAM is trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks. This makes the model highly robust at identifying object boundaries and differentiating between objects across domains, even ones it has never seen before. Use this model to extract masks of various objects in any image.

    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

    Fine-tuning the model: This model can be fine-tuned using the SamLoRA architecture in ArcGIS. Follow the guide and refer to this sample notebook to fine-tune this model.

    Input: 8-bit, 3-band imagery.

    Output: Feature class containing masks of various objects in the image.

    Applicable geographies: The model is expected to work globally.

    Model architecture: This model is based on the open-source Segment Anything Model (SAM) by Meta.

    Training data: This model has been trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks.

    Sample results: Here are a few results from the model.
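
    For readers who want to try the zero-shot workflow this card describes outside ArcGIS, here is a minimal sketch using Meta's open-source segment-anything package; the checkpoint file is the released ViT-H weight, and the input path is a placeholder.

```python
# Minimal zero-shot mask extraction with Meta's open-source segment-anything
# package (pip install segment-anything); "scene.tif" is a placeholder path.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects 8-bit, 3-band (RGB) input, matching the model card above.
image = cv2.cvtColor(cv2.imread("scene.tif"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts: "segmentation", "bbox", "area", ...
print(f"extracted {len(masks)} masks")
```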

  2. Brain MRI segmentation Dataset Annotated with SAM

    • kaggle.com
    zip
    Updated May 21, 2023
    Cite
    nessim ben abbes (2023). Brain MRI segmentation Dataset Annotated with SAM [Dataset]. https://www.kaggle.com/datasets/nessimbenabbes/brain-mri-segmentation-dataset-annotated-with-sam
    Explore at:
    Available download formats: zip (6503189 bytes)
    Dataset updated
    May 21, 2023
    Authors
    nessim ben abbes
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    This dataset is a comprehensive collection of brain Magnetic Resonance Imaging (MRI) scans, meticulously annotated with the Segment Anything Model (SAM). The data is stored in CSV format for easy access and manipulation.

    Content: The dataset contains MRI scans of the brain, each annotated with SAM. The annotations provide detailed information about the segmentation of various structures present in the scans, and the dataset is designed to aid in developing and validating algorithms for automatic brain-structure segmentation.
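
    Because the card does not document the CSV schema, a sensible first step is simply to inspect it; the file name below is a guess at what the Kaggle zip contains, not a documented path.

```python
# Inspect the CSV packaging before writing any parsing code; the file name
# is an assumption, not a documented field of this dataset.
import pandas as pd

df = pd.read_csv("brain_mri_sam_annotations.csv")
print(df.columns.tolist())  # discover the actual schema (image refs, mask encoding, ...)
print(df.head())
```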

  3. Enhancing-Segment-Anything-Model-with-Prioritized-Memory-For-Efficient-Image-Embeddings

    • huggingface.co
    Updated Apr 1, 2025
    Cite
    pandey (2025). Enhancing-Segment-Anything-Model-with-Prioritized-Memory-For-Efficient-Image-Embeddings [Dataset]. https://huggingface.co/datasets/vinit000/Enhancing-Segment-Anything-Model-with-Prioritized-Memory-For-Efficient-Image-Embeddings
    Explore at:
    Dataset updated
    Apr 1, 2025
    Authors
    pandey
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Segment Anything Model (SAM) with Prioritized Memory

    Overview: The Segment Anything Model (SAM) by Meta is a state-of-the-art image segmentation model leveraging vision transformers. However, it suffers from high memory usage and computational inefficiencies. Our research introduces a prioritized memory mechanism to enhance SAM's performance while optimizing resource consumption.

    Methodology: We propose a structured memory hierarchy to efficiently manage image embeddings and self-attention… See the full description on the dataset page: https://huggingface.co/datasets/vinit000/Enhancing-Segment-Anything-Model-with-Prioritized-Memory-For-Efficient-Image-Embeddings.
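
    The repository can presumably be pulled with the Hugging Face datasets library in the usual way; the split layout depends on how the authors configured the repo.

```python
# Load the dataset card's repository with the Hugging Face `datasets` library.
from datasets import load_dataset

ds = load_dataset(
    "vinit000/Enhancing-Segment-Anything-Model-with-Prioritized-Memory-For-Efficient-Image-Embeddings"
)
print(ds)  # prints the available splits and feature schema
```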

  4. SA-1B (Segment Anything)

    • opendatalab.com
    zip
    Updated May 1, 2023
    + more versions
    Cite
    Meta AI Research (2023). SA-1B(segment anything) [Dataset]. https://opendatalab.com/OpenDataLab/SA-1B
    Explore at:
    Available download formats: zip
    Dataset updated
    May 1, 2023
    Dataset provided by
    Meta AI Research
    License

    https://ai.facebook.com/datasets/segment-anything-downloads/

    Description

    Segment Anything 1 Billion (SA-1B) is a dataset designed for training general-purpose object segmentation models from open-world images.

    SA-1B consists of 11M diverse, high-resolution, privacy-protecting images and 1.1B high-quality segmentation masks collected with our data engine. It is intended to be used for computer vision research for the purposes permitted under our Data License.

    The images are licensed from a large photo company. The 1.1B masks were produced using our data engine; all of them were generated fully automatically by the Segment Anything Model (SAM).
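
    For reference, the released SA-1B annotations are per-image JSON files whose masks are stored as COCO run-length encodings, decodable with pycocotools; the file path below is hypothetical.

```python
# Decode one image's SA-1B masks from its per-image JSON record.
import json
from pycocotools import mask as mask_utils

with open("sa_000020/sa_1.json") as f:  # hypothetical path from one extracted shard
    record = json.load(f)

for ann in record["annotations"]:
    m = mask_utils.decode(ann["segmentation"])  # HxW uint8 binary mask
    print(ann["id"], int(m.sum()), "foreground pixels")
```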

  5. Appendix figures.

    • figshare.com
    zip
    Updated Sep 8, 2025
    Cite
    Alexandra Dunnum VandeLoo; Nathan J. Malta; Saahil Sanganeriya; Emilio Aponte; Caitlin van Zyl; Danfei Xu; Craig Forest (2025). Appendix figures. [Dataset]. http://doi.org/10.1371/journal.pone.0319532.s002
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 8, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Alexandra Dunnum VandeLoo; Nathan J. Malta; Saahil Sanganeriya; Emilio Aponte; Caitlin van Zyl; Danfei Xu; Craig Forest
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: When analyzing cells in culture, assessing cell morphology (shape), confluency (density), and growth patterns is necessary for understanding cell health. These parameters are generally obtained by a skilled biologist inspecting light microscope images, but this can become very laborious for high-throughput applications. One way to speed up this process is to automate cell segmentation, the task of drawing a separate boundary around each cell in a microscope image. This task is made difficult by vague cell boundaries and the transparent nature of cells. Many techniques for automatic cell segmentation exist, but these methods often require annotated datasets, model retraining, and associated technical expertise.

    Results: We present SAMCell, a modified version of Meta's Segment Anything Model (SAM) trained on an existing large-scale dataset of microscopy images containing varying cell types and confluency. Our approach works on a wide range of microscopy images, including cell types not seen in training and images taken by a different microscope. We also present a user-friendly UI that reduces the technical expertise needed for this automated microscopy technique.

    Conclusions: Using SAMCell, biologists can quickly and automatically obtain cell segmentation results of higher quality than previous methods. Further, these results can be obtained through our custom graphical user interface, decreasing the human labor required in cell culturing.

  6. Tree Segmentation

    • morocco.africageoportal.com
    • angola.africageoportal.com
    • +2 more
    Updated May 18, 2023
    + more versions
    Cite
    Esri (2023). Tree Segmentation [Dataset]. https://morocco.africageoportal.com/content/6d910b29ff38406986da0abf1ce50836
    Explore at:
    Dataset updated
    May 18, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    This deep learning model is used to detect and segment trees in high-resolution drone or aerial imagery. Tree detection can be used for applications such as vegetation management, forestry, and urban planning. High-resolution aerial and drone imagery is well suited to tree detection because of its high spatio-temporal coverage. This deep learning model is based on DeepForest and has been trained on data from the National Ecological Observatory Network (NEON). The model also uses the Segment Anything Model (SAM) by Meta.

    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

    Fine-tuning the model: This model cannot be fine-tuned using ArcGIS tools.

    Input: 8-bit, 3-band high-resolution (10-25 cm) imagery.

    Output: Feature class containing separate masks for each tree.

    Applicable geographies: The model is expected to work well in the United States.

    Model architecture: This model is based upon the DeepForest Python package, which uses the RetinaNet model architecture implemented in torchvision, and the open-source Segment Anything Model (SAM) by Meta.

    Accuracy metrics: This model has a precision score of 0.66 and a recall of 0.79.

    Training data: This model has been trained on the NEON Tree Benchmark dataset, provided by the Weecology Lab at the University of Florida. The model also uses the Segment Anything Model (SAM) by Meta, which is trained on the 1-Billion mask dataset (SA-1B) comprising a diverse set of 11 million images and over 1 billion masks.

    Sample results: Here are a few results from the model.

    Citations:
    • Weinstein, B.G.; Marconi, S.; Bohlman, S.; Zare, A.; White, E. Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks. Remote Sens. 2019, 11, 1309.
    • Weinstein, B.; Marconi, S.; Bohlman, S.; Zare, A.; White, E.P. Geographic Generalization in Airborne RGB Deep Learning Tree Detection. bioRxiv 790071; doi: https://doi.org/10.1101/790071.
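
    Outside ArcGIS, the same two-stage idea can be sketched with the open-source DeepForest and segment-anything packages: DeepForest proposes tree-crown boxes, and each box prompts SAM for a mask. This is not the Esri pipeline itself; the image path and checkpoint are placeholders.

```python
# Two-stage sketch: DeepForest tree-crown boxes -> SAM box-prompted masks.
import cv2
import numpy as np
from deepforest import main as deepforest_main
from segment_anything import sam_model_registry, SamPredictor

detector = deepforest_main.deepforest()
detector.use_release()                                  # pretrained NEON release
boxes = detector.predict_image(path="drone_tile.png")   # DataFrame: xmin, ymin, xmax, ymax, ...

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)
image = cv2.cvtColor(cv2.imread("drone_tile.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

tree_masks = []
for _, row in boxes.iterrows():
    box = np.array([row.xmin, row.ymin, row.xmax, row.ymax])
    masks, _, _ = predictor.predict(box=box, multimask_output=False)
    tree_masks.append(masks[0])                         # one boolean HxW mask per tree
```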

  7. SA-V: Test Dataset

    • universe.roboflow.com
    zip
    Updated Nov 21, 2025
    + more versions
    Cite
    SA-Co VEVal Dataset (2025). Sa V: Test Dataset [Dataset]. https://universe.roboflow.com/sa-co-veval/sa-v-test
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 21, 2025
    Dataset authored and provided by
    SA-Co VEVal Dataset
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Objects BOq2 Polygons
    Description

    SA-Co/VEval – SA-V: Test

    SA-Co/VEval is an evaluation dataset for promptable concept segmentation (PCS) developed by Meta for the Segment Anything 3 model (SAM 3). The dataset contains videos paired with text labels (also referred to as noun phrases, or NPs), each annotated exhaustively with masks on all object instances that match the label.

    This project allows you to explore SA-V: Test, the test split of the SA-V subset. You can see the val split at SA-V: Val.

    Download Instructions

    The full SA-Co/VEval data is available in its canonical, eval-ready form below.

    Download SA-V video frames: https://sa-co.roboflow.com/veval/saco_sav.zip

    Download YT-1B video frames: https://sa-co.roboflow.com/veval/saco_yt1b.zip

    Download SmartGlasses video frames: https://sa-co.roboflow.com/veval/saco_sg.zip

    Download ground truth annotations: https://sa-co.roboflow.com/veval/gt-annotations.zip

    Download the full bundle: https://sa-co.roboflow.com/veval/all.zip

    Explore all SA-Co/VEval datasets

    The SA-Co/VEval dataset covers three sources: SA-V, YT-Temporal-1B, and SmartGlasses.

    Explore all: SA-Co/VEval on Roboflow Universe

    Read Meta's data license for SA-Co/VEval: License
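
    A small stdlib-only helper for fetching and unpacking the archives listed above (shown for two of the URLs; extend the list as needed):

```python
# Download and unpack SA-Co/VEval archives using only the standard library.
import pathlib
import urllib.request
import zipfile

URLS = [
    "https://sa-co.roboflow.com/veval/saco_sav.zip",
    "https://sa-co.roboflow.com/veval/gt-annotations.zip",
]

out = pathlib.Path("saco_veval")
out.mkdir(exist_ok=True)
for url in URLS:
    dest = out / url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, dest)      # fetch the zip
    with zipfile.ZipFile(dest) as zf:
        zf.extractall(out / dest.stem)         # unpack into a sibling folder
```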

  8. Data Sheet 1_Assisting human annotation of marine images with foundation models

    • frontiersin.figshare.com
    pdf
    Updated Jul 24, 2025
    Cite
    Eric C. Orenstein; Benjamin Woodward; Lonny Lundsten; Kevin Barnard; Brian Schlining; Kakani Katija (2025). Data Sheet 1_Assisting human annotation of marine images with foundation models.pdf [Dataset]. http://doi.org/10.3389/fmars.2025.1469396.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jul 24, 2025
    Dataset provided by
    Frontiers
    Authors
    Eric C. Orenstein; Benjamin Woodward; Lonny Lundsten; Kevin Barnard; Brian Schlining; Kakani Katija
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Marine scientists have been leveraging supervised machine learning algorithms to analyze image and video data for nearly two decades. There have been many advances, but the cost of generating expert human annotations to train new models remains extremely high. There is broad recognition both in computer and domain sciences that generating training data remains the major bottleneck when developing ML models for targeted tasks. Increasingly, computer scientists are not attempting to produce highly-optimized models from general annotation frameworks, instead focusing on adaptation strategies to tackle new data challenges. Taking inspiration from large language models, computer vision researchers are now thinking in terms of “foundation models” that can yield reasonable zero- and few-shot detection and segmentation performance with human prompting. Here we consider the utility of this approach for ocean imagery, leveraging Meta’s Segment Anything Model to enrich ocean image annotations based on existing labels. This workflow yields promising results, especially for modernizing existing data repositories. Moreover, it suggests that future human annotation efforts could use foundation models to speed progress toward a sufficient training set to address domain specific problems.

  9. Welding Defect Segmentation Dataset

    • kaggle.com
    zip
    Updated Apr 27, 2025
    Cite
    kildongGo (2025). Welding Defect Segmentation Dataset [Dataset]. https://www.kaggle.com/datasets/bemorekgg/welding-defect-segmentation-dataset/data
    Explore at:
    Available download formats: zip (86368218 bytes)
    Dataset updated
    Apr 27, 2025
    Authors
    kildongGo
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Welding Defect Segmentation Dataset

    This dataset contains segmentation masks for welding defects, created using the Segment Anything Model (SAM) from Meta AI. The segmentation masks were generated from bounding box annotations in YOLO format.

    Dataset Description

    The Welding Defect Segmentation Dataset is an extension of "The Welding Defect Dataset v2" with added segmentation masks. This dataset is designed for:

    • Instance segmentation of welding defects
    • Precise defect area identification
    • Training and evaluating segmentation models for welding inspection

    Contents

    The dataset includes:

    • Original images of welding samples
    • YOLO format bounding box annotations
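
    The generation step the authors describe, converting each YOLO box to pixel XYXY coordinates and using it as a SAM box prompt, can be sketched roughly as follows; file names and the checkpoint are placeholders.

```python
# Sketch: YOLO-format boxes (class cx cy w h, normalized) -> SAM box prompts.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

image = cv2.cvtColor(cv2.imread("weld_001.jpg"), cv2.COLOR_BGR2RGB)
h, w = image.shape[:2]

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)

defect_masks = []
with open("weld_001.txt") as f:
    for line in f:
        _, cx, cy, bw, bh = map(float, line.split())
        box = np.array([(cx - bw / 2) * w, (cy - bh / 2) * h,
                        (cx + bw / 2) * w, (cy + bh / 2) * h])
        masks, _, _ = predictor.predict(box=box, multimask_output=False)
        defect_masks.append(masks[0])          # boolean HxW mask for this defect
```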

  10. YT-Temporal-1B: Val Dataset

    • universe.roboflow.com
    zip
    Updated Nov 21, 2025
    Cite
    SA-Co VEVal Dataset (2025). Yt Temporal 1b: Val Dataset [Dataset]. https://universe.roboflow.com/sa-co-veval/yt-temporal-1b-val
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 21, 2025
    Dataset authored and provided by
    SA-Co VEVal Dataset
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Objects Polygons
    Description

    SA-Co/VEval – YT-Temporal-1B: Val

    SA-Co/VEval is an evaluation dataset for promptable concept segmentation (PCS) developed by Meta for the Segment Anything 3 model (SAM 3). The dataset contains videos paired with text labels (also referred to as noun phrases, or NPs), each annotated exhaustively with masks on all object instances that match the label.

    This project allows you to explore YT-Temporal-1B: Val, the val split of the YT-Temporal-1B subset. You can see the test split at YT-Temporal-1B: Test.

    Download Instructions

    The full SA-Co/VEval data is available in its canonical, eval-ready form below.

    Download SA-V video frames: https://sa-co.roboflow.com/veval/saco_sav.zip

    Download YT-1B video frames: https://sa-co.roboflow.com/veval/saco_yt1b.zip

    Download SmartGlasses video frames: https://sa-co.roboflow.com/veval/saco_sg.zip

    Download ground truth annotations: https://sa-co.roboflow.com/veval/gt-annotations.zip

    Download the full bundle: https://sa-co.roboflow.com/veval/all.zip

    Explore all SA-Co/VEval datasets

    The SA-Co/VEval dataset covers three sources: SA-V, YT-Temporal-1B, and SmartGlasses.

    Explore all: SA-Co/VEval on Roboflow Universe

    Read Meta's data license for SA-Co/VEval: License

  11. Meta Segmentation Dataset

    • universe.roboflow.com
    zip
    Updated Apr 3, 2025
    Cite
    Test me (2025). Meta Segmentation Dataset [Dataset]. https://universe.roboflow.com/test-me/meta-segmentation/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 3, 2025
    Dataset authored and provided by
    Test me
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Ruble Polygons
    Description

    Meta Segmentation

    ## Overview

    Meta Segmentation is a dataset for instance segmentation tasks. It contains Ruble annotations for 1,107 images.

    ## Getting Started

    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.

    ## License

    This dataset is available under the CC BY 4.0 license: https://creativecommons.org/licenses/by/4.0/
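
    One way to pull the dataset programmatically is the roboflow pip package; the API key is a placeholder, and the export format is one choice among several Roboflow offers.

```python
# Programmatic download via the `roboflow` package (pip install roboflow).
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")                       # placeholder key
project = rf.workspace("test-me").project("meta-segmentation")
dataset = project.version(1).download("coco")               # export format is a choice
print(dataset.location)                                     # local folder with the data
```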
    
  12. Meta: annual revenue 2009-2024, by segment

    • statista.com
    • abripper.com
    + more versions
    Cite
    Statista, Meta: annual revenue 2009-2024, by segment [Dataset]. https://www.statista.com/statistics/267031/facebooks-annual-revenue-by-segment/
    Explore at:
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    Worldwide
    Description

    Meta Platforms continues to dominate the digital landscape, with its Family of Apps segment generating a remarkable 162.4 billion U.S. dollars in revenue for 2024. This figure underscores the company's ability to monetize its vast user base across platforms like Facebook, Instagram, Messenger, and WhatsApp, despite facing challenges in recent years.

    Advertising fuels growth amid market fluctuations: Despite experiencing its first-ever year-on-year decline in 2022, Meta rebounded strongly in 2024, with total annual revenue reaching 164.5 billion U.S. dollars. This resilience showcases Meta's adaptability in the face of market changes and its continued appeal to advertisers seeking to reach a global audience.

    Expanding reach and engagement: Facebook was the first social network to surpass one billion registered accounts and currently sits at more than three billion monthly active users. Additionally, 2024 saw an astounding 138.9 million Reels played on Facebook and Instagram every 60 seconds.

  13. A performance comparison between SAMCell and 3 existing cell segmentation baselines

    • plos.figshare.com
    xls
    Updated Sep 8, 2025
    Cite
    Alexandra Dunnum VandeLoo; Nathan J. Malta; Saahil Sanganeriya; Emilio Aponte; Caitlin van Zyl; Danfei Xu; Craig Forest (2025). A performance comparison between SAMCell and 3 existing cell segmentation baselines. [Dataset]. http://doi.org/10.1371/journal.pone.0319532.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Sep 8, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Alexandra Dunnum VandeLoo; Nathan J. Malta; Saahil Sanganeriya; Emilio Aponte; Caitlin van Zyl; Danfei Xu; Craig Forest
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Stardist, Cellpose, and CALT-US models were trained only on LIVECell-train then tested on LIVECell-test (top) and separately trained only on Cyto-train, then tested on Cyto-test (bottom). SAMCell, inheriting SAM’s pretraining, was fine-tuned only on LIVECell-train then tested on LIVECell-test (top) and separately fine-tuned only on Cyto-train and then tested on Cyto-test (bottom).

  14. SA-V Dataset

    • universe.roboflow.com
    zip
    Updated Nov 21, 2025
    Cite
    SA-Co Silver Dataset (2025). Sa V Dataset [Dataset]. https://universe.roboflow.com/sa-co-silver/sa-v
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 21, 2025
    Dataset authored and provided by
    SA-Co Silver Dataset
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Objects 7XPg Polygons
    Description

    SA-Co/Silver – SA-V

    SA-Co/Silver is a benchmark for promptable concept segmentation (PCS) in images developed by Meta for the Segment Anything 3 model (SAM 3). The benchmark contains images paired with text labels (also referred to as noun phrases, or NPs), each annotated exhaustively with masks on all object instances that match the label.

    SA-Co/Silver comprises 10 subsets covering a diverse array of domains, including food, art, robotics, and driving.

    This project allows you to explore images and annotations from the SA-V subset.

    Download Instructions

    The SA-Co/Silver data is available in its canonical, eval-ready form below.

    Download ground truth annotations:
    • https://sa-co.roboflow.com/silver/gt-annotations.zip

    Download images:
    • https://sa-co.roboflow.com/silver/geode.zip
    • https://sa-co.roboflow.com/silver/nga.zip
    • https://sa-co.roboflow.com/silver/bdd100k.zip
    • https://sa-co.roboflow.com/silver/inaturalist.zip
    • https://sa-co.roboflow.com/silver/fathomnet.zip
    • https://sa-co.roboflow.com/silver/droid.zip
    • https://sa-co.roboflow.com/silver/sav.zip
    • https://sa-co.roboflow.com/silver/ego4d.zip
    • https://sa-co.roboflow.com/silver/yt1b.zip

    Download the full bundle:
    • https://sa-co.roboflow.com/silver/all.zip

    Explore all SA-Co/Silver datasets

    The SA-Co/Silver dataset covers 10 image sources, 9 of which are explorable in Roboflow. The available image sources are: BDD100k, DROID, Ego4D, GeoDE, iNaturalist-2017, National Gallery of Art, SA-V, YT-Temporal-1B, and FathomNet.

    Explore all: SA-Co/Silver on Roboflow Universe

    Read Meta's data license for SA-Co/Silver: License

  15. Data from: Performance evaluation of semantic segmentation models: a cross meta-frontier DEA approach

    • datasetcatalog.nlm.nih.gov
    • tandf.figshare.com
    Updated Mar 1, 2024
    Cite
    Liang, Liang; Yang, Min; An, Qingxian; Wang, Zixuan (2024). Performance evaluation of semantic segmentation models: a cross meta-frontier DEA approach [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001288024
    Explore at:
    Dataset updated
    Mar 1, 2024
    Authors
    Liang, Liang; Yang, Min; An, Qingxian; Wang, Zixuan
    Description

    Performance evaluation of semantic segmentation models is an essential task because it helps to identify the best-performing model. Traditional methods, however, are generally concerned with improving a single quality or quantity, and what causes low performance usually goes unnoticed. To address these issues, a new cross meta-frontier data envelopment analysis (DEA) approach is proposed in this article. To evaluate model performance comprehensively, not only accuracy metrics but also hardware burden and model structure factors are taken as DEA outputs and inputs, respectively. In addition, potential inefficiency is attributed to architectures and backbones via efficiency decomposition, so the approach can find the sources of inefficiency and provide a direction for performance improvement. Finally, based on the proposed approach, the performance of 16 classical semantic segmentation models on the PASCAL VOC dataset is re-evaluated and explained. The results verify that the proposed approach can be considered a comprehensive and interpretable performance evaluation technique that expands traditional accuracy-based measurement.

  16. Data from: Comparative Analysis of different Machine Learning Algorithms for Urban Footprint Extraction in Diverse Urban Contexts Using High-Resolution Remote Sensing Imagery

    • figshare.com
    pptx
    Updated Jul 31, 2024
    Cite
    Baoling Gui (2024). Comparative Analysis of different Machine Learning Algorithms for Urban Footprint Extraction in Diverse Urban Contexts Using High-Resolution Remote Sensing Imagery [Dataset]. http://doi.org/10.6084/m9.figshare.26379301.v2
    Explore at:
    Available download formats: pptx
    Dataset updated
    Jul 31, 2024
    Dataset provided by
    figshare
    Authors
    Baoling Gui
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The data involved in this paper is from https://www.planet.com/explorer/. The resolution is 3 m, and there are three main bands (RGB). Because the platform only allows a limited amount of data to be downloaded under an education account, and the data is retained for only one month, we chose 8 major cities for the study, with 2 images per city. We also provide detailed information on the data visualization and classification results of our tests in a PPT file called paper-result, which can be easily reviewed by reviewers. Reviewers can also download the data to verify the applicability of the results based on the coordinates of the data sources provided in this paper.

    The algorithms consist of three main types. The first is based on traditional algorithms, both object-based and pixel-based, in which we tested the generalization ability of four classifiers: Random Forest, Support Vector Machine, Maximum Likelihood, and K-means. In addition, we tested two of the more mainstream deep learning classification algorithms, U-Net and DeepLabV3, both of which can be found and applied in ArcGIS Pro. The traditional workflow is described at https://pro.arcgis.com/en/pro-app/latest/help/analysis/image-analyst/the-image-classification-wizard.htm, and the related parameter settings and sample selection rules can be found in detail in the article. The deep learning workflow is described at https://pro.arcgis.com/en/pro-app/latest/help/analysis/deep-learning/deep-learning-in-arcgis-pro.htm, with parameter settings and sample selection rules likewise detailed in the article. Finally, the foundation-model approach is based on SAM; the running process of SAM is from https://github.com/facebookresearch/segment-anything, and Meta's official web-based segmentation platform at https://segment-anything.com/ can also be used for testing. However, the official website has restrictions on the format of the data and the scope of processing.

  17. Data from: NutriConv: Dataset adapted from EFSA PANCAKE project

    • portalinvestigacion.uniovi.es
    Updated 2025
    Cite
    Junquera Álvarez, Enol; Díaz, Irene; Remeseiro, Beatriz; Rico, Noelia; Gonzalez-Solares, Sonia (2025). NutriConv: Dataset adapted from EFSA PANCAKE project [Dataset]. https://portalinvestigacion.uniovi.es/documentos/67f62fbcc9d0c3013599a4cd?lang=fr
    Explore at:
    Dataset updated
    2025
    Authors
    Junquera Álvarez, Enol; Díaz, Irene; Remeseiro, Beatriz; Rico, Noelia; Gonzalez-Solares, Sonia
    Description

    This dataset has been adapted from the PANCAKE project developed by the European Food Safety Authority (EFSA), originally designed for dietary assessment in European populations. It contains 210 low-resolution images (182×136 px), each depicting a single food item with associated weight annotations.

    To support the development and evaluation of multitask deep learning models for food classification and weight estimation, each image is labeled with:

    A food category identifier

    The corresponding food weight in grams

    A segmentation mask (PNG) generated using Meta’s Segment Anything Model (SAM), manually refined for pixel-level accuracy

    This dataset was used in the article "NutriConv: A Convolutional Approach for Digital Dietary Tracking trained on EFSA’s PANCAKE Dataset". While the original PANCAKE data was not structured for machine learning, this version includes preprocessed, cleaned, and annotated images in a format suitable for deep learning workflows.

    Contents:

    images/: Cleaned food images

    masks/: Segmentation masks in PNG format

    labels.csv: File containing image names, food class IDs, and weights in grams
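
    A minimal sketch for walking this layout; the CSV column names are assumptions and should be checked against the file.

```python
# Iterate the dataset layout described above (images/, masks/, labels.csv).
import pandas as pd
from PIL import Image

labels = pd.read_csv("labels.csv")
print(labels.columns.tolist())               # verify the real schema first

row = labels.iloc[0]
img = Image.open(f"images/{row['image']}")   # 'image' is an assumed column name
mask = Image.open(f"masks/{row['image']}")   # PNG masks; matching file names are assumed
print(img.size, mask.size, row.to_dict())
```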

    Additionally, we include a subset of the Nutrition5k dataset, reorganized into classes based on unique sets of ingredients, disregarding their quantities. Only combinations appearing in at least ten images were retained, resulting in 896 images grouped into 44 ingredient-based classes. While this class definition introduces visual variability—since different dishes may share ingredients but differ in appearance—it provides a pragmatic approximation aligned with our classification task. This curated subset was used as an external validation set for evaluating the performance of the NutriConv model.

    Thames, Q., Karpur, A., Norris, W., Xia, F., Panait, L., Weyand, T., & Sim, J. (2021). Nutrition5k: Towards automatic nutritional understanding of generic food. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8903-8911).

  18. Pool Segmentation - USA

    • hub.arcgis.com
    Updated May 18, 2023
    Cite
    Esri (2023). Pool Segmentation - USA [Dataset]. https://hub.arcgis.com/content/0d4b8ab238b74da8819df21834338c0d
    Explore at:
    Dataset updated
    May 18, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)
    Area covered
    United States
    Description

    Swimming pools are important for property tax assessment because they impact the value of a property. Tax assessors at local government agencies often rely on expensive and infrequent surveys, leading to assessment inaccuracies. Finding the area of pools that are not on the assessment roll (such as those recently constructed) is valuable to assessors and will ultimately mean additional revenue for the community. This deep learning model helps automate the task of finding the area of pools from high-resolution satellite imagery. It can also benefit swimming pool maintenance companies and help redirect their marketing efforts. Public health and mosquito control agencies can also use this model to detect pools and drive field activity and mitigation efforts.

    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

    Fine-tuning the model: This model cannot be fine-tuned using ArcGIS tools.

    Input: 8-bit, 3-band high-resolution (5-30 cm) imagery.

    Output: Feature class containing masks depicting pools.

    Applicable geographies: The model is expected to work well in the United States.

    Model architecture: The model uses the Faster R-CNN architecture implemented with the ArcGIS API for Python and the open-source Segment Anything Model (SAM) by Meta.

    Accuracy metrics: The model has an average precision score of 0.59.

    Sample results: Here are a few results from the model.

  19. A zero-shot, cross-dataset performance comparison between SAMCell-Generalist, SAMCell-Cyto, SAMCell-LIVECell, and baselines trained on the Cellpose Cytoplasm dataset

    • figshare.com
    xls
    Updated Sep 8, 2025
    Cite
    Alexandra Dunnum VandeLoo; Nathan J. Malta; Saahil Sanganeriya; Emilio Aponte; Caitlin van Zyl; Danfei Xu; Craig Forest (2025). A zero-shot, cross-dataset performance comparison between SAMCell-Generalist (fine-tuned on Cellpose Cytoplasm and LIVECell datasets), SAMCell-Cyto (fine-tuned on Cellpose Cytoplasm dataset), SAMCell-LIVECell (fine-tuned on LIVECell dataset), and baselines trained on the Cellpose Cytoplasm dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0319532.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    Sep 8, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Alexandra Dunnum VandeLoo; Nathan J. Malta; Saahil Sanganeriya; Emilio Aponte; Caitlin van Zyl; Danfei Xu; Craig Forest
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A zero-shot, cross-dataset performance comparison between SAMCell-Generalist (fine-tuned on Cellpose Cytoplasm and LIVECell datasets), SAMCell-Cyto (fine-tuned on Cellpose Cytoplasm dataset), SAMCell-LIVECell (fine-tuned on LIVECell dataset), and baselines trained on the Cellpose Cytoplasm dataset.

  20. K1702 - Kura Clover (Trifolium ambiguum) USDA Accession Image Dataset

    • zenodo.org
    csv, json, zip
    Updated Jul 24, 2025
    Cite
    Bo Meyering; Bo Meyering; Spencer Barriball; Spencer Barriball; Muhammet Şakiroğlu; Muhammet Şakiroğlu; Brandon Schlautman; Brandon Schlautman (2025). K1702 - Kura Clover (Trifolium ambiguum) USDA Accession Image Dataset [Dataset]. http://doi.org/10.5281/zenodo.16412834
    Explore at:
    Available download formats: csv, json, zip
    Dataset updated
    Jul 24, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Bo Meyering; Bo Meyering; Spencer Barriball; Spencer Barriball; Muhammet Şakiroğlu; Muhammet Şakiroğlu; Brandon Schlautman; Brandon Schlautman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Images

    This dataset consists of 1135 images of Kura clover (Trifolium ambiguum) USDA accessions grown at The Land Institute in Salina, Kansas over the 2017 growing season. Each image contains a single Kura clover plant framed by a 1/2" PVC sampling quadrat with internal dimensions of 16"x16" (internal area of 0.165 m2). Kura clover plots were hand weeded to remove all other vegetation except Kura clover. Some images may contain dead clover accessions that are either brown and dried up, or missing entirely. The images were acquired with a Canon EOS Rebel T6 DSLR camera under the following settings:

    • ISO: 200
    • Exposure: Auto
    • Focal Length: Variable (33-40mm)
    • Format: JPEG
    • Size: 5184x3456
    • Metering Mode: Multi-segment

    Images were acquired on two different dates: 2017-06-08 and 2017-07-03 and were named using the following convention "

    Annotations

    The annotations consist of segmentation masks and bounding boxes. Each segmentation mask is saved as a PNG image named using the convention "<IMG_ID>_...". The mask pixel values are:

    • 0: 'soil' background class containing all soil and non-target materials
    • 1: 'quadrat'
    • 2: 'clover'

    We drew bounding boxes for the quadrat, each quadrat corner, and the entire clover plant. The class labels ('obj_det_class_map.json') are as follows:

    • 1: 'clover'
    • 2: 'quadrat'
    • 3: 'quadrat_corner'

    Bounding boxes are in (xmin, ymin, xmax, ymax) format and can be found in 'bboxes.csv'.

    All images were annotated using Labelbox software. Masks were generated from point prompts using Meta's Segment Anything Model (SAM); the point prompts used to generate the masks can be found in 'SAM_points.csv'.

    Additionally, some Kura clover plants died or are not present in the plots where they were planted. The file 'plant_status.csv' indicates which images contain a living plant and which a dead one.
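
    To reproduce a mask from the released prompts, the points can be fed back to SAM as foreground clicks; the CSV column names and file paths below are assumptions, and the checkpoint is a placeholder.

```python
# Regenerate one plant mask from the released SAM point prompts.
import cv2
import numpy as np
import pandas as pd
from segment_anything import sam_model_registry, SamPredictor

points = pd.read_csv("SAM_points.csv")
pts = points[points["image"] == "IMG_0001.JPG"]      # assumed column/file names

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)
image = cv2.cvtColor(cv2.imread("IMG_0001.JPG"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

masks, _, _ = predictor.predict(
    point_coords=pts[["x", "y"]].to_numpy(),         # assumed column names
    point_labels=np.ones(len(pts), dtype=int),       # 1 = foreground prompt
    multimask_output=False,
)
```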

    Train/Val/Test Split

    Of the 1035 images, 1000 were randomly split 80/20 into training and validation sets (n=880, n=220), with the final 35 images reserved as the test holdout set. The file 'data_split.csv' holds the split class for each image.

    This dataset is released under a Creative Commons Attribution 4.0 International license which allows redistribution and re-use of the data herein as long as all authors are appropriately credited.
