5 datasets found
  1. Data from: Segment Anything Model (SAM)

    • morocco-geoportal-powered-by-esri-africa.hub.arcgis.com
    • morocco.africageoportal.com
    • +2 more
    Updated Apr 17, 2023
    Cite
    Esri (2023). Segment Anything Model (SAM) [Dataset]. https://morocco-geoportal-powered-by-esri-africa.hub.arcgis.com/datasets/esri::segment-anything-model-sam
    Explore at:
    Dataset updated
    Apr 17, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    Segmentation models perform pixel-wise classification, assigning each pixel in an image to a class; the classified pixels correspond to different objects or regions. These models have a wide variety of use cases across multiple domains. When used with satellite and aerial imagery, they can help identify features such as building footprints, roads, water bodies, and crop fields.

    Generally, every segmentation model needs to be trained from scratch on a dataset labeled with the objects of interest, which can be an arduous and time-consuming task. Meta's Segment Anything Model (SAM) aims to be a foundation model that can segment (as the name suggests) anything using zero-shot learning, generalizing across domains without additional training. SAM is trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks. This makes the model highly robust at identifying object boundaries and differentiating between objects across domains, even ones it has never seen before. Use this model to extract masks of various objects in any image.

    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

    Fine-tuning the model: This model can be fine-tuned using the SamLoRA architecture in ArcGIS. Follow the guide and refer to this sample notebook to fine-tune the model.

    Input: 8-bit, 3-band imagery.

    Output: Feature class containing masks of various objects in the image.

    Applicable geographies: The model is expected to work globally.

    Model architecture: This model is based on the open-source Segment Anything Model (SAM) by Meta.

    Training data: This model has been trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks.

    Sample results: Here are a few results from the model.
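The input contract stated above (8-bit, 3-band imagery) can be checked programmatically before running inference. A minimal sketch using NumPy; the HWC (height, width, channels) array layout and the function name are assumptions for illustration, not part of the ArcGIS tooling:

```python
import numpy as np

def validate_sam_input(image: np.ndarray) -> None:
    """Check an image array against the stated input contract:
    8-bit pixels, 3 bands. Assumes HWC (height, width, channels) layout."""
    if image.dtype != np.uint8:
        raise ValueError(f"expected 8-bit (uint8) imagery, got {image.dtype}")
    if image.ndim != 3 or image.shape[2] != 3:
        raise ValueError(f"expected 3-band imagery, got shape {image.shape}")

# Example: a synthetic 64x64 RGB tile passes; a single-band tile would not.
rgb_tile = np.zeros((64, 64, 3), dtype=np.uint8)
validate_sam_input(rgb_tile)  # no exception raised
```

A single-band or 16-bit raster would need conversion (band stacking, bit-depth rescaling) before it satisfies this contract.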

  2. Tree Segmentation

    • hub.arcgis.com
    • uneca.africageoportal.com
    • +1 more
    Updated May 18, 2023
    Cite
    Esri (2023). Tree Segmentation [Dataset]. https://hub.arcgis.com/content/6d910b29ff38406986da0abf1ce50836
    Explore at:
    Dataset updated
    May 18, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    This deep learning model detects and segments trees in high-resolution drone or aerial imagery. Tree detection can be used for applications such as vegetation management, forestry, and urban planning. High-resolution aerial and drone imagery is well suited to tree detection due to its high spatio-temporal coverage. This model is based on DeepForest and has been trained on data from the National Ecological Observatory Network (NEON). The model also uses the Segment Anything Model (SAM) by Meta.

    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

    Fine-tuning the model: This model cannot be fine-tuned using ArcGIS tools.

    Input: 8-bit, 3-band high-resolution (10-25 cm) imagery.

    Output: Feature class containing separate masks for each tree.

    Applicable geographies: The model is expected to work well in the United States.

    Model architecture: This model is based upon the DeepForest Python package, which uses the RetinaNet model architecture implemented in torchvision, and the open-source Segment Anything Model (SAM) by Meta.

    Accuracy metrics: This model has a precision score of 0.66 and a recall of 0.79.

    Training data: This model has been trained on the NEON Tree Benchmark dataset, provided by the Weecology Lab at the University of Florida. The model also uses Meta's SAM, which is trained on the Segment Anything 1-Billion mask dataset (SA-1B) comprising a diverse set of 11 million images and over 1 billion masks.

    Sample results: Here are a few results from the model.

    Citations:
    Weinstein, B.G.; Marconi, S.; Bohlman, S.; Zare, A.; White, E. Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks. Remote Sens. 2019, 11, 1309.
    Weinstein, B.; Marconi, S.; Bohlman, S.; Zare, A.; White, E.P. Geographic Generalization in Airborne RGB Deep Learning Tree Detection. bioRxiv 790071; doi: https://doi.org/10.1101/790071.
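The precision (0.66) and recall (0.79) reported for this model can be summarized as a single F1 score, the harmonic mean of the two. A quick sketch:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported metrics for the tree segmentation model.
f1 = f1_score(0.66, 0.79)
print(round(f1, 3))  # 0.719
```

The harmonic mean penalizes imbalance between the two metrics, so F1 sits closer to the lower of the two values than a simple average would.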

  3. K1702 - Kura Clover (Trifolium ambiguum) USDA Accession Image Dataset

    • zenodo.org
    csv, json, zip
    Updated Jul 24, 2025
    Cite
    Bo Meyering; Bo Meyering; Spencer Barriball; Spencer Barriball; Muhammet Şakiroğlu; Muhammet Şakiroğlu; Brandon Schlautman; Brandon Schlautman (2025). K1702 - Kura Clover (Trifolium ambiguum) USDA Accession Image Dataset [Dataset]. http://doi.org/10.5281/zenodo.16412834
    Explore at:
    Available download formats: json, csv, zip
    Dataset updated
    Jul 24, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Bo Meyering; Bo Meyering; Spencer Barriball; Spencer Barriball; Muhammet Şakiroğlu; Muhammet Şakiroğlu; Brandon Schlautman; Brandon Schlautman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Images

    This dataset consists of 1135 images of Kura clover (Trifolium ambiguum) USDA accessions grown at The Land Institute in Salina, Kansas over the 2017 growing season. Each image contains a single Kura clover plant framed by a 1/2" PVC sampling quadrat with internal dimensions of 16"x16" (internal area of 0.165 m2). Kura clover plots were hand weeded to remove all other vegetation except Kura clover. Some images may contain dead clover accessions that are either brown and dried up, or missing entirely. The images were acquired with a Canon EOS Rebel T6 DSLR camera under the following settings:

    • ISO: 200
    • Exposure: Auto
    • Focal Length: Variable (33-40mm)
    • Format: JPEG
    • Size: 5184x3456
    • Metering Mode: Multi-segment

    Images were acquired on two dates, 2017-06-08 and 2017-07-03, and were named using the following convention "

    Annotations

    The annotations consist of segmentation masks and bounding boxes. Each segmentation mask is saved as a PNG image and named using the convention "IMG_ID>_
    The segmentation class labels are as follows:

    • 0: 'soil' background class containing all soil and non-target materials
    • 1: 'quadrat'
    • 2: 'clover'

    We drew bounding boxes for the quadrat, each quadrat corner, and the entire clover plant. The class labels ('obj_det_class_map.json') are as follows:

    • 1: 'clover'
    • 2: 'quadrat'
    • 3: 'quadrat_corner'

    Bounding boxes are in (xmin, ymin, xmax, ymax) format and can be found in 'bboxes.csv'.
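The (xmin, ymin, xmax, ymax) corner format translates directly into box widths, heights, and areas. A small sketch; the numeric values are illustrative, not taken from 'bboxes.csv':

```python
def box_dims(xmin: float, ymin: float, xmax: float, ymax: float):
    """Width, height, and area of a corner-format bounding box."""
    w, h = xmax - xmin, ymax - ymin
    if w <= 0 or h <= 0:
        raise ValueError("xmax/ymax must exceed xmin/ymin")
    return w, h, w * h

# Illustrative box, not real annotation data.
w, h, area = box_dims(100, 200, 350, 420)
print(w, h, area)  # 250 220 55000
```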

    All images were annotated using Labelbox software. Masks were generated from point prompts using Meta's Segment Anything Model (SAM). The point prompts used to generate the masks can be found in 'SAM_points.csv'.

    Additionally, some Kura clover plants died or were never present in the plots where they were planted. The file 'plant_status.csv' indicates whether each image contains a living or dead plant.

    Train/Val/Test Split

    All 1135 images were randomly split: 1100 images were divided 80/20 into training (n=880) and validation (n=220) sets, with the final 35 images reserved for the test holdout set. The file 'data_split.csv' holds the split class for each image.
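The split sizes are easy to sanity-check. A sketch of an equivalent random split using Python's standard library; the authoritative assignment remains 'data_split.csv', and the seed here is arbitrary:

```python
import random

def split_dataset(image_ids, n_test=35, val_frac=0.20, seed=0):
    """Reserve n_test images for the holdout set, then split the rest 80/20."""
    rng = random.Random(seed)
    ids = list(image_ids)
    rng.shuffle(ids)
    test, rest = ids[:n_test], ids[n_test:]
    n_val = int(len(rest) * val_frac)
    return rest[n_val:], rest[:n_val], test  # train, val, test

train, val, test = split_dataset(range(1135))
print(len(train), len(val), len(test))  # 880 220 35
```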

    This dataset is released under a Creative Commons Attribution 4.0 International license which allows redistribution and re-use of the data herein as long as all authors are appropriately credited.

  4. SA-1B (Segment Anything)

    • opendatalab.com
    zip
    Updated May 1, 2023
    + more versions
    Cite
    Meta AI Research (2023). SA-1B(segment anything) [Dataset]. https://opendatalab.com/OpenDataLab/SA-1B
    Explore at:
    Available download formats: zip
    Dataset updated
    May 1, 2023
    Dataset provided by
    Meta AI Research
    License

    https://ai.facebook.com/datasets/segment-anything-downloads/

    Description

    Segment Anything 1 Billion (SA-1B) is a dataset designed for training general-purpose object segmentation models from open world images.

    SA-1B consists of 11M diverse, high-resolution, privacy-protecting images and 1.1B high-quality segmentation masks collected with our data engine. It is intended to be used for computer vision research for the purposes permitted under our Data License.

    The images are licensed from a large photo company. The 1.1B masks were produced using our data engine, all of which were generated fully automatically by the Segment Anything Model (SAM).

  5. K1702 - Kura Clover (Trifolium ambiguum) USDA Accession Image Dataset

    • zenodo.org
    csv, json, zip
    Updated May 13, 2025
    Cite
    Bo Meyering; Bo Meyering; Spencer Barriball; Spencer Barriball; Brandon Schlautman; Brandon Schlautman (2025). K1702 - Kura Clover (Trifolium ambiguum) USDA Accession Image Dataset [Dataset]. http://doi.org/10.5281/zenodo.14051742
    Explore at:
    Available download formats: json, zip, csv
    Dataset updated
    May 13, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Bo Meyering; Bo Meyering; Spencer Barriball; Spencer Barriball; Brandon Schlautman; Brandon Schlautman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Images

    This dataset consists of 1135 images of Kura clover (Trifolium ambiguum) USDA accessions grown in Salina, Kansas over the 2017 growing season. Each image contains a single Kura clover plant framed by a PVC sampling quadrat that has an internal area of 0.25 m2. Kura clover plots were hand weeded to remove all other vegetation except Kura clover. Some images may contain dead clover accessions that are either brown and dried up, or missing entirely. The images were acquired with a Canon EOS Rebel T6 DSLR camera under the following settings:

    • ISO: 200
    • Exposure: Auto
    • Focal Length: Variable (33-40mm)
    • Format: JPEG
    • Size: 5184x3456
    • Metering Mode: Multi-segment

    Images are separated into three folders: train, val, and test. The train/val split is 80/20 (n=880, n=220), randomly selected. The test set contains 35 images. Images are named in the format "

    Annotations

    All images are annotated using Labelbox software. Masks were generated by point prompts using Meta's Segment Anything model (SAM). PVC quadrat, quadrat corners, and Kura clover objects were annotated with bounding boxes.

    Segmentation masks are in .mat format with the name "
    Each .mat file contains the following fields:

    • data: A 2-D NumPy array of dtype np.uint8 giving the class label for each pixel.
    • project: A text field with the name of the project.

    The segmentation mapping can be found in the file "segmentation_classes.json". The class mapping is as follows:

    • 0: 'background' class containing all soil and non-target materials.
    • 1: 'quadrat' class.
    • 2: 'clover' class for all clover related vegetation whether alive or dead.
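Given the mask array described above (2-D uint8, classes 0-2), per-class pixel fractions follow from a bincount. A sketch with a synthetic mask, assuming NumPy; the demo values are not real annotation data:

```python
import numpy as np

CLASSES = {0: "background", 1: "quadrat", 2: "clover"}

def class_fractions(mask: np.ndarray) -> dict:
    """Fraction of pixels per class in a uint8 segmentation mask."""
    counts = np.bincount(mask.ravel(), minlength=len(CLASSES))
    return {CLASSES[i]: float(counts[i]) / mask.size for i in CLASSES}

# Synthetic 2x2 mask, not real annotation data.
demo = np.array([[0, 1], [2, 2]], dtype=np.uint8)
print(class_fractions(demo))  # {'background': 0.25, 'quadrat': 0.25, 'clover': 0.5}
```

Summing the fractions is a quick consistency check that no pixel carries an out-of-range class label.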

    Bounding boxes are in the file "bounding_boxes.csv". Bounding box coordinates are in the format (y1, x1, height, width). The columns are as follows:

    • img_id: The name of the image file.
    • class: Class of the bbox annotation ∈ ['quadrat', 'quadrat_corner', 'kura_clover'].
    • y1: The y coordinate of the top of the bbox.
    • x1: The x coordinate of the left side of the bbox.
    • height: The height of the bbox (y2 = y1 + height).
    • width: The width of the bbox (x2 = x1 + width).
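The (y1, x1, height, width) rows convert directly to the (xmin, ymin, xmax, ymax) corner format used by the other release of this dataset, via the y2 and x2 relations given above. A sketch with illustrative values:

```python
def to_corners(y1: float, x1: float, height: float, width: float):
    """Convert a (y1, x1, height, width) box to (xmin, ymin, xmax, ymax),
    using y2 = y1 + height and x2 = x1 + width."""
    return x1, y1, x1 + width, y1 + height

# Illustrative values, not real annotation data.
print(to_corners(10, 20, 30, 40))  # (20, 10, 60, 40)
```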

    No image preprocessing was performed.
