9 datasets found
  1. Data from: Segment Anything Model (SAM)

    • hub.arcgis.com
    • uneca.africageoportal.com
    Updated Apr 17, 2023
    Cite
    Esri (2023). Segment Anything Model (SAM) [Dataset]. https://hub.arcgis.com/content/9b67b441f29f4ce6810979f5f0667ebe
    Explore at:
    Dataset updated
    Apr 17, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)
    Area covered
    Description

    Segmentation models perform pixel-wise classification, assigning each pixel in an image to a class; the classified pixels correspond to different objects or regions in the image. These models have a wide variety of use cases across multiple domains. When used with satellite and aerial imagery, they can help identify features such as building footprints, roads, water bodies, and crop fields.

    Generally, a segmentation model must be trained from scratch on a dataset labeled with the objects of interest, which can be an arduous and time-consuming task. Meta's Segment Anything Model (SAM) is a foundation model that can segment (as the name suggests) anything using zero-shot learning and generalize across domains without additional training. SAM is trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks. This makes the model highly robust at identifying object boundaries and differentiating between objects across domains, even ones it has never seen before. Use this model to extract masks of various objects in any image.

    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
    Fine-tuning the model: This model can be fine-tuned using the SamLoRA architecture in ArcGIS. Follow the guide and refer to this sample notebook to fine-tune this model.
    Input: 8-bit, 3-band imagery.
    Output: Feature class containing masks of various objects in the image.
    Applicable geographies: The model is expected to work globally.
    Model architecture: This model is based on the open-source Segment Anything Model (SAM) by Meta.
    Training data: This model has been trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks.
    Sample results: Here are a few results from the model.
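    The pixel-wise classification described above can be sketched in a few lines of NumPy; the per-pixel class scores here are made up for illustration and are not SAM output:

    ```python
    import numpy as np

    # Hypothetical per-pixel class scores for a 2x2 image and 3 classes
    # (e.g., 0 = background, 1 = building, 2 = water); not real model output.
    scores = np.array([
        [[0.1, 0.7, 0.2], [0.8, 0.1, 0.1]],
        [[0.2, 0.2, 0.6], [0.9, 0.05, 0.05]],
    ])  # shape (H, W, num_classes)

    # A segmentation model assigns each pixel the class with the highest score.
    mask = scores.argmax(axis=-1)  # shape (H, W), one class label per pixel
    print(mask.tolist())  # → [[1, 0], [2, 0]]
    ```

    The resulting label map is what gets converted into per-object masks or feature classes downstream.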

  2. LuFI-RiverSnap

    • kaggle.com
    zip
    Updated Apr 2, 2024
    Cite
    Armin Moghimi (2024). LuFI-RiverSnap [Dataset]. https://www.kaggle.com/datasets/arminmoghimi/lufi-riversnap/discussion
    Explore at:
    zip (1630738888 bytes)
    Dataset updated
    Apr 2, 2024
    Authors
    Armin Moghimi
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    The LuFI-RiverSnap dataset includes close-range river scene images obtained from various devices, such as UAVs, surveillance cameras, smartphones, and handheld cameras, with sizes up to 4624 × 3468 pixels. Several social media images, typically volunteered geographic information (VGI), have also been incorporated into the dataset to create more diverse river landscapes from various locations and sources.

    Please see the following links:

    https://doi.org/10.1109/ACCESS.2024.3385425

    We conducted the tests using the GitHub repository with the Segment Anything Model (SAM): https://github.com/ArminMoghimi/RiverSnap

    Fine-tuning SAM segmentation: https://github.com/ArminMoghimi/Fine-tune-the-Segment-Anything-Model-SAM-

    The images mainly include river scenes from several cities in Germany (Hannover, Hamburg, Bremen, Berlin, and Dresden), Italy (Venice), Iran (Ahvaz), the USA, and Australia.

    To further enhance the dataset’s diversity and accuracy, a small subset of images of Elbersdorf/Wesenitz, RIWA.v1, and Kaggle WaterNet/Water Segmentation Dataset has been added.

    This comprehensive dataset includes 1092 images, all accurately annotated, establishing it as a valuable resource for advancing research and development in river scene analysis and segmentation.

    The dataset comprises challenging cases for water segmentation, such as rivers with significant reflection, shadows, various colors, and flooded areas.
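    Results on such binary water masks are commonly scored with intersection-over-union (IoU); a minimal sketch with made-up masks, not values from the paper:

    ```python
    import numpy as np

    def iou(pred: np.ndarray, gt: np.ndarray) -> float:
        """Intersection-over-union for binary masks (True = water)."""
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return float(inter) / float(union) if union else 1.0

    # Tiny 2x2 example masks, for illustration only.
    pred = np.array([[1, 1], [0, 1]], dtype=bool)
    gt   = np.array([[1, 0], [0, 1]], dtype=bool)
    print(round(iou(pred, gt), 3))  # → 0.667 (2 shared pixels / 3 in the union)
    ```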

    Citation

    If you use this dataset, please cite as:

    A. Moghimi, M. Welzel, T. Celik, and T. Schlurmann, "A Comparative Performance Analysis of Popular Deep Learning Models and Segment Anything Model (SAM) for River Water Segmentation in Close-Range Remote Sensing Imagery," in IEEE Access, doi: https://doi.org/10.1109/ACCESS.2024.3385425

    Acknowledgement:

    Other researchers, such as Xabier Blanch, Franz Wagner, and Professor Anette Eltner from TU Dresden, have already provided excellent water segmentation datasets; we are not the first. Please consider the following links for other benchmark datasets.

    Elbersdorf/Wesenitz, RIWA, and Kaggle WaterNet/Water Segmentation Dataset

    Contact:

    • Armin Moghimi moghimi.armin@gmail.com

    moghimi@lufi.uni-hannover.de

  3. Performances of SAM-ResNet after fine-tuning on the test dataset.

    • plos.figshare.com
    xls
    Updated Jun 6, 2023
    Cite
    Olivier Le Meur; Tugdual Le Pen; Rémi Cozot (2023). Performances of SAM-ResNet after fine-tuning on the test dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0239980.t006
    Explore at:
    Dataset updated
    Jun 6, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Olivier Le Meur; Tugdual Le Pen; Rémi Cozot
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Performances of SAM-ResNet after fine-tuning on the test dataset.

  4. new_models_sam_fine_tune

    • kaggle.com
    zip
    Updated Apr 16, 2024
    Cite
    Andre Ivann Herrera Chavez (2024). new_models_sam_fine_tune [Dataset]. https://www.kaggle.com/andreivann17/new-models-sam-fine-tune
    Explore at:
    zip (1393404585 bytes)
    Dataset updated
    Apr 16, 2024
    Authors
    Andre Ivann Herrera Chavez
    Description

    Dataset

    This dataset was created by Andre Ivann Herrera Chavez

    Contents

  5. Tree Segmentation

    • morocco.africageoportal.com
    • angola.africageoportal.com
    • +2 more
    Updated May 18, 2023
    Cite
    Esri (2023). Tree Segmentation [Dataset]. https://morocco.africageoportal.com/content/6d910b29ff38406986da0abf1ce50836
    Explore at:
    Dataset updated
    May 18, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)
    Area covered
    Description

    This deep learning model detects and segments trees in high-resolution drone or aerial imagery. Tree detection can be used for applications such as vegetation management, forestry, and urban planning. High-resolution aerial and drone imagery is well suited to tree detection due to its high spatio-temporal coverage. The model is based on DeepForest and has been trained on data from the National Ecological Observatory Network (NEON); it also uses the Segment Anything Model (SAM) by Meta.

    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
    Fine-tuning the model: This model cannot be fine-tuned using ArcGIS tools.
    Input: 8-bit, 3-band high-resolution (10-25 cm) imagery.
    Output: Feature class containing separate masks for each tree.
    Applicable geographies: The model is expected to work well in the United States.
    Model architecture: This model is based on the DeepForest Python package, which uses the RetinaNet model architecture implemented in torchvision, and the open-source Segment Anything Model (SAM) by Meta.
    Accuracy metrics: This model has a precision score of 0.66 and a recall of 0.79.
    Training data: This model has been trained on the NEON Tree Benchmark dataset, provided by the Weecology Lab at the University of Florida. The model also uses SAM by Meta, which is trained on the 1-Billion mask dataset (SA-1B), comprising a diverse set of 11 million images and over 1 billion masks.
    Sample results: Here are a few results from the model.
    Citations: Weinstein, B.G.; Marconi, S.; Bohlman, S.; Zare, A.; White, E. Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks. Remote Sens. 2019, 11, 1309. Weinstein, B.; Marconi, S.; Bohlman, S.; Zare, A.; White, E.P. Geographic Generalization in Airborne RGB Deep Learning Tree Detection. bioRxiv 790071; doi: https://doi.org/10.1101/790071
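    The precision and recall figures above follow the standard definitions from true-positive, false-positive, and false-negative detection counts; a minimal sketch with hypothetical counts, not the NEON evaluation itself:

    ```python
    def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
        """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
        return tp / (tp + fp), tp / (tp + fn)

    # Hypothetical tree-detection counts, for illustration only:
    # 8 trees found correctly, 2 spurious detections, 2 trees missed.
    p, r = precision_recall(tp=8, fp=2, fn=2)
    print(p, r)  # → 0.8 0.8
    ```

    A high recall with lower precision (as reported for this model) means most trees are found, at the cost of some spurious detections.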

  6. Pool Segmentation - USA

    • hub.arcgis.com
    Updated May 18, 2023
    Cite
    Esri (2023). Pool Segmentation - USA [Dataset]. https://hub.arcgis.com/content/0d4b8ab238b74da8819df21834338c0d
    Explore at:
    Dataset updated
    May 18, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)
    Area covered
    United States
    Description

    Swimming pools are important for property tax assessment because they impact the value of a property. Tax assessors at local government agencies often rely on expensive and infrequent surveys, leading to assessment inaccuracies. Finding pools that are not on the assessment roll (such as those recently constructed) is valuable to assessors and ultimately means additional revenue for the community. This deep learning model automates the task of finding pools in high-resolution satellite imagery. It can also benefit swimming pool maintenance companies by helping redirect their marketing efforts, and public health and mosquito control agencies can use it to detect pools and drive field activity and mitigation efforts.

    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
    Fine-tuning the model: This model cannot be fine-tuned using ArcGIS tools.
    Input: 8-bit, 3-band high-resolution (5-30 cm) imagery.
    Output: Feature class containing masks depicting pools.
    Applicable geographies: The model is expected to work well in the United States.
    Model architecture: The model uses the FasterRCNN architecture implemented with the ArcGIS API for Python, and the open-source Segment Anything Model (SAM) by Meta.
    Accuracy metrics: The model has an average precision score of 0.59.
    Sample results: Here are a few results from the model.

  7. Erebia_wingseg Dataset

    • universe.roboflow.com
    zip
    Updated Sep 22, 2025
    Cite
    KAU202 (2025). Erebia_wingseg Dataset [Dataset]. https://universe.roboflow.com/kau202/erebia_wingseg-z5xj6
    Explore at:
    Dataset updated
    Sep 22, 2025
    Dataset authored and provided by
    KAU202
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Wings Dwx9 Polygons
    Description

    Wing segmentation of Erebia, created by fine-tuning SAM.

  8. A performance comparison between SAMCell and 3 existing cell segmentation baselines

    • plos.figshare.com
    xls
    Updated Sep 8, 2025
    Cite
    Alexandra Dunnum VandeLoo; Nathan J. Malta; Saahil Sanganeriya; Emilio Aponte; Caitlin van Zyl; Danfei Xu; Craig Forest (2025). A performance comparison between SAMCell and 3 existing cell segmentation baselines. [Dataset]. http://doi.org/10.1371/journal.pone.0319532.t001
    Explore at:
    Dataset updated
    Sep 8, 2025
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Alexandra Dunnum VandeLoo; Nathan J. Malta; Saahil Sanganeriya; Emilio Aponte; Caitlin van Zyl; Danfei Xu; Craig Forest
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Stardist, Cellpose, and CALT-US models were trained only on LIVECell-train then tested on LIVECell-test (top) and separately trained only on Cyto-train, then tested on Cyto-test (bottom). SAMCell, inheriting SAM’s pretraining, was fine-tuned only on LIVECell-train then tested on LIVECell-test (top) and separately fine-tuned only on Cyto-train and then tested on Cyto-test (bottom).

  9. Data from: A Dataset of Aligned RGB and Multispectral UAV Imagery for Semantic Segmentation of Weedy Rice

    • data.mendeley.com
    Updated Jul 21, 2025
    Cite
    Van Hoa Nguyen (2025). A Dataset of Aligned RGB and Multispectral UAV Imagery for Semantic Segmentation of Weedy Rice [Dataset]. http://doi.org/10.17632/vt4s83pxx6.1
    Explore at:
    Dataset updated
    Jul 21, 2025
    Authors
    Van Hoa Nguyen
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset includes 734 UAV-captured RGB images and their corresponding aligned multispectral (MS) images for the semantic segmentation of weedy rice in cultivated rice fields. The images were collected using a DJI Mavic 3 Multispectral UAV during three cropping seasons in Vietnam’s Mekong Delta. Each sample contains an RGB image, four MS bands (Green, Red, Red Edge, Near-Infrared), a binary mask indicating weedy rice regions, and a visualization overlay. All images were preprocessed (radiometric correction, undistortion, alignment, cropping) and resized to 1280 × 960 pixels. Ground-truth masks were generated using a fine-tuned Segment Anything Model (SAM), followed by manual verification. Spatial metadata and file mappings are included. The dataset supports research in precision agriculture, multi-modal semantic segmentation, and UAV-based crop monitoring.
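    The visualization overlays mentioned above can be produced by alpha-blending a highlight color into the pixels flagged by the binary mask; a minimal NumPy sketch with a dummy image, not the dataset's actual rendering code:

    ```python
    import numpy as np

    def overlay(rgb: np.ndarray, mask: np.ndarray,
                color=(255, 0, 0), alpha=0.5) -> np.ndarray:
        """Blend a highlight color into RGB pixels where the binary mask is True."""
        out = rgb.astype(float)
        out[mask] = (1 - alpha) * out[mask] + alpha * np.array(color, dtype=float)
        return out.astype(np.uint8)

    # Dummy 2x2 black image with two masked (weedy rice) pixels.
    rgb = np.zeros((2, 2, 3), dtype=np.uint8)
    mask = np.array([[True, False], [False, True]])
    print(overlay(rgb, mask)[0, 0].tolist())  # masked pixel blended toward red: [127, 0, 0]
    ```

    Unmasked pixels pass through unchanged, so the overlay preserves the original scene outside the annotated regions.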
