54 datasets found
  1. Data from: Segment Anything Model (SAM)

    • morocco.africageoportal.com
    • uneca.africageoportal.com
    • +2 more
    Updated Apr 17, 2023
    Cite
    Esri (2023). Segment Anything Model (SAM) [Dataset]. https://morocco.africageoportal.com/content/9b67b441f29f4ce6810979f5f0667ebe
    Dataset updated
    Apr 17, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    Segmentation models perform pixel-wise classification, assigning each pixel in an image to a class; the classified pixels correspond to different objects or regions in the image. These models have a wide variety of use cases across multiple domains. When used with satellite and aerial imagery, they can help identify features such as building footprints, roads, water bodies, and crop fields.

    Generally, every segmentation model needs to be trained from scratch using a dataset labeled with the objects of interest, which can be an arduous and time-consuming task. Meta's Segment Anything Model (SAM) aims to be a foundation model that can segment (as the name suggests) anything using zero-shot learning and generalize across domains without additional training. SAM is trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks. This makes the model highly robust at identifying object boundaries and differentiating between objects across domains, even those it has never seen before. Use this model to extract masks of various objects in any image.

    Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

    Fine-tuning the model: This model can be fine-tuned using the SamLoRA architecture in ArcGIS. Follow the guide and refer to this sample notebook to fine-tune this model.

    Input: 8-bit, 3-band imagery.

    Output: Feature class containing masks of various objects in the image.

    Applicable geographies: The model is expected to work globally.

    Model architecture: This model is based on the open-source Segment Anything Model (SAM) by Meta.

    Training data: This model has been trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks.

    Sample results: Here are a few results from the model.
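    The pixel-wise classification described above can be sketched with a toy NumPy example: a model emits one score per class for every pixel, and the predicted label mask is the per-pixel argmax. The shapes and class meanings below are illustrative, not taken from SAM itself.

```python
import numpy as np

# Toy class-score volume: (rows, cols, classes) = a 2x2 image, 2 classes.
# The values are invented; a real model would produce these scores.
scores = np.array([
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.7, 0.3], [0.1, 0.9]],
])

# Pixel-wise classification: pick the highest-scoring class per pixel.
label_mask = np.argmax(scores, axis=-1)  # e.g. 0 = background, 1 = building
print(label_mask.tolist())  # [[0, 1], [0, 1]]
```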

  2. Supporting Data -- Evaluating Mask R-CNN Models to Extract Terracing across Oceanic High Islands: an example from Sāmoa

    • data.niaid.nih.gov
    Updated Jul 17, 2023
    Cite
    Dylan S. Davis (2023). Supporting Data -- Evaluating Mask R-CNN Models to Extract Terracing across Oceanic High Islands: an example from Sāmoa. [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7312102
    Dataset updated
    Jul 17, 2023
    Dataset provided by
    Seth Quintus
    Ethan E. Cochrane
    Dylan S. Davis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Samoa
    Description

    This dataset provides supplemental information for the manuscript, "Diverse terracing practices revealed by automated lidar analysis across the Sāmoan islands", submitted to Archaeological Prospection. The dataset contains a trained Mask R-CNN deep learning model designed for detecting archaeological terracing features on the islands of American Samoa, associated training data, and the raw and cleaned output of detected terraces.

  3. Seaweed Mask Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated Apr 15, 2025
    Cite
    Archive Market Research (2025). Seaweed Mask Report [Dataset]. https://www.archivemarketresearch.com/reports/seaweed-mask-536395
    Available download formats: doc, pdf, ppt
    Dataset updated
    Apr 15, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global seaweed mask market is experiencing robust growth, driven by increasing consumer awareness of natural skincare products and the proven benefits of seaweed extracts for skin health. Seaweed's rich nutrient profile, including vitamins, minerals, and antioxidants, addresses various skin concerns, making it a popular ingredient in face masks. The market is segmented by type (anti-acne, hydrating, whitening, and others) and application (dry skin, sunburnt skin, sensitive skin, and others), catering to diverse consumer needs. While precise figures for market size and CAGR are unavailable, a reasonable estimate, based on the growth of the broader natural skincare market and the increasing popularity of seaweed-based products, suggests a market size of approximately $500 million in 2025, with a projected CAGR of 7% from 2025 to 2033. This growth is fueled by several factors, including the rising demand for organic and sustainable beauty products, the expanding e-commerce sector facilitating easy access to these products, and increased marketing and brand promotion emphasizing seaweed's skin benefits. The market's geographical spread is vast, with North America and Europe currently holding significant shares, but the Asia-Pacific region is poised for rapid growth driven by rising disposable incomes and increased adoption of skincare routines.

    Several factors, however, could restrain market growth. These include potential seasonal variations in seaweed availability, fluctuations in raw material prices, and the emergence of competing skincare ingredients. The presence of established players like Benedetta, LUSH, and Algo, alongside smaller niche brands, indicates a competitive landscape. To maintain growth, companies are focusing on innovation, developing specialized seaweed masks targeting specific skin concerns, and emphasizing sustainability in their sourcing and manufacturing processes. The market's future success relies on continued research showcasing the efficacy of seaweed extracts, effective marketing highlighting these benefits, and sustainable sourcing practices that support the long-term viability of the industry.

  4. Global Mask Blank Market Report 2025 Edition, Market Size, Share, CAGR, Forecast, Revenue

    • cognitivemarketresearch.com
    pdf, excel, csv, ppt
    Cite
    Cognitive Market Research, Global Mask Blank Market Report 2025 Edition, Market Size, Share, CAGR, Forecast, Revenue [Dataset]. https://www.cognitivemarketresearch.com/mask-blank-market-report
    Available download formats: pdf, excel, csv, ppt
    Dataset authored and provided by
    Cognitive Market Research
    License

    https://www.cognitivemarketresearch.com/privacy-policy

    Time period covered
    2021 - 2033
    Area covered
    Global
    Description

    The global Mask Blank market size in 2025 was XX Million. The Mask Blank industry's compound annual growth rate (CAGR) will be XX% from 2025 to 2033.

  5. ROI Masks Defining Low-Grade Glioma Tumor Regions In the TCGA-LGG Image Collection

    • cancerimagingarchive.net
    • dev.cancerimagingarchive.net
    csv, matlab and zip +2
    Cite
    The Cancer Imaging Archive, ROI Masks Defining Low-Grade Glioma Tumor Regions In the TCGA-LGG Image Collection [Dataset]. http://doi.org/10.7937/K9/TCIA.2017.BD7SGWCA
    Available download formats: pdf, n/a, matlab and zip, csv
    Dataset authored and provided by
    The Cancer Imaging Archive
    License

    https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/

    Time period covered
    Mar 17, 2017
    Dataset funded by
    National Cancer Institute (http://www.cancer.gov/)
    Description

    This collection contains 406 ROI masks in MATLAB format defining the low-grade glioma (LGG) tumour region on T1-weighted (T1W), T2-weighted (T2W), T1-weighted post-contrast (T1CE) and T2-flair (T2F) MR images of 108 different patients from the TCGA-LGG collection. From this subset of 108 patients, 81 patients have ROI masks drawn for all four MRI sequences (T1W, T2W, T1CE and T2F), and 27 patients have ROI masks drawn for three or fewer of the four MRI sequences. The ROI masks were used to extract texture features in order to develop radiomic-based multivariable models for the prediction of isocitrate dehydrogenase 1 (IDH1) mutation, 1p/19q codeletion status, histological grade and tumour progression. Clinical data (188 patients in total from the TCGA-LGG collection, some incomplete depending on the clinical attribute), VASARI scores (188 patients in total from the TCGA-LGG collection, 178 complete) with feature keys, and source code used in this study are also available with this collection. Please contact Martin Vallières (mart.vallieres@gmail.com) of the Medical Physics Unit of McGill University for any scientific inquiries about this dataset.
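    As a hedged sketch of how such ROI masks are typically applied before texture-feature extraction: a binary mask selects the tumour-region intensities from an MR slice. The toy arrays below stand in for a real slice and mask from the collection.

```python
import numpy as np

# Fake 4x4 "MR slice" and a fake 2x2 tumour ROI; real data would be
# loaded from the collection's MATLAB mask files instead.
slice_t1w = np.arange(16, dtype=float).reshape(4, 4)
roi_mask = np.zeros((4, 4), dtype=bool)
roi_mask[1:3, 1:3] = True  # True marks pixels inside the tumour region

# Boolean indexing keeps only intensities inside the ROI, the usual
# first step before computing texture/radiomic features.
roi_values = slice_t1w[roi_mask]
print(roi_values.tolist())       # [5.0, 6.0, 9.0, 10.0]
print(float(roi_values.mean()))  # 7.5
```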

  6. Global Paper Facial Mask Market Research Report: By Function (Cleansing,...

    • wiseguyreports.com
    Updated Aug 6, 2024
    Cite
    Wiseguy Research Consultants Pvt Ltd (2024). Global Paper Facial Mask Market Research Report: By Function (Cleansing, Hydrating, Brightening, Anti-aging, Pore tightening, Oil-control, Acne-reducing), By Extract/Ingredients (Collagen, Hyaluronic Acid, Green Tea, Aloe Vera, Vitamin C, Charcoal, Tea Tree Oil, Shea Butter), By Target Audience (Women, Men, Teenagers, Individuals with sensitive skin, Individuals with acne-prone skin, Individuals seeking anti-aging benefits) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2032. [Dataset]. https://www.wiseguyreports.com/reports/paper-facial-mask-market
    Dataset updated
    Aug 6, 2024
    Dataset authored and provided by
    Wiseguy Research Consultants Pvt Ltd
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Jan 8, 2024
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2024
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2023: 7.74 (USD Billion)
    MARKET SIZE 2024: 8.21 (USD Billion)
    MARKET SIZE 2032: 13.2 (USD Billion)
    SEGMENTS COVERED: Function, Extract/Ingredients, Target Audience, Regional
    COUNTRIES COVERED: North America, Europe, APAC, South America, MEA
    KEY MARKET DYNAMICS: Rising demand for natural and organic skincare; growing popularity of sheet masks; innovations in paper materials and formulations; increasing focus on personalization; expansion into emerging markets
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: Innisfree, My Beauty Diary, Too Cool For School, Mediheal, Etude House, The History of Whoo, AHC, Sulwhasoo, TonyMoly, Nature Republic, Laneige, Dr. Jart+, Klairs, Mamonde, SKII
    MARKET FORECAST PERIOD: 2025 - 2032
    KEY MARKET OPPORTUNITIES: Growing demand for natural and biodegradable skincare products; rising disposable income and improving living standards; increasing popularity of sheet masks among consumers; technological advancements in mask design and formulation; expansion into emerging markets
    COMPOUND ANNUAL GROWTH RATE (CAGR): 6.12% (2025 - 2032)
  7. Segment Anything Model (SAM)

    • morocco-geoportal-powered-by-esri-africa.hub.arcgis.com
    Updated Apr 17, 2023
    Cite
    Esri (2023). Segment Anything Model (SAM) [Dataset]. https://morocco-geoportal-powered-by-esri-africa.hub.arcgis.com/datasets/esri::segment-anything-model-sam
    Dataset updated
    Apr 17, 2023
    Dataset authored and provided by
    Esri (http://esri.com/)

  8. Data_Sheet_1_3D U-Net Improves Automatic Brain Extraction for Isotropic Rat Brain Magnetic Resonance Imaging Data

    • frontiersin.figshare.com
    pdf
    Updated May 31, 2023
    Cite
    Li-Ming Hsu; Shuai Wang; Lindsay Walton; Tzu-Wen Winnie Wang; Sung-Ho Lee; Yen-Yu Ian Shih (2023). Data_Sheet_1_3D U-Net Improves Automatic Brain Extraction for Isotropic Rat Brain Magnetic Resonance Imaging Data.PDF [Dataset]. http://doi.org/10.3389/fnins.2021.801008.s001
    Available download formats: pdf
    Dataset updated
    May 31, 2023
    Dataset provided by
    Frontiers
    Authors
    Li-Ming Hsu; Shuai Wang; Lindsay Walton; Tzu-Wen Winnie Wang; Sung-Ho Lee; Yen-Yu Ian Shih
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Brain extraction is a critical pre-processing step in brain magnetic resonance imaging (MRI) analytical pipelines. In rodents, this is often achieved by manually editing brain masks slice-by-slice, a time-consuming task whose workload increases with higher-spatial-resolution datasets. We recently demonstrated successful automatic brain extraction via a deep-learning-based framework, U-Net, using 2D convolutions. However, such an approach cannot make use of the rich 3D spatial-context information in volumetric MRI data. In this study, we advanced our previously proposed U-Net architecture by replacing all 2D operations with their 3D counterparts to create a 3D U-Net framework. We trained and validated our model using a recently released CAMRI rat brain database acquired at isotropic spatial resolution, including T2-weighted turbo-spin-echo structural MRI and T2*-weighted echo-planar-imaging functional MRI. The performance of our 3D U-Net model was compared with existing rodent brain extraction tools, including Rapid Automatic Tissue Segmentation, Pulse-Coupled Neural Network, SHape descriptor selected External Regions after Morphologically filtering, and our previously proposed 2D U-Net model. 3D U-Net demonstrated superior performance in Dice, Jaccard, center-of-mass distance, Hausdorff distance, and sensitivity. Additionally, we demonstrated the reliability of 3D U-Net under various noise levels, evaluated optimal training sample sizes, and disseminated all source code publicly, with the hope that this approach will benefit the rodent MRI research community.

    Significant methodological contribution: We proposed a deep-learning-based framework to automatically identify rodent brain boundaries in MRI. With a fully 3D convolutional network model, 3D U-Net, our proposed method demonstrated improved performance compared to current automatic brain extraction methods, as shown in several quantitative metrics (Dice, Jaccard, PPV, SEN, and Hausdorff). We trust that this tool will avoid human bias and streamline pre-processing steps during 3D high-resolution rodent brain MRI data analysis. The software developed herein has been disseminated freely to the community.
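    The Dice and Jaccard overlap metrics cited above compare a predicted mask with a ground-truth mask. A minimal sketch with toy 1-D masks (not data from this study):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def jaccard(pred, truth):
    """Jaccard index: |A∩B| / |A∪B|."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

# Toy masks: 2 voxels overlap out of 3 predicted and 3 true voxels.
pred  = [1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 0]
print(dice(pred, truth))     # 2*2/(3+3) ≈ 0.667
print(jaccard(pred, truth))  # 2/4 = 0.5
```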

  9. Table_1_Public interest in different types of masks and its relationship with pandemic and policy measures during the COVID-19 pandemic: a study using Google Trends data

    • figshare.com
    xlsx
    Updated Jun 8, 2023
    Cite
    Andy Wai Kan Yeung; Emil D. Parvanov; Jarosław Olav Horbańczuk; Maria Kletecka-Pulker; Oliver Kimberger; Harald Willschke; Atanas G. Atanasov (2023). Table_1_Public interest in different types of masks and its relationship with pandemic and policy measures during the COVID-19 pandemic: a study using Google Trends data.XLSX [Dataset]. http://doi.org/10.3389/fpubh.2023.1010674.s002
    Available download formats: xlsx
    Dataset updated
    Jun 8, 2023
    Dataset provided by
    Frontiers
    Authors
    Andy Wai Kan Yeung; Emil D. Parvanov; Jarosław Olav Horbańczuk; Maria Kletecka-Pulker; Oliver Kimberger; Harald Willschke; Atanas G. Atanasov
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Google Trends data have been used to investigate various themes in online information seeking. It was unclear whether populations in different parts of the world paid the same amount of attention to different mask types during the COVID-19 pandemic. This study aimed to reveal which types of masks were frequently searched by the public in different countries, and evaluated whether public attention to masks could be related to mandatory policy, stringency of the policy, and transmission rate of COVID-19. By referring to an open dataset hosted at the online database Our World in Data, the 10 countries with the highest total number of COVID-19 cases as of 9th of February 2022 were identified. For each of these countries, the weekly new cases per million population, reproduction rate (of COVID-19), stringency index, and face covering policy score were computed from the raw daily data. Google Trends was queried to extract the relative search volume (RSV) for different types of masks in each of these countries. Results found that Google searches for N95 masks were predominant in India, whereas surgical masks were predominant in Russia, FFP2 masks were predominant in Spain, and cloth masks were predominant in both France and the United Kingdom. The United States, Brazil, Germany, and Turkey had two predominant types of mask. The online searching behavior for masks markedly varied across countries. For most of the surveyed countries, the online searching for masks peaked during the first wave of the COVID-19 pandemic, before governments implemented mandatory mask wearing. The search for masks positively correlated with the government response stringency index but not with the COVID-19 reproduction rate or the new cases per million.
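    The correlation analysis described above amounts to a Pearson correlation between a country's weekly RSV series and its stringency-index series. A minimal sketch; the numbers below are invented for illustration, not taken from the study:

```python
import numpy as np

# Toy weekly series for one hypothetical country: relative search volume
# (RSV, 0-100 as Google Trends reports it) and the stringency index.
rsv        = np.array([10, 40, 80, 60, 30], dtype=float)
stringency = np.array([20, 45, 75, 70, 35], dtype=float)

# Pearson correlation coefficient between the two series.
r = np.corrcoef(rsv, stringency)[0, 1]
print(r > 0.9)  # these toy series move together, so r is strongly positive
```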

  10. Purslane Extract Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Apr 29, 2025
    Cite
    Data Insights Market (2025). Purslane Extract Report [Dataset]. https://www.datainsightsmarket.com/reports/purslane-extract-1914209
    Available download formats: ppt, doc, pdf
    Dataset updated
    Apr 29, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The purslane extract market, valued at $7,298 million in 2025, is experiencing robust growth, projected to expand at a compound annual growth rate (CAGR) of 23.2% from 2025 to 2033. This significant growth is driven by the increasing consumer demand for natural and organic skincare and cosmetic products. The rising awareness of purslane's potent antioxidant and anti-inflammatory properties, coupled with its efficacy in treating various skin conditions like acne and eczema, fuels market expansion. Furthermore, the versatility of purslane extract across diverse applications, including facial masks, toners, and toiletries, contributes to its widespread adoption by both established cosmetic companies and emerging brands. The market segmentation, with 50ML and 250ML packaging options catering to varying consumer needs, further enhances market accessibility and appeal. Growth is particularly strong in North America and Asia Pacific, regions with a high concentration of consumers prioritizing natural beauty solutions and displaying a willingness to invest in premium skincare. The market's expansion is further bolstered by ongoing research exploring purslane's potential benefits beyond skincare, potentially opening new avenues in the pharmaceutical and nutraceutical sectors.

    However, challenges remain. Fluctuations in raw material supply and the need for stringent quality control to ensure consistent product efficacy could impact market growth. Furthermore, potential competition from synthetic alternatives and the need for effective marketing and consumer education to highlight purslane's unique benefits are crucial considerations for continued market expansion. The presence of established players like Plamed Green Science Group and Durae Corporation alongside emerging brands indicates a competitive landscape requiring innovation and strategic marketing to capture market share. The forecast period (2025-2033) presents significant opportunities for market players to capitalize on this burgeoning industry.

  11. Geology 250k - Gippsland Basin bioregion clip

    • demo.dev.magda.io
    • researchdata.edu.au
    • +2 more
    zip
    Updated Dec 4, 2022
    Cite
    Bioregional Assessment Program (2022). Geology 250k - Gippsland Basin bioregion clip [Dataset]. https://demo.dev.magda.io/dataset/ds-dga-55377ed1-16bd-4acd-adbe-a837e3f526f8
    Available download formats: zip
    Dataset updated
    Dec 4, 2022
    Dataset provided by
    Bioregional Assessment Program
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Gippsland
    Description

    Abstract: The dataset was derived by the Bioregional Assessment Programme by clipping the Victoria - Seamless Geology 2014 dataset (GUID: 2872d02e-66cb-42b6-9e5a-63abc8ad871b) to the Gippsland Basin bioregion Project Boundary dataset (GUID: 2872d02e-66cb-42b6-9e5a-63abc8ad871b). You can find a link to the parent datasets in the Lineage field of this metadata statement. The History field of this metadata statement describes how this dataset was derived. The dataset shows the surface geology of the Gippsland Basin bioregion at 250k scale.

    Dataset History: This dataset was created using the 'Extract by Mask (Spatial Analyst)' tool within ESRI ArcMap 10.2 to clip the geology layer GeolUnit_250k_py.shp (from the dataset Victoria - Seamless Geology 2014, GUID: 2872d02e-66cb-42b6-9e5a-63abc8ad871b) to the Gippsland Basin bioregion Project Boundary dataset (GUID: 2872d02e-66cb-42b6-9e5a-63abc8ad871b).

    Dataset Citation: Bioregional Assessment Programme (2015) Geology 250k - Gippsland Basin bioregion clip. Bioregional Assessment Derived Dataset. Viewed 29 September 2017, http://data.bioregionalassessments.gov.au/dataset/e248f087-ed0e-4b92-9c66-2fc239f94f58.

    Dataset Ancestors: Derived from Victoria - Seamless Geology 2014; derived from Gippsland Project boundary; derived from GEODATA TOPO 250K Series 3.
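    Conceptually, an "extract by mask" clip sets every raster cell outside a boundary mask to a NoData value. A minimal NumPy sketch with invented toy arrays (the real ArcMap tool operates on ArcGIS rasters and boundary layers, not NumPy arrays):

```python
import numpy as np

# Toy geology raster: each cell holds a geological-unit code (invented).
geology = np.array([
    [1, 1, 2],
    [1, 2, 2],
    [3, 3, 2],
])
# Toy boundary mask: True marks cells inside the bioregion boundary.
boundary = np.array([
    [False, True, True],
    [False, True, True],
    [False, False, True],
])
NODATA = -9999  # conventional NoData sentinel value

# Keep cells inside the boundary; flag everything else as NoData.
clipped = np.where(boundary, geology, NODATA)
print(clipped.tolist())
# [[-9999, 1, 2], [-9999, 2, 2], [-9999, -9999, 2]]
```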

  12. Area01DF_surfacemodels

    • figshare.com
    bin
    Updated Jun 7, 2023
    Cite
    Antonio filho (2023). Area01DF_surfacemodels [Dataset]. http://doi.org/10.6084/m9.figshare.21504780.v1
    Available download formats: bin
    Dataset updated
    Jun 7, 2023
    Dataset provided by
    figshare
    Authors
    Antonio filho
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Brazilian Federal District Area 01 dataset provides surface-model images for extracting the road network of a suburban area via deep learning fusion. It contains high-resolution images split into training, validation, and test folders. Each folder contains img, imgm, and mask subfolders holding GeoTIFF files (3034x3122 pixels) at 20 cm spatial resolution. Subfolder "img" holds RGB orthoimages; "imgm" holds surface-model images (DSM, DTM, nDSM); "mask" holds binary segmentation masks.
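    The folder layout described above (split folders, each with img/imgm/mask subfolders) lends itself to programmatic pairing of each orthoimage with its surface-model and mask counterparts. This sketch builds a stand-in directory tree with an invented file name rather than the real GeoTIFFs:

```python
from pathlib import Path
import tempfile

# Recreate the described layout in a scratch directory; "tile_0001.tif"
# is a placeholder name, not a file from the actual dataset.
root = Path(tempfile.mkdtemp())
for split in ("training", "validation", "test"):
    for sub in ("img", "imgm", "mask"):
        d = root / split / sub
        d.mkdir(parents=True)
        (d / "tile_0001.tif").touch()  # empty placeholder GeoTIFF

# Pair each RGB tile with its surface-model and mask counterparts by name.
triplets = [
    (p, p.parent.parent / "imgm" / p.name, p.parent.parent / "mask" / p.name)
    for p in sorted((root / "training" / "img").glob("*.tif"))
]
print(len(triplets))                                    # 1
print(all(s.exists() and m.exists() for _, s, m in triplets))  # True
```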

  13. Peel Off Face Mask Market Report

    • promarketreports.com
    doc, pdf, ppt
    Updated Jun 4, 2025
    Cite
    Pro Market Reports (2025). Peel Off Face Mask Market Report [Dataset]. https://www.promarketreports.com/reports/peel-off-face-mask-market-3301
    Available download formats: pdf, doc, ppt
    Dataset updated
    Jun 4, 2025
    Dataset authored and provided by
    Pro Market Reports
    License

    https://www.promarketreports.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    Types:
    Activated Charcoal Masks: Effectively remove impurities and toxins, leaving skin feeling clean and refreshed. Often favored for their ability to draw out excess oil and minimize the appearance of pores.
    Clay Masks: Deeply cleanse and absorb excess oil, ideal for oily and combination skin types. Various clays, such as kaolin and bentonite, offer diverse benefits.
    Peel-Off Gel Masks: Provide gentle exfoliation, removing dead skin cells to reveal brighter, smoother skin. Often infused with hydrating ingredients to prevent dryness.
    Peel-Off Biocellulose Masks: Offer superior hydration and firming effects thanks to the biocellulose material's ability to deliver active ingredients effectively. Often favored for their luxurious feel and noticeable results.
    Sheet Masks (Peel-Off): Combine the convenience of sheet masks with a satisfying peel-off experience, offering targeted treatments for specific skin concerns.

    Benefits:
    Deep Cleansing: Removes dirt, oil, and impurities from pores.
    Blackhead Removal: Helps extract blackheads and minimize their appearance.
    Acne Reduction: Certain formulations help reduce acne breakouts and inflammation.
    Skin Brightening: Exfoliation and active ingredients contribute to brighter, more radiant skin.
    Pore Minimization: Helps temporarily reduce the appearance of pores.
    Improved Skin Texture: Leaves skin feeling smoother and softer.

    Recent developments include:
    April 2022: Freeman, America's No. 1 award-winning face mask brand, announced its entry into the Indian beauty industry in partnership with India's top retailer House of Beauty. Beyond its initial launch on Myntra, Goddess Beauty's online and offline stores will also carry the brand by December of that year. Freeman's offering in India will include top products such as the Charcoal Black Sugar Gel Mask, Day & Night dual-chamber mask, Anti-stress Dead Sea mineral clay mask, sweet tea and lemon cleansing peel-off, and Deep Cleaning Charcoal and Sugar mud mask.
    January 2022: For over a century, Oscar Mayer has added humor to routine situations, inventing 27-foot-long rolling Wieners, developing catchy commercial jingles, and encouraging families to make iconic face masks by poking holes in bologna slices for the eyes and mouth. In homage to what makes the brand so recognizable, the company is launching the real deal: a bologna-inspired face mask that refreshes skin while bringing back fond memories of childhood.

    Notable trends: Growing spending on skin and facial care products is driving market growth.

  14. Beauty Facial Mask Market Report | Global Forecast From 2025 To 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Jan 7, 2025
    Cite
    Dataintelo (2025). Beauty Facial Mask Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/global-beauty-facial-mask-market
    Available download formats: pptx, csv, pdf
    Dataset updated
    Jan 7, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Beauty Facial Mask Market Outlook



    The global beauty facial mask market size was valued at approximately USD 5.8 billion in 2023, and it is expected to reach USD 9.2 billion by 2032, growing at a CAGR of 5.1% during the forecast period. The market is driven by a rising consumer inclination towards personal grooming and skincare, heightened awareness regarding the benefits of facial masks, and the continuous introduction of innovative products.
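    A quick sanity check of the quoted growth figures: compounding USD 5.8 billion (2023) to USD 9.2 billion (2032) spans nine years, which implies a CAGR close to the reported 5.1% (reports often compute CAGR over the forecast window rather than from the 2023 base, so a small difference is expected):

```python
# CAGR implied by growing 5.8 -> 9.2 over 9 years: (end/start)^(1/n) - 1.
cagr = (9.2 / 5.8) ** (1 / 9) - 1
print(round(cagr * 100, 2))  # 5.26, close to the reported 5.1%
```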



    One of the key growth factors in the beauty facial mask market is the increasing awareness of skincare and beauty routines among consumers. More people are investing time and resources in skincare to achieve a healthier complexion, which has significantly boosted the demand for facial masks. The rapid growth of social media influencers and beauty bloggers who endorse various skincare products has also played a pivotal role in promoting facial masks, driving their popularity across different demographics.



    The growing emphasis on natural and organic ingredients in skincare products is another crucial factor propelling the market. Consumers are becoming more conscious of the ingredients in their skincare products, preferring those that are free from harmful chemicals and synthetic additives. This shift in consumer preference has encouraged manufacturers to develop facial masks that incorporate natural and organic components, thereby catering to the demand for clean beauty products.



    The rise in disposable income, particularly in emerging economies, has further bolstered the beauty facial mask market. With higher disposable incomes, consumers are willing to spend more on premium skincare products that promise better results. This is particularly evident in regions like the Asia Pacific, where economic growth has led to an increase in spending on personal care products. Additionally, the surge in urbanization has exposed more people to pollution and stress, leading to a higher demand for skincare solutions such as facial masks that offer rejuvenation and relaxation.



    The Whitening Mask segment is gaining traction as consumers increasingly seek solutions for a brighter and more even skin tone. Whitening masks are formulated with ingredients that target hyperpigmentation and dark spots, offering a luminous complexion. These masks often contain components like vitamin C, niacinamide, and licorice extract, known for their skin-brightening properties. As consumers become more aware of the benefits of these ingredients, the demand for whitening masks continues to rise. The trend is particularly strong in regions where fair and radiant skin is culturally valued, further driving the segment's growth.



    Regionally, the Asia Pacific dominates the beauty facial mask market, driven by high consumption in countries like China, Japan, and South Korea. The region's robust beauty and personal care industry, combined with a strong cultural emphasis on skincare, has led to a substantial market share. North America and Europe are also significant markets, characterized by a high level of awareness and demand for innovative and premium skincare products. Meanwhile, Latin America, the Middle East, and Africa are witnessing gradual growth, fueled by increasing urbanization and rising disposable incomes.



    Product Type Analysis



    The beauty facial mask market offers a diverse range of product types, each catering to different consumer needs and preferences. Sheet masks have become particularly popular due to their convenience and efficacy. These masks are pre-soaked with various serums and essences that provide deep hydration and nourishment to the skin. The popularity of K-beauty (Korean beauty) trends has significantly influenced the growth of sheet masks globally, as consumers seek quick and easy skincare solutions that offer instant results.



    Cream masks are another widely used type, known for their rich and creamy texture that provides intense moisture and nourishment. These masks are particularly favored by individuals with dry skin, as they help to restore the skin's moisture barrier, making it soft and supple. Cream masks often contain a high concentration of oils, vitamins, and other nourishing ingredients that penetrate deep into the skin, providing long-lasting hydration.



    Clay masks are renowned for their ability to detoxify and purify the skin. They are particularly beneficial for individuals with oily or acne-prone skin, as they help to absorb excess oil and draw out impurities from the pores.

  15. Dichotomous confusion matrix

    • plos.figshare.com
    xls
    Updated Mar 11, 2024
    Dingyuan Hu; Shiya Qu; Yuhang Jiang; Chunyu Han; Hongbin Liang; Qingyan Zhang (2024). Dichotomous confusion matrix. [Dataset]. http://doi.org/10.1371/journal.pone.0295536.t001
    Available download formats: xls
    Dataset updated
    Mar 11, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Dingyuan Hu; Shiya Qu; Yuhang Jiang; Chunyu Han; Hongbin Liang; Qingyan Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Brain extraction is an important prerequisite for the automated diagnosis of intracranial lesions and determines, to a certain extent, the accuracy of subsequent lesion identification, localization, and segmentation. To address the problem that traditional image segmentation methods are fast but lack robustness, while Fully Convolutional Network (FCN) approaches are robust and accurate but relatively slow, this paper proposes an adaptive mask-based brain extraction method, AMBBEM, to achieve better brain extraction. The method first uses threshold segmentation, median filtering, and closing operations to generate an initial mask, then combines the ResNet50 model, a region growing algorithm, and image property analysis to further refine the mask, and finally completes brain extraction by multiplying the original image by the mask. The algorithm was tested on 22 test sets containing different lesions, and the results showed MPA = 0.9963, MIoU = 0.9924, and MBF = 0.9914, equivalent to the extraction quality of the Deeplabv3+ model. However, the method can process approximately 6.16 head CT images per second, much faster than the Deeplabv3+, U-net, and SegNet models. In summary, this method achieves accurate brain extraction from head CT images more quickly, creating good conditions for subsequent brain volume measurement and feature extraction of intracranial lesions.
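    As a rough illustration of two pieces of the method described above, a first-pass mask can be built by windowing the image intensities and applied by elementwise multiplication, and the reported MPA/MIoU scores are per-class averages over a dichotomous (foreground/background) confusion matrix. A minimal sketch, assuming an invented intensity window and toy slice rather than the paper's actual parameters:

    ```python
    # Illustrative sketch only: the intensity window and the toy slice are
    # invented, not the paper's actual parameters.

    def initial_mask(ct_slice, lo=0, hi=100):
        """First-pass binary mask: pixels inside an intensity window."""
        return [[1 if lo <= v <= hi else 0 for v in row] for row in ct_slice]

    def extract(ct_slice, mask):
        """Brain extraction as described: original image * mask, elementwise."""
        return [[v * m for v, m in zip(vr, mr)] for vr, mr in zip(ct_slice, mask)]

    def mpa_miou(pred, gt):
        """MPA and MIoU for the dichotomous (foreground/background) case."""
        tp = fp = fn = tn = 0
        for pr, gr in zip(pred, gt):
            for p, g in zip(pr, gr):
                tp += p == 1 and g == 1
                fp += p == 1 and g == 0
                fn += p == 0 and g == 1
                tn += p == 0 and g == 0
        mpa = (tp / (tp + fn) + tn / (tn + fp)) / 2             # mean per-class accuracy
        miou = (tp / (tp + fp + fn) + tn / (tn + fn + fp)) / 2  # mean per-class IoU
        return mpa, miou

    ct = [[-5, 40, 60], [200, 30, 150], [80, 90, -10]]
    mask = initial_mask(ct)
    brain = extract(ct, mask)
    mpa, miou = mpa_miou(mask, mask)  # scoring a mask against itself gives 1.0
    ```

    In the real pipeline the initial mask would be further refined (median filtering, closing, ResNet50-guided region growing) before the final multiplication.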

  16. Table Extraction Pdf Dataset

    • universe.roboflow.com
    zip
    Updated Nov 4, 2022
    Mohamed Traore (2022). Table Extraction Pdf Dataset [Dataset]. https://universe.roboflow.com/mohamed-traore-2ekkp/table-extraction-pdf/model/6
    Available download formats: zip
    Dataset updated
    Nov 4, 2022
    Dataset authored and provided by
    Mohamed Traore
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Data Table Bounding Boxes
    Description

    The dataset comes from Devashish Prasad, Ayan Gadpal, Kshitij Kapadni, Manish Visave, and Kavita Sultanpure - creators of CascadeTabNet.

    Depending on the dataset version downloaded, the images will include annotations for 'borderless' tables, 'bordered' tables, and 'cells'. Borderless tables are those in which the cells do not have borders. Bordered tables are those in which every cell in the table has a border and the table itself is bordered. Cells are the individual data points within the table.

    A subset of the full dataset, the ICDAR Table Cells Dataset, was extracted and imported to Roboflow to create this hosted version of the Cascade TabNet project. All the additional dataset components used in the full project are available here: All Files.

    Versions:

    1. Version 1, raw-images : 342 raw images of tables. No augmentations, preprocessing step of auto-orient was all that was added.
    2. Version 2, tableBordersOnly-rawImages : 342 raw images of tables. This dataset version contains the same images as version 1, but with Modify Classes applied to omit the 'cell' class from all images (rendering these images apt for training a model to detect 'borderless' tables and 'bordered' tables).

    For the versions below, a preprocessing step of Resize (416×416, fit within, white edges) was added, along with more augmentations to increase the size of the training set and make the images more uniform. Preprocessing applies to all images, whereas augmentations apply only to training set images.

    3. Version 3, augmented-FAST-model : 818 raw images of tables. Trained from scratch (no transfer learning) with the "Fast" model from Roboflow Train. 3X augmentation (generated images).
    4. Version 4, augmented-ACCURATE-model : 818 raw images of tables. Trained from scratch with the "Accurate" model from Roboflow Train. 3X augmentation.
    5. Version 5, tableBordersOnly-augmented-FAST-model : 818 raw images of tables. 'Cell' class omitted with Modify Classes. Trained from scratch with the "Fast" model from Roboflow Train. 3X augmentation.
    6. Version 6, tableBordersOnly-augmented-ACCURATE-model : 818 raw images of tables. 'Cell' class omitted with Modify Classes. Trained from scratch with the "Accurate" model from Roboflow Train. 3X augmentation.

    Example image from the dataset: https://i.imgur.com/ruizSQN.png

    Cascade TabNet in action: https://i.imgur.com/nyn98Ue.png

    CascadeTabNet is an automatic table recognition method for interpretation of tabular data in document images. We present an improved deep learning-based end-to-end approach for solving both problems of table detection and structure recognition using a single Convolutional Neural Network (CNN) model. CascadeTabNet is a Cascade mask Region-based CNN High-Resolution Network (Cascade mask R-CNN HRNet) based model that detects the regions of tables and recognizes the structural body cells from the detected tables at the same time. We evaluate our results on the ICDAR 2013, ICDAR 2019, and TableBank public datasets. We achieved 3rd rank in the ICDAR 2019 post-competition results for table detection, while attaining the best accuracy results for the ICDAR 2013 and TableBank datasets. We also attain the highest accuracy results on the ICDAR 2019 table structure recognition dataset.

    From the Original Authors:

    If you find this work useful for your research, please cite our paper:

    @misc{cascadetabnet2020,
      title={CascadeTabNet: An approach for end to end table detection and structure recognition from image-based documents},
      author={Devashish Prasad and Ayan Gadpal and Kshitij Kapadni and Manish Visave and Kavita Sultanpure},
      year={2020},
      eprint={2004.12629},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
    }

  17. Test set 1

    • plos.figshare.com
    zip
    Updated Mar 11, 2024
    Dingyuan Hu; Shiya Qu; Yuhang Jiang; Chunyu Han; Hongbin Liang; Qingyan Zhang (2024). Test set 1. [Dataset]. http://doi.org/10.1371/journal.pone.0295536.s006
    Available download formats: zip
    Dataset updated
    Mar 11, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Dingyuan Hu; Shiya Qu; Yuhang Jiang; Chunyu Han; Hongbin Liang; Qingyan Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Test set 1 from the AMBBEM brain extraction study; the methodological description is identical to that of dataset 15 above.

  18. This is the training set for the CNN

    • plos.figshare.com
    zip
    Updated Mar 11, 2024
    Dingyuan Hu; Shiya Qu; Yuhang Jiang; Chunyu Han; Hongbin Liang; Qingyan Zhang (2024). This is the training set for the CNN. [Dataset]. http://doi.org/10.1371/journal.pone.0295536.s002
    Available download formats: zip
    Dataset updated
    Mar 11, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Dingyuan Hu; Shiya Qu; Yuhang Jiang; Chunyu Han; Hongbin Liang; Qingyan Zhang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Training set for the CNN used in the AMBBEM brain extraction study; the methodological description is identical to that of dataset 15 above.

  19. SH17 Dataset for PPE Detection

    • data.niaid.nih.gov
    Updated Jul 4, 2024
    Ahmad, Hafiz Mughees (2024). SH17 Dataset for PPE Detection [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_12659324
    Dataset updated
    Jul 4, 2024
    Dataset authored and provided by
    Ahmad, Hafiz Mughees
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    We propose the Safe Human dataset, consisting of 17 different object classes, referred to as the SH17 dataset. We scraped images from the Pexels website, which offers clear usage rights for all its images, showcasing a range of human activities across diverse industrial operations.

    To extract relevant images, we used multiple queries such as manufacturing worker, industrial worker, human worker, labor, etc. The tags associated with Pexels images proved reasonably accurate. After removing duplicate samples, we obtained a dataset of 8,099 images. The dataset exhibits significant diversity, representing manufacturing environments globally, thus minimizing potential regional or racial biases. Samples of the dataset are shown below.

    Key features

    Collected from diverse industrial environments globally

    High quality images (max resolution 8192x5462, min 1920x1002)

    Average of 9.38 instances per image

    Includes small objects like ears and earmuffs (39,764 annotations < 1% image area, 59,025 annotations < 5% area)

    Classes

    Person

    Head

    Face

    Glasses

    Face-mask-medical

    Face-guard

    Ear

    Earmuffs

    Hands

    Gloves

    Foot

    Shoes

    Safety-vest

    Tools

    Helmet

    Medical-suit

    Safety-suit

    The data consists of three folders and two list files:

    images contains all images

    labels contains labels in YOLO format for all images

    voc_labels contains labels in VOC format for all images

    train_files.txt contains list of all images we used for training

    val_files.txt contains list of all images we used for validation
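    For reference, YOLO-format label files like those in the labels folder store one object per line as `class cx cy w h`, with coordinates normalized to [0, 1]. A minimal parsing sketch; the example line, class ID, and image size below are hypothetical, not taken from SH17:

    ```python
    def yolo_to_xyxy(line, img_w, img_h):
        """Convert one YOLO label line (class cx cy w h, normalized) to pixel corners."""
        cls, cx, cy, w, h = line.split()
        cx, w = float(cx) * img_w, float(w) * img_w
        cy, h = float(cy) * img_h, float(h) * img_h
        # center/size -> top-left and bottom-right corners
        return int(cls), (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

    # Hypothetical label line: class 13, centered box covering 25% x 50% of the image.
    cls_id, (x1, y1, x2, y2) = yolo_to_xyxy("13 0.5 0.5 0.25 0.5", img_w=1920, img_h=1080)
    ```

    The voc_labels folder stores the same boxes in VOC XML, which already uses absolute pixel corners, so no such conversion is needed there.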

    Disclaimer and Responsible Use:

    This dataset, scraped from the Pexels website, is intended for educational, research, and analysis purposes only. The data may be used for training machine learning models only. Users are urged to use this data responsibly, ethically, and within the bounds of legal stipulations.

    Users should adhere to Copyright Notice of Pexels when utilizing this dataset.

    Legal Simplicity: All photos and videos on Pexels can be downloaded and used for free.

    Allowed 👌

    All photos and videos on Pexels are free to use.

    Attribution is not required. Giving credit to the photographer or Pexels is not necessary but always appreciated.

    You can modify the photos and videos from Pexels. Be creative and edit them as you like.

    Not allowed 👎

    Identifiable people may not appear in a bad light or in a way that is offensive.

    Don't sell unaltered copies of a photo or video, e.g. as a poster, print or on a physical product without modifying it first.

    Don't imply endorsement of your product by people or brands on the imagery.

    Don't redistribute or sell the photos and videos on other stock photo or wallpaper platforms.

    Don't use the photos or videos as part of your trade-mark, design-mark, trade-name, business name or service mark.

    No Warranty Disclaimer:

    The dataset is provided "as is," without warranty, and the creator disclaims any legal liability for its use by others.

    Ethical Use:

    Users are encouraged to consider the ethical implications of their analyses and the potential impact on the broader community.

    GitHub Page:

    https://github.com/ahmadmughees/SH17dataset

  20. Ablation studies of main model components

    • plos.figshare.com
    xls
    Updated Apr 15, 2025
    Ding Xu; Shun Yu; Jingxuan Zhou; Fusen Guo; Lin Li; Jishizhan Chen (2025). Ablation studies of main model components. [Dataset]. http://doi.org/10.1371/journal.pone.0319905.t003
    Available download formats: xls
    Dataset updated
    Apr 15, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Ding Xu; Shun Yu; Jingxuan Zhou; Fusen Guo; Lin Li; Jishizhan Chen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Few-shot semantic segmentation aims to accurately segment objects from a limited amount of annotated data, a task complicated by intra-class variations and prototype representation challenges. To address these issues, we propose the Multi-Scale Prototype Convolutional Network (MPCN). Our approach introduces a Prior Mask Generation (PMG) module, which employs dynamic kernels of varying sizes to capture multi-scale object features. This enhances the interaction between support and query features, thereby improving segmentation accuracy. Additionally, we present a Multi-Scale Prototype Extraction (MPE) module to overcome the limitations of masked average pooling (MAP). By augmenting support set features, assessing spatial importance, and utilizing multi-scale downsampling, we obtain a more accurate prototype set. Extensive experiments conducted on the PASCAL-5i and COCO-20i datasets demonstrate that our method achieves superior performance in both 1-shot and 5-shot settings.
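    Prototype-based few-shot segmentation of this kind typically starts from masked average pooling: averaging the support image's feature vectors over the mask region to obtain one prototype vector per class. A minimal single-scale sketch; MPCN's multi-scale downsampling and spatial-importance weighting are omitted, and the shapes and values are illustrative:

    ```python
    def masked_average_prototype(features, mask):
        """Masked average pooling: mean of C-dim feature vectors where mask == 1.

        features: C x H x W nested lists; mask: H x W binary nested list.
        """
        area = sum(sum(row) for row in mask)
        h, w = len(mask), len(mask[0])
        return [
            sum(ch[i][j] * mask[i][j] for i in range(h) for j in range(w)) / max(area, 1)
            for ch in features
        ]

    # Toy example: 2-channel 2x2 feature map; the object occupies the left column.
    feats = [[[1.0, 9.0], [3.0, 9.0]],   # channel 0
             [[2.0, 9.0], [4.0, 9.0]]]   # channel 1
    mask = [[1, 0], [1, 0]]
    proto = masked_average_prototype(feats, mask)
    ```

    The resulting prototype is then compared against query features (e.g. by cosine similarity) to produce the segmentation; MPE's contribution is replacing this single average with a multi-scale, importance-weighted prototype set.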

Esri (2023). Segment Anything Model (SAM) [Dataset]. https://morocco.africageoportal.com/content/9b67b441f29f4ce6810979f5f0667ebe

Data from: Segment Anything Model (SAM)

Dataset updated
Apr 17, 2023
Dataset authored and provided by
Esri (http://esri.com/)
Description

Follow the guide and refer to this sample notebook to fine-tune this model.
Input: 8-bit, 3-band imagery.
Output: Feature class containing masks of various objects in the image.
Applicable geographies: The model is expected to work globally.
Model architecture: This model is based on the open-source Segment Anything Model (SAM) by Meta.
Training data: This model has been trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks.
Sample results: Here are a few results from the model.
