Segmentation models perform pixel-wise classification, assigning each pixel in an image to a class; the classified pixels correspond to different objects or regions in the image. These models have a wide variety of use cases across multiple domains. When used with satellite and aerial imagery, they can help identify features such as building footprints, roads, water bodies, and crop fields.

Generally, every segmentation model needs to be trained from scratch on a dataset labeled with the objects of interest, which can be an arduous and time-consuming task. Meta's Segment Anything Model (SAM) aims to be a foundation model that can segment (as the name suggests) anything using zero-shot learning and generalize across domains without additional training. SAM is trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks. This makes the model highly robust at identifying object boundaries and differentiating between various objects across domains, even ones it has never seen before. Use this model to extract masks of various objects in any image.

Using the model
Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Fine-tuning the model
This model can be fine-tuned using the SamLoRA architecture in ArcGIS. Follow the guide and refer to this sample notebook to fine-tune this model.

Input
8-bit, 3-band imagery.

Output
Feature class containing masks of various objects in the image.

Applicable geographies
The model is expected to work globally.

Model architecture
This model is based on the open-source Segment Anything Model (SAM) by Meta.

Training data
This model has been trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks.

Sample results
Here are a few results from the model.
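Conceptually, the pixel-wise classification described above can be sketched in a few lines of NumPy. This toy example (not SAM itself; the intensity thresholds and class labels are made up for illustration) assigns each pixel to a class and extracts a binary mask for one class:

```python
import numpy as np

# Toy 4x4 grayscale "image" with values in 0..255.
img = np.array([
    [ 10,  20, 200, 210],
    [ 15,  25, 205, 215],
    [120, 130,  30,  40],
    [125, 135,  35,  45],
])

# Pixel-wise classification by intensity thresholds:
# class 0 = dark, class 1 = mid, class 2 = bright.
classes = np.digitize(img, bins=[100, 180])

# A per-class binary mask, the kind of output a segmentation model produces.
bright_mask = (classes == 2).astype(np.uint8)
print(bright_mask)
```

A real model replaces the hand-picked thresholds with a learned per-pixel decision, but the output format (one class label or binary mask per pixel) is the same.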
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset provides supplemental information for the manuscript, "Diverse terracing practices revealed by automated lidar analysis across the Sāmoan islands", submitted to Archaeological Prospection. The dataset contains a trained Mask R-CNN deep learning model designed for detecting archaeological terracing features on the islands of American Samoa, associated training data, and the raw and cleaned output of detected terraces.
https://www.archivemarketresearch.com/privacy-policy
The global seaweed mask market is experiencing robust growth, driven by increasing consumer awareness of natural skincare products and the proven benefits of seaweed extracts for skin health. Seaweed's rich nutrient profile, including vitamins, minerals, and antioxidants, addresses various skin concerns, making it a popular ingredient in face masks. The market is segmented by type (anti-acne, hydrating, whitening, and others) and application (dry skin, sunburnt skin, sensitive skin, and others), catering to diverse consumer needs.

While precise figures for market size and CAGR are unavailable, a reasonable estimate, based on the growth of the broader natural skincare market and the increasing popularity of seaweed-based products, suggests a market size of approximately $500 million in 2025, with a projected CAGR of 7% from 2025 to 2033. This growth is fueled by several factors, including the rising demand for organic and sustainable beauty products, the expanding e-commerce sector facilitating easy access to these products, and increased marketing and brand promotion emphasizing seaweed's skin benefits. The market's geographical spread is vast, with North America and Europe currently holding significant shares, but the Asia-Pacific region is poised for rapid growth driven by rising disposable incomes and increased adoption of skincare routines.

Several factors, however, could restrain market growth. These include potential seasonal variations in seaweed availability, fluctuations in raw material prices, and the emergence of competing skincare ingredients. The presence of established players like Benedetta, LUSH, and Algo, alongside smaller niche brands, indicates a competitive landscape. To maintain growth, companies are focusing on innovation, developing specialized seaweed masks targeting specific skin concerns, and emphasizing sustainability in their sourcing and manufacturing processes. The market's future success relies on continued research showcasing the efficacy of seaweed extracts, effective marketing highlighting these benefits, and sustainable sourcing practices that support the long-term viability of the industry.
https://www.cognitivemarketresearch.com/privacy-policy
The global Mask Blank market size in 2025 was XX Million. The Mask Blank industry's compound annual growth rate (CAGR) will be XX% from 2025 to 2033.
https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
This collection contains 406 ROI masks in MATLAB format defining the low grade glioma (LGG) tumour region on T1-weighted (T1W), T2-weighted (T2W), T1-weighted post-contrast (T1CE) and T2-flair (T2F) MR images of 108 different patients from the TCGA-LGG collection. From this subset of 108 patients, 81 patients have ROI masks drawn for all four MRI sequences (T1W, T2W, T1CE and T2F), and 27 patients have ROI masks drawn for three or fewer of the four MRI sequences. The ROI masks were used to extract texture features in order to develop radiomic-based multivariable models for the prediction of isocitrate dehydrogenase 1 (IDH1) mutation, 1p/19q codeletion status, histological grade and tumour progression. Clinical data (188 patients in total from the TCGA-LGG collection, some incomplete depending on the clinical attribute), VASARI scores (188 patients in total from the TCGA-LGG collection, 178 complete) with feature keys, and source code used in this study are also available with this collection. Please contact Martin Vallières (mart.vallieres@gmail.com) of the Medical Physics Unit of McGill University for any scientific inquiries about this dataset.
https://www.wiseguyreports.com/pages/privacy-policy
BASE YEAR | 2024 |
HISTORICAL DATA | 2019 - 2024 |
REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
MARKET SIZE 2023 | 7.74 (USD Billion) |
MARKET SIZE 2024 | 8.21 (USD Billion) |
MARKET SIZE 2032 | 13.2 (USD Billion) |
SEGMENTS COVERED | Function, Extract/Ingredients, Target Audience, Regional |
COUNTRIES COVERED | North America, Europe, APAC, South America, MEA |
KEY MARKET DYNAMICS | Rising demand for natural and organic skincare; growing popularity of sheet masks; innovations in paper materials and formulations; increasing focus on personalization; expansion into emerging markets |
MARKET FORECAST UNITS | USD Billion |
KEY COMPANIES PROFILED | Innisfree, My Beauty Diary, Too Cool For School, Mediheal, Etude House, The History of Whoo, AHC, Sulwhasoo, TonyMoly, Nature Republic, Laneige, Dr. Jart+, Klairs, Mamonde, SKII |
MARKET FORECAST PERIOD | 2025 - 2032 |
KEY MARKET OPPORTUNITIES | Growing demand for natural and biodegradable skincare products; rising disposable income and improving living standards; increasing popularity of sheet masks among consumers; technological advancements in mask design and formulation; expansion into emerging markets |
COMPOUND ANNUAL GROWTH RATE (CAGR) | 6.12% (2025 - 2032) |
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Brain extraction is a critical pre-processing step in brain magnetic resonance imaging (MRI) analytical pipelines. In rodents, it is often achieved by manually editing brain masks slice-by-slice, a time-consuming task whose workload increases with higher-resolution datasets. We recently demonstrated successful automatic brain extraction via a deep-learning-based framework, U-Net, using 2D convolutions. However, such an approach cannot make use of the rich 3D spatial-context information in volumetric MRI data. In this study, we advanced our previously proposed U-Net architecture by replacing all 2D operations with their 3D counterparts, creating a 3D U-Net framework. We trained and validated our model using a recently released CAMRI rat brain database acquired at isotropic spatial resolution, including T2-weighted turbo-spin-echo structural MRI and T2*-weighted echo-planar-imaging functional MRI. The performance of our 3D U-Net model was compared with existing rodent brain extraction tools, including Rapid Automatic Tissue Segmentation, Pulse-Coupled Neural Network, SHape descriptor selected External Regions after Morphologically filtering, and our previously proposed 2D U-Net model. 3D U-Net demonstrated superior performance in Dice, Jaccard, center-of-mass distance, Hausdorff distance, and sensitivity. Additionally, we demonstrated the reliability of 3D U-Net under various noise levels, evaluated optimal training sample sizes, and disseminated all source code publicly, in the hope that this approach will benefit the rodent MRI research community.

Significant Methodological Contribution: We proposed a deep-learning-based framework to automatically identify rodent brain boundaries in MRI. With a fully 3D convolutional network model, 3D U-Net, our proposed method demonstrated improved performance compared to current automatic brain extraction methods across several quantitative metrics (Dice, Jaccard, PPV, SEN, and Hausdorff). We trust that this tool will avoid human bias and streamline pre-processing steps during high-resolution 3D rodent brain MRI data analysis. The software developed herein has been disseminated freely to the community.
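The overlap metrics reported above, Dice and Jaccard, have standard definitions that can be computed directly from a pair of binary masks; a minimal sketch, independent of any particular model:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index (IoU): |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

pred = np.array([[1, 1, 0, 0]], dtype=bool)   # predicted brain mask
truth = np.array([[1, 0, 0, 0]], dtype=bool)  # manual reference mask
print(dice(pred, truth))     # 2*1/(2+1) ≈ 0.667
print(jaccard(pred, truth))  # 1/2 = 0.5
```

Both metrics range from 0 (no overlap) to 1 (identical masks); Dice weights the intersection more heavily, which is why it is usually the higher of the two.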
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Google Trends data have been used to investigate various themes in online information seeking. It was unclear whether populations in different parts of the world paid the same amount of attention to different mask types during the COVID-19 pandemic. This study aimed to reveal which types of masks were frequently searched by the public in different countries, and to evaluate whether public attention to masks was related to mandatory mask policy, the stringency of that policy, and the transmission rate of COVID-19. By referring to an open dataset hosted at the online database Our World in Data, the 10 countries with the highest total number of COVID-19 cases as of 9 February 2022 were identified. For each of these countries, the weekly new cases per million population, reproduction rate (of COVID-19), stringency index, and face covering policy score were computed from the raw daily data. Google Trends was queried to extract the relative search volume (RSV) for different types of masks in each of these countries. Results showed that Google searches for N95 masks were predominant in India, whereas surgical masks were predominant in Russia, FFP2 masks in Spain, and cloth masks in both France and the United Kingdom. The United States, Brazil, Germany, and Turkey each had two predominant mask types. Online searching behavior for masks varied markedly across countries. For most of the surveyed countries, online searching for masks peaked during the first wave of the COVID-19 pandemic, before governments implemented mandatory mask wearing. The search for masks correlated positively with the government response stringency index but not with the COVID-19 reproduction rate or the new cases per million.
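The kind of correlation reported above, between mask search volume and the stringency index, can be computed with a Pearson coefficient via `np.corrcoef` (the study's exact method and data are not reproduced here; the weekly values below are hypothetical, for illustration only):

```python
import numpy as np

# Hypothetical weekly values for one country (illustration only).
rsv = np.array([12, 35, 80, 60, 40, 25, 20], dtype=float)         # mask search volume
stringency = np.array([10, 30, 70, 65, 50, 35, 30], dtype=float)  # stringency index

# Pearson correlation coefficient between the two series.
r = np.corrcoef(rsv, stringency)[0, 1]
print(round(r, 2))
```

A value of r near +1 indicates that search interest rose and fell together with policy stringency, as the study observed.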
https://www.datainsightsmarket.com/privacy-policy
The purslane extract market, valued at $7,298 million in 2025, is experiencing robust growth, projected to expand at a Compound Annual Growth Rate (CAGR) of 23.2% from 2025 to 2033. This significant growth is driven by the increasing consumer demand for natural and organic skincare and cosmetic products. The rising awareness of purslane's potent antioxidant and anti-inflammatory properties, coupled with its efficacy in treating various skin conditions like acne and eczema, fuels market expansion. Furthermore, the versatility of purslane extract across diverse applications, including facial masks, toners, and toiletries, contributes to its widespread adoption by both established cosmetic companies and emerging brands. The market segmentation, with 50ML and 250ML packaging options catering to varying consumer needs, further enhances market accessibility and appeal.

Growth is particularly strong in North America and Asia Pacific, regions with a high concentration of consumers prioritizing natural beauty solutions and displaying a willingness to invest in premium skincare. The market's expansion is further bolstered by ongoing research exploring purslane's potential benefits beyond skincare, potentially opening new avenues in the pharmaceutical and nutraceutical sectors.

However, challenges remain. Fluctuations in raw material supply and the need for stringent quality control to ensure consistent product efficacy could impact market growth. Furthermore, potential competition from synthetic alternatives and the need for effective marketing and consumer education to highlight purslane's unique benefits are crucial considerations for continued market expansion. The presence of established players like Plamed Green Science Group and Durae Corporation alongside emerging brands indicates a competitive landscape requiring innovation and strategic marketing to capture market share. The forecast period (2025-2033) presents significant opportunities for market players to capitalize on this burgeoning industry.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract
The dataset was derived by the Bioregional Assessment Programme by clipping the Victoria - Seamless Geology 2014 dataset (GUID: 2872d02e-66cb-42b6-9e5a-63abc8ad871b) to the Gippsland Basin bioregion Project Boundary dataset (GUID: 2872d02e-66cb-42b6-9e5a-63abc8ad871b). You can find a link to the parent datasets in the Lineage Field in this metadata statement. The History Field in this metadata statement describes how this dataset was derived. The dataset shows the surface geology of the Gippsland Basin bioregion at 250k scale.

Dataset History
This dataset was created using the 'Extract by Mask (Spatial Analyst)' tool within ESRI ArcMap 10.2 to clip the geology layer GeolUnit_250k_py.shp (from the dataset Victoria - Seamless Geology 2014, GUID: 2872d02e-66cb-42b6-9e5a-63abc8ad871b) to the Gippsland Basin bioregion Project Boundary dataset (GUID: 2872d02e-66cb-42b6-9e5a-63abc8ad871b).

Dataset Citation
Bioregional Assessment Programme (2015) Geology 250k - Gippsland Basin bioregion clip. Bioregional Assessment Derived Dataset. Viewed 29 September 2017, http://data.bioregionalassessments.gov.au/dataset/e248f087-ed0e-4b92-9c66-2fc239f94f58.

Dataset Ancestors
Derived From Victoria - Seamless Geology 2014
Derived From Gippsland Project boundary
Derived From GEODATA TOPO 250K Series 3
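Conceptually, an "Extract by Mask" operation keeps raster cells that fall inside the clip boundary and assigns NoData everywhere else. A minimal NumPy sketch of the idea (not the ArcMap tool itself; the NoData value and toy raster are made up):

```python
import numpy as np

NODATA = -9999  # assumed NoData sentinel, illustration only

# Toy 3x3 raster and a boolean boundary mask (True = inside the clip polygon).
raster = np.array([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]])
inside = np.array([[True,  True,  False],
                   [True,  False, False],
                   [False, False, False]])

# Keep cells inside the boundary, set the rest to NoData.
clipped = np.where(inside, raster, NODATA)
print(clipped)
```

The real tool additionally handles coordinate systems, cell alignment, and vector-to-raster conversion of the boundary polygon; the masking step itself reduces to this element-wise selection.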
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Brazilian Federal District Area 01 dataset provides orthoimages and surface models for extracting the road network by deep learning fusion in a suburban area. The dataset contains high-resolution images split into training, validation, and test folders. Each folder contains img, imgm, and mask subfolders with GeoTIFF files (3034x3122 pixels) at 20 cm spatial resolution. Subfolder "img" holds RGB orthoimages; "imgm" holds surface-model images (DSM, DTM, nDSM); "mask" holds binary segmentation masks.
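Of the three surface-model layers bundled in "imgm", the nDSM (normalized DSM) is conventionally derived as the DSM minus the DTM, i.e. the height of objects above bare earth; a toy sketch with illustrative elevation values:

```python
import numpy as np

# Toy elevation grids in meters: surface (DSM) and bare-earth terrain (DTM).
dsm = np.array([[105.0, 112.0],
                [103.0, 101.0]])
dtm = np.array([[100.0, 100.0],
                [101.0, 101.0]])

# Normalized DSM: height of buildings, trees, etc. above the ground.
ndsm = dsm - dtm
print(ndsm)  # 5 m and 12 m objects in the top row; 2 m and 0 m below
```

Feeding the nDSM alongside RGB lets a fusion model distinguish ground-level roads from elevated structures of similar appearance.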
https://www.promarketreports.com/privacy-policy
Types:
Activated Charcoal Masks: Effectively remove impurities and toxins, leaving skin feeling clean and refreshed. Often favored for their ability to draw out excess oil and minimize the appearance of pores.
Clay Masks: Deeply cleanse and absorb excess oil, ideal for oily and combination skin types. Various clays, such as kaolin and bentonite, offer diverse benefits.
Peel-Off Gel Masks: Provide gentle exfoliation, removing dead skin cells to reveal brighter, smoother skin. Often infused with hydrating ingredients to prevent dryness.
Peel-Off Biocellulose Masks: Offer superior hydration and firming effects due to the unique biocellulose material's ability to deliver active ingredients effectively. Often favored for their luxurious feel and noticeable results.
Sheet Masks (Peel-Off): Combine the convenience of sheet masks with the satisfying peel-off experience, offering targeted treatments for specific skin concerns.

Benefits:
Deep Cleansing: Removes dirt, oil, and impurities from pores.
Blackhead Removal: Helps to extract blackheads and minimize their appearance.
Acne Reduction: Certain formulations can help to reduce acne breakouts and inflammation.
Skin Brightening: Exfoliation and active ingredients contribute to brighter, more radiant skin.
Pore Minimization: Helps to temporarily reduce the appearance of pores.
Improved Skin Texture: Leaves skin feeling smoother and softer.

Recent developments include:
April 2022: In partnership with India's top retailer House of Beauty, Freeman, America's No. 1 award-winning face mask brand, announced its entry into the Indian beauty industry. By December of that year, in addition to its initial launch on Myntra alone, Goddess Beauty's online and offline stores will also carry the brand. Their top products, which will be included in Freeman's offering in India, include Charcoal Black Sugar Gel Mask, Day & Night dual chamber mask, Anti-stress dead sea mineral clay mask, cleansing sweet tea and lemon peel-off, and Deep Cleaning Charcoal and Sugar mud mask.
January 2022: For over a century, Oscar Mayer has added humor to routine situations by inventing 27-foot-long rolling Wieners, developing catchy commercial jingles, and encouraging families to make iconic face masks by poking holes in bologna slices for the eyes and mouth. In homage to what makes Oscar Mayer so recognizable, the company is launching the real deal: a face mask inspired by bologna that refreshes skin while bringing back fond memories of childhood.

Notable trends: Growing spending on skin and facial care products is driving market growth.
https://dataintelo.com/privacy-and-policy
The global beauty facial mask market size was valued at approximately USD 5.8 billion in 2023, and it is expected to reach USD 9.2 billion by 2032, growing at a CAGR of 5.1% during the forecast period. The market is driven by a rising consumer inclination towards personal grooming and skincare, heightened awareness regarding the benefits of facial masks, and the continuous introduction of innovative products.
One of the key growth factors in the beauty facial mask market is the increasing awareness of skincare and beauty routines among consumers. More people are investing time and resources in skincare to achieve a healthier complexion, which has significantly boosted the demand for facial masks. The rapid growth of social media influencers and beauty bloggers who endorse various skincare products has also played a pivotal role in promoting facial masks, driving their popularity across different demographics.
The growing emphasis on natural and organic ingredients in skincare products is another crucial factor propelling the market. Consumers are becoming more conscious of the ingredients in their skincare products, preferring those that are free from harmful chemicals and synthetic additives. This shift in consumer preference has encouraged manufacturers to develop facial masks that incorporate natural and organic components, thereby catering to the demand for clean beauty products.
The rise in disposable income, particularly in emerging economies, has further bolstered the beauty facial mask market. With higher disposable incomes, consumers are willing to spend more on premium skincare products that promise better results. This is particularly evident in regions like the Asia Pacific, where economic growth has led to an increase in spending on personal care products. Additionally, the surge in urbanization has exposed more people to pollution and stress, leading to a higher demand for skincare solutions such as facial masks that offer rejuvenation and relaxation.
The Whitening Mask segment is gaining traction as consumers increasingly seek solutions for a brighter and more even skin tone. Whitening masks are formulated with ingredients that target hyperpigmentation and dark spots, offering a luminous complexion. These masks often contain components like vitamin C, niacinamide, and licorice extract, known for their skin-brightening properties. As consumers become more aware of the benefits of these ingredients, the demand for whitening masks continues to rise. The trend is particularly strong in regions where fair and radiant skin is culturally valued, further driving the segment's growth.
Regionally, the Asia Pacific dominates the beauty facial mask market, driven by high consumption in countries like China, Japan, and South Korea. The region's robust beauty and personal care industry, combined with a strong cultural emphasis on skincare, has led to a substantial market share. North America and Europe are also significant markets, characterized by a high level of awareness and demand for innovative and premium skincare products. Meanwhile, Latin America, the Middle East, and Africa are witnessing gradual growth, fueled by increasing urbanization and rising disposable incomes.
The beauty facial mask market offers a diverse range of product types, each catering to different consumer needs and preferences. Sheet masks have become particularly popular due to their convenience and efficacy. These masks are pre-soaked with various serums and essences that provide deep hydration and nourishment to the skin. The popularity of K-beauty (Korean beauty) trends has significantly influenced the growth of sheet masks globally, as consumers seek quick and easy skincare solutions that offer instant results.
Cream masks are another widely used type, known for their rich and creamy texture that provides intense moisture and nourishment. These masks are particularly favored by individuals with dry skin, as they help to restore the skin's moisture barrier, making it soft and supple. Cream masks often contain a high concentration of oils, vitamins, and other nourishing ingredients that penetrate deep into the skin, providing long-lasting hydration.
Clay masks are renowned for their ability to detoxify and purify the skin. They are particularly beneficial for individuals with oily or acne-prone skin, as they help
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Brain extraction is an important prerequisite for the automated diagnosis of intracranial lesions and determines, to a certain extent, the accuracy of subsequent lesion identification, localization, and segmentation. Traditional image segmentation methods are fast at extraction but lack robustness, while fully convolutional network (FCN) approaches are robust and accurate but relatively slow. To address this, this paper proposes an adaptive mask-based brain extraction method, AMBBEM, to achieve better brain extraction. The method first applies threshold segmentation, median filtering, and closing operations to generate an initial mask, then combines a ResNet50 model, a region-growing algorithm, and image property analysis to further refine the mask, and finally completes brain extraction by multiplying the original image by the mask. The algorithm was tested on 22 test sets containing different lesions, and the results showed MPA = 0.9963, MIoU = 0.9924, and MBF = 0.9914, equivalent to the extraction performance of the Deeplabv3+ model. However, the method can process approximately 6.16 head CT images per second, much faster than the Deeplabv3+, U-Net, and SegNet models. In summary, this method achieves accurate brain extraction from head CT images more quickly, creating good conditions for subsequent brain volume measurement and feature extraction of intracranial lesions.
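The core of the pipeline described above, threshold the image into a binary mask and then multiply the original by that mask, can be sketched in NumPy. This toy example omits the median filtering, closing, and ResNet50/region-growing refinement steps, and the intensity values and threshold are made up:

```python
import numpy as np

# Toy CT slice: brain tissue brighter than the surrounding background.
slice_hu = np.array([[  5.0,  10.0,   8.0],
                     [ 12.0, 140.0, 150.0],
                     [  9.0, 145.0, 138.0]])

mask = (slice_hu > 50).astype(float)  # threshold segmentation -> binary mask
brain_only = slice_hu * mask          # multiply original image by the mask
print(brain_only)
```

The method's refinement stages exist precisely because a fixed threshold alone misclassifies skull and lesion voxels; the final extraction step, however, is exactly this element-wise multiplication.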
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset comes from Devashish Prasad, Ayan Gadpal, Kshitij Kapadni, Manish Visave, and Kavita Sultanpure - creators of CascadeTabNet.
Depending on the dataset version downloaded, the images will include annotations for 'borderless' tables, 'bordered' tables, and 'cells'. Borderless tables are those in which no cell has a border. Bordered tables are those in which every cell has a border and the table itself is bordered. Cells are the individual data points within the table.
A subset of the full dataset, the ICDAR Table Cells Dataset, was extracted and imported to Roboflow to create this hosted version of the Cascade TabNet project. All the additional dataset components used in the full project are available here: All Files.
For the versions below, a preprocessing step of Resize (416x416, fit within, white edges) was added, along with more augmentations, to increase the size of the training set and make the images more uniform. Preprocessing applies to all images, whereas augmentations apply only to training set images.
3. Version 3, augmented-FAST-model: 818 raw images of tables. Trained from scratch (no transfer learning) with the "Fast" model from Roboflow Train. 3X augmentation (generated images).
4. Version 4, augmented-ACCURATE-model: 818 raw images of tables. Trained from scratch with the "Accurate" model from Roboflow Train. 3X augmentation.
5. Version 5, tableBordersOnly-augmented-FAST-model: 818 raw images of tables. 'Cell' class omitted with Modify Classes. Trained from scratch with the "Fast" model from Roboflow Train. 3X augmentation.
6. Version 6, tableBordersOnly-augmented-ACCURATE-model: 818 raw images of tables. 'Cell' class omitted with Modify Classes. Trained from scratch with the "Accurate" model from Roboflow Train. 3X augmentation.
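The "fit within, white edges" resize is a letterbox-style operation: scale the image to fit inside the target square while preserving aspect ratio, then pad the remainder with white. A rough NumPy sketch (nearest-neighbor resampling; Roboflow's own implementation may differ in resampling and padding placement):

```python
import numpy as np

def fit_within_white(img, size=416):
    """Resize to fit within a size x size square (preserving aspect ratio)
    and pad the remainder with white. Nearest-neighbor, illustrative only."""
    h, w = img.shape[:2]
    scale = min(size / h, size / w)
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    # Nearest-neighbor index maps for rows and columns.
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    # White canvas, with the resized image centered on it.
    out = np.full((size, size) + img.shape[2:], 255, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out

img = np.zeros((200, 100, 3), dtype=np.uint8)  # tall, all-dark test image
canvas = fit_within_white(img)
print(canvas.shape)  # (416, 416, 3)
```

Unlike a plain stretch to 416x416, this keeps table geometry undistorted, which matters for structure-recognition labels.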
Example Image from the Dataset: https://i.imgur.com/ruizSQN.png
Cascade TabNet in Action: https://i.imgur.com/nyn98Ue.png
CascadeTabNet is an automatic table recognition method for interpreting tabular data in document images. We present an improved deep-learning-based end-to-end approach that solves both table detection and structure recognition with a single Convolutional Neural Network (CNN) model. CascadeTabNet is a Cascade mask Region-based CNN High-Resolution Network (Cascade mask R-CNN HRNet) model that detects table regions and recognizes the structural body cells of the detected tables at the same time. We evaluate our results on the ICDAR 2013, ICDAR 2019, and TableBank public datasets. We achieved 3rd rank in the ICDAR 2019 post-competition results for table detection while attaining the best accuracy results on the ICDAR 2013 and TableBank datasets. We also attain the highest accuracy results on the ICDAR 2019 table structure recognition dataset.
If you find this work useful for your research, please cite our paper:

@misc{cascadetabnet2020,
  title={CascadeTabNet: An approach for end to end table detection and structure recognition from image-based documents},
  author={Devashish Prasad and Ayan Gadpal and Kshitij Kapadni and Manish Visave and Kavita Sultanpure},
  year={2020},
  eprint={2004.12629},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
We propose the Safe Human dataset, consisting of 17 different object classes, referred to as the SH17 dataset. We scraped images from the Pexels website, which offers clear usage rights for all its images, showcasing a range of human activities across diverse industrial operations.
To collect relevant images, we used multiple search queries such as manufacturing worker, industrial worker, human worker, labor, etc. The tags associated with Pexels images proved reasonably accurate. After removing duplicate samples, we obtained a dataset of 8,099 images. The dataset exhibits significant diversity, representing manufacturing environments globally, thus minimizing potential regional or racial biases. Samples of the dataset are shown below.
Key features
Collected from diverse industrial environments globally
High-quality images (maximum resolution 8192x5462, minimum 1920x1002)
Average of 9.38 instances per image
Includes small objects like ears and earmuffs (39,764 annotations < 1% image area, 59,025 annotations < 5% area)
Classes
Person
Head
Face
Glasses
Face-mask-medical
Face-guard
Ear
Earmuffs
Hands
Gloves
Foot
Shoes
Safety-vest
Tools
Helmet
Medical-suit
Safety-suit
The data consists of three folders and two text files:
images contains all images
labels contains labels in YOLO format for all images
voc_labels contains labels in VOC format for all images
train_files.txt contains list of all images we used for training
val_files.txt contains list of all images we used for validation
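For readers unfamiliar with the YOLO label format mentioned above: each line of a label file stores a class index followed by the box center, width, and height, all normalized to [0, 1], so converting to pixel coordinates requires the image size. A minimal converter might look like this (the example line and image size are illustrative, not taken from the dataset):

```python
# Convert one YOLO-format label line ("<class_id> <cx> <cy> <w> <h>",
# coordinates normalized to [0, 1]) into a pixel-space bounding box.

def yolo_to_pixel_box(line, img_w, img_h):
    """Return (class_id, x_min, y_min, x_max, y_max) in pixels."""
    cls, cx, cy, w, h = line.split()
    cx, w = float(cx) * img_w, float(w) * img_w
    cy, h = float(cy) * img_h, float(h) * img_h
    return (int(cls),
            round(cx - w / 2), round(cy - h / 2),
            round(cx + w / 2), round(cy + h / 2))

# e.g. a box centered mid-image, 10% wide and 20% tall, on a 1920x1002 image
box = yolo_to_pixel_box("1 0.5 0.5 0.1 0.2", 1920, 1002)
# -> (1, 864, 401, 1056, 601)
```

The normalized `w * h` product also gives the annotation's image-area fraction directly, which is how "< 1% image area" statistics like those above can be computed.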
Disclaimer and Responsible Use:
This dataset, scraped from the Pexels website, is intended for educational, research, and analysis purposes only. The data may be used for training machine learning models only. Users are urged to use this data responsibly, ethically, and within the bounds of legal stipulations.
Users should adhere to the Copyright Notice of Pexels when utilizing this dataset.
Legal Simplicity: All photos and videos on Pexels can be downloaded and used for free.
Allowed 👌
All photos and videos on Pexels are free to use.
Attribution is not required. Giving credit to the photographer or Pexels is not necessary but always appreciated.
You can modify the photos and videos from Pexels. Be creative and edit them as you like.
Not allowed 👎
Identifiable people may not appear in a bad light or in a way that is offensive.
Don't sell unaltered copies of a photo or video, e.g. as a poster, print or on a physical product without modifying it first.
Don't imply endorsement of your product by people or brands on the imagery.
Don't redistribute or sell the photos and videos on other stock photo or wallpaper platforms.
Don't use the photos or videos as part of your trade-mark, design-mark, trade-name, business name or service mark.
No Warranty Disclaimer:
The dataset is provided "as is," without warranty, and the creator disclaims any legal liability for its use by others.
Ethical Use:
Users are encouraged to consider the ethical implications of their analyses and the potential impact on the broader community.
GitHub Page:
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Few-shot semantic segmentation aims to accurately segment objects from a limited amount of annotated data, a task complicated by intra-class variations and prototype representation challenges. To address these issues, we propose the Multi-Scale Prototype Convolutional Network (MPCN). Our approach introduces a Prior Mask Generation (PMG) module, which employs dynamic kernels of varying sizes to capture multi-scale object features. This enhances the interaction between support and query features, thereby improving segmentation accuracy. Additionally, we present a Multi-Scale Prototype Extraction (MPE) module to overcome the limitations of masked average pooling (MAP). By augmenting support set features, assessing spatial importance, and utilizing multi-scale downsampling, we obtain a more accurate prototype set. Extensive experiments conducted on the PASCAL-5i and COCO-20i datasets demonstrate that our method achieves superior performance in both 1-shot and 5-shot settings.
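In the few-shot segmentation literature, class prototypes are commonly extracted with masked average pooling, which averages the support feature map over the foreground mask; the MPE module above is presented as addressing the limitations of this baseline operation. A minimal sketch of plain masked average pooling, with toy shapes and values (not the MPCN implementation):

```python
# Masked average pooling: collapse an H x W feature map into a single
# C-dimensional class prototype by averaging only the feature vectors
# at foreground (mask == 1) positions.

def masked_average_pooling(features, mask):
    """features: H x W grid of C-dim vectors; mask: H x W binary grid.
    Returns the mean feature vector over foreground positions."""
    selected = [f for row_f, row_m in zip(features, mask)
                for f, m in zip(row_f, row_m) if m]
    n = len(selected)
    return [sum(v[c] for v in selected) / n for c in range(len(selected[0]))]

feats = [[[1.0, 0.0], [3.0, 2.0]],
         [[5.0, 4.0], [7.0, 6.0]]]   # 2x2 map of 2-dim features
mask = [[1, 0],
        [1, 1]]                      # three foreground positions
prototype = masked_average_pooling(feats, mask)  # -> [13/3, 10/3]
```

A query pixel is then typically classified by its similarity to this prototype; the loss of spatial and scale information in the single averaged vector is precisely what multi-scale prototype schemes aim to recover.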