License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
The visuAAL Skin Segmentation Dataset contains 46,775 high-quality images, divided into a training set of 45,623 images and a validation set of 1,152 images. Skin areas have been obtained automatically from the FashionPedia garment dataset. The process used to extract the skin areas is explained in detail in the paper 'From Garment to Skin: The visuAAL Skin Segmentation Dataset'.
If you use the visuAAL Skin Segmentation Dataset, please cite:
How to use:
A sample of image data in the FashionPedia dataset is:
{'id': 12305,
'width': 680,
'height': 1024,
'file_name': '064c8022b32931e787260d81ed5aafe8.jpg',
'license': 4,
'time_captured': 'March-August, 2018',
'original_url': 'https://farm2.staticflickr.com/1936/8607950470_9d9d76ced7_o.jpg',
'isstatic': 1,
'kaggle_id': '064c8022b32931e787260d81ed5aafe8'}
NOTE: Not all the images in the FashionPedia dataset have a corresponding skin mask in the visuAAL Skin Segmentation Dataset, as some images contain only garment parts and no people. These images were removed when creating the visuAAL Skin Segmentation Dataset. However, all the instances in the visuAAL Skin Segmentation Dataset have their corresponding match in the FashionPedia dataset.
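As a sketch of the matching described in the note above, the snippet below links a visuAAL mask back to its FashionPedia source record via the image file name. The FashionPedia annotation file path and the mask naming scheme are assumptions; adapt them to your local copies of both datasets.

```python
import json

# A sketch of matching a visuAAL skin mask back to its FashionPedia source
# record via the image file name. The annotation file path and the mask
# naming scheme below are assumptions; adapt them to your local copies.
with open("instances_attributes_train2020.json") as f:  # hypothetical path
    fashionpedia = json.load(f)

by_file_name = {img["file_name"]: img for img in fashionpedia["images"]}

mask_file = "064c8022b32931e787260d81ed5aafe8.png"  # assumed mask naming
record = by_file_name.get(mask_file.replace(".png", ".jpg"))
if record is not None:
    print(record["id"], record["width"], record["height"])
```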
License: MIT License, https://opensource.org/licenses/MIT (license information was derived automatically)
The Skin Segmentation dataset is constructed over the B, G, R color space. The skin and non-skin samples were generated from the skin textures of face images of people of diverse ages, genders, and races.
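As an illustration of how such a pixel-level dataset can be used, here is a minimal sketch that trains a per-pixel classifier on the B, G, R values. It assumes the common distribution of this dataset as a whitespace-separated text file with one row per pixel (B, G, R, label, where 1 = skin and 2 = non-skin); the file name is also an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# One row per pixel: B, G, R, label (1 = skin, 2 = non-skin); the file name
# and layout follow the dataset's common distribution and are assumptions.
data = np.loadtxt("Skin_NonSkin.txt", dtype=np.int64)
X, y = data[:, :3], data[:, 3]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```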
License: Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/ (license information was derived automatically)
This dataset is a large collection of skin disease images compiled from various sources to provide a representative sample of multiple skin conditions. It is designed to support machine learning, image segmentation, and classification tasks, particularly in the field of dermatology.
The dataset consists of 84,794 images in total, with each category containing a balanced number of images to represent the disease adequately.
When using this dataset, please cite the following reference: Garcia-Espinosa, E., Ruiz-Castilla, J. S., & Garcia-Lamont, F. (2025). Generative AI and Transformers in Advanced Skin Lesion Classification applied on a mobile device. International Journal of Combinatorial Optimization Problems and Informatics, 16(2), 158–175. https://doi.org/10.61467/2007.1558.2025.v16i2.1078
Espinosa, E.G., Castilla, J.S.R., Lamont, F.G. (2025). Skin Disease Pre-diagnosis with Novel Visual Transformers. In: Figueroa-García, J.C., Hernández, G., Suero Pérez, D.F., Gaona García, E.E. (eds) Applied Computer Sciences in Engineering. WEA 2024. Communications in Computer and Information Science, vol 2222. Springer, Cham. https://doi.org/10.1007/978-3-031-74595-9_10
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
## Overview
Skin Lesion Segmentation (2) is a dataset for instance segmentation tasks; it contains Skin Disease annotations for 2,201 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
This is a segmentation database for the skin cancer problem, intended for training and validating segmentation models for research purposes. The segmentation labels are Lesion (Inner_Region), Skin (Outer_Region), and Noise. The initial dataset with the case IDs is also provided (Dataset_Initial) for matching the segmented regions to their initial images.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
## Overview
Skin Segmentation is a dataset for instance segmentation tasks; it contains Objects annotations for 2,500 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/ (license information was derived automatically)
This package contains the deep features extracted from the HGR (http://sun.aei.polsl.pl/~mkawulok/gestures/index.html) and ECU databases, which present hand gestures and human skin in color images, respectively. A script (read_binary_file.py) that can conveniently read these features from the binary files is included in the package.

The binary files with deep features are saved in the following form: N_features (4-byte integer), N_labels (4-byte integer), H (4-byte integer), W (4-byte integer), with H and W being the height and width of an input image. Then there are H * W rows, each holding N_features values of feature_size (4-byte float) and a label of size label_size (4-byte float); each row represents a single pixel from an input image with the corresponding class label (the 0 label denotes the background, whereas 1 denotes skin).

To extract features, we utilized a U-Net model trained in two different ways. In the Full variant, we train the model on the training data extracted from the ECU database of human skin color images, whereas in the TF variant, we exploit a model pre-trained for abnormality segmentation on a dataset of brain MRI volumes (kaggle.com/mateuszbuda/lgg-mri-segmentation), available at https://pytorch.org/hub/mateuszbuda_brain-segmentation-pytorch_unet/, and fine-tune it on the same training sample as in the Full variant. We dump the features extracted in the final convolutional layer of the model. The input images were rescaled to 512x512 while preserving the aspect ratio (background pixels are zeroed) to match the original architecture, and standardized using the mean and standard deviation of the original training dataset.

Overall, we have 898 files containing the feature vectors extracted for all pixels in 898 HGR images (separately for color and grayscale variants, and separately for Full and TF), and 3998 files containing the feature vectors extracted for all pixels in 3998 ECU images (separately for color and grayscale variants, and separately for Full and TF).

Important note: Due to the large size of the deep features, we include only a sample of the deep features extracted for the HGR and ECU datasets here. To access the full datasets, please contact Jakub Nalepa or Michal Kawulok.
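The package's read_binary_file.py is the authoritative reader, but as a rough sketch of the layout described above, a file could be parsed along these lines; the assumption that each row stores the N_features float32 values followed by N_labels float32 label values is ours.

```python
import numpy as np

def read_deep_features(path):
    """Parse one binary feature file per the layout described above (sketch)."""
    with open(path, "rb") as f:
        # Header: N_features, N_labels, H, W as 4-byte integers.
        n_features, n_labels, h, w = np.fromfile(f, dtype=np.int32, count=4)
        # Assumption: each of the H * W rows stores N_features float32
        # values followed by N_labels float32 label values.
        rows = np.fromfile(f, dtype=np.float32).reshape(h * w, n_features + n_labels)
    features = rows[:, :n_features]
    labels = rows[:, n_features:]  # 0 = background, 1 = skin
    return features, labels, (int(h), int(w))
```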
License: Apache License, v2.0, https://www.apache.org/licenses/LICENSE-2.0 (license information was derived automatically)
Image Segmentation is a crucial task in computer vision that involves dividing an image into meaningful regions or segments. These segments can correspond to objects, boundaries, or other relevant parts of the image. One common approach to image segmentation is the use of Region of Interest (ROI) techniques; a minimal color-thresholding sketch of skin segmentation follows the topic outline below.
What Is Image Segmentation?
Region of Interest (ROI) in Image Segmentation:
Skin Classification Using Image Segmentation:
Challenges in Skin Segmentation:
Applications of Skin Segmentation:
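As a concrete illustration of the skin classification idea above, here is a minimal sketch of classical skin segmentation by thresholding in the YCrCb color space with OpenCV. The threshold values are illustrative assumptions rather than tuned constants, and the input path is hypothetical.

```python
import cv2
import numpy as np

def skin_mask(bgr_image: np.ndarray) -> np.ndarray:
    """Classical skin segmentation by fixed thresholds in YCrCb space.
    The threshold values are illustrative assumptions, not tuned constants."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Remove small speckles with a morphological opening.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

image = cv2.imread("photo.jpg")  # hypothetical input path
mask = skin_mask(image)
roi = cv2.bitwise_and(image, image, mask=mask)  # skin-only region of interest
```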
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
Skin cancer is one of the most common malignant tumors worldwide, and early detection is crucial for improving its cure rate. In the field of medical imaging, accurate segmentation of lesion areas within skin images is essential for precise diagnosis and effective treatment. Due to the capacity of deep learning models to conduct adaptive feature learning through end-to-end training, they have been widely applied in medical image segmentation tasks. However, challenges such as boundary ambiguity between normal skin and lesion areas, significant variations in the size and shape of lesion areas, and different types of lesions in different samples pose significant obstacles to skin lesion segmentation. Therefore, this study introduces a novel network model called HDS-Net (Hybrid Dynamic Sparse Network), aiming to address the challenges of boundary ambiguity and variations in lesion areas in skin image segmentation. Specifically, the proposed hybrid encoder can effectively extract local feature information and integrate it with global features. Additionally, a dynamic sparse attention mechanism is introduced, mitigating the impact of irrelevant redundancies on segmentation performance by precisely controlling the sparsity ratio. Experimental results on multiple public datasets demonstrate a significant improvement in Dice coefficients, reaching 0.914, 0.857, and 0.898, respectively.
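For reference, the Dice coefficient reported here is the standard overlap measure between a predicted mask $P$ and a ground-truth mask $G$:

$$\mathrm{Dice}(P, G) = \frac{2\,|P \cap G|}{|P| + |G|}$$

A value of 1 indicates perfect overlap between prediction and ground truth; 0 indicates none.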
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/ (license information was derived automatically)
To address the challenges of training neural networks for automated diagnosis of pigmented skin lesions, the authors introduced the HAM10000 ("Human Against Machine with 10000 training images") dataset. This dataset aimed to overcome the limitations of small-sized and homogeneous dermatoscopic image datasets by providing a diverse and extensive collection. To achieve this, they collected dermatoscopic images from various populations using different modalities, which necessitated employing distinct acquisition and cleaning methods. The authors also designed semi-automatic workflows that incorporated specialized neural networks to enhance the dataset's quality. The resulting HAM10000 dataset comprised 10,015 dermatoscopic images, which were made available for academic machine learning applications through the ISIC archive. This dataset served as a benchmark for machine learning experiments and comparisons with human experts.
License: Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/ (license information was derived automatically)
Training of neural networks for automated diagnosis of pigmented skin lesions is hampered by the small size and lack of diversity of available datasets of dermatoscopic images. We tackle this problem by releasing the HAM10000 ("Human Against Machine with 10000 training images") dataset. We collected dermatoscopic images from different populations, acquired and stored by different modalities.
The final dataset consists of 10015 dermatoscopic images which can serve as a training set for academic machine learning purposes.
Cases include a representative collection of all important diagnostic categories in the realm of pigmented lesions:
- AKIEC: actinic keratoses and intraepithelial carcinoma / Bowen's disease
- BCC: basal cell carcinoma
- BKL: benign keratosis-like lesions (solar lentigines / seborrheic keratoses and lichen-planus-like keratoses)
- DF: dermatofibroma
- MEL: melanoma
- NV: melanocytic nevi
- VASC: vascular lesions (angiomas, angiokeratomas, pyogenic granulomas and haemorrhage)
More than 50% of lesions are confirmed through histopathology; the ground truth for the rest of the cases is either follow-up examination (follow_up), expert consensus (consensus), or confirmation by in-vivo confocal microscopy (confocal). The dataset includes lesions with multiple images, which can be tracked by the lesion_id column within the metadata file.
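Because a lesion can appear in several images, grouping by lesion is useful, for example to keep all images of a lesion on the same side of a train/test split. A minimal pandas sketch, assuming the metadata file is distributed as HAM10000_metadata.csv with lesion_id and image_id columns:

```python
import pandas as pd

# Group images by lesion so that all images of the same lesion stay on the
# same side of a train/test split; the metadata file name follows the
# dataset's usual distribution (HAM10000_metadata.csv) and is an assumption.
meta = pd.read_csv("HAM10000_metadata.csv")
images_per_lesion = meta.groupby("lesion_id")["image_id"].nunique()
print(images_per_lesion[images_per_lesion > 1].head())
```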
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
Skin lesion segmentation performance results of the networks on the ISIC 2017 dataset, based on our experiments.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
The dataset comprises 38 chemically stained whole-slide image (WSI) samples along with their corresponding ground truth, annotated by histopathologists for 12 classes indicating skin layers (Epidermis, Reticular dermis, Papillary dermis, Dermis, Keratin), skin tissues (Inflammation, Hair follicles, Glands), skin cancer (Basal cell carcinoma, Squamous cell carcinoma, Intraepidermal carcinoma), and background (BKG).
In the skin care market in the United States in 2024, the face skin care segment generated the highest revenue, reaching approximately ************ U.S. dollars. The body segment ranked second with around ************ U.S. dollars, while sun protection followed with about ************ U.S. dollars.
metadata.csv: Metadata for all sections in the dataset. Identifies the characteristics of each RCM en-face section. A detailed description is included in the README.txt file.
sections_500: The complete dataset of en-face sections at a resolution of 500x500 pixels. This file contains one dataset called 'sections': a three-dimensional array of uint8 values, where the first axis indexes individual sections and the second and third axes are the rows and columns of intensity values of that section.
sections_250: The complete dataset of en-face sections at a resolution of 250x250 pixels, with the same 'sections' layout as sections_500.
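The description of a named 'sections' dataset suggests these are HDF5 containers; under that assumption (check the README.txt for the authoritative format and exact file names), the sections could be loaded along these lines:

```python
import h5py

# Assuming the section files are HDF5 containers (suggested by the named
# 'sections' dataset; check README.txt for the authoritative format and
# exact file names), the 500x500 sections could be loaded like this:
with h5py.File("sections_500", "r") as f:
    sections = f["sections"][...]  # shape (n_sections, 500, 500), dtype uint8
print(sections.shape, sections.dtype)
```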
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
This study uses an in-house two-class dataset comprising 1000 texture images, 500 of which denote human skin texture, while the rest represent non-skin textures with a high degree of similarity to human skin. The skin texture images were taken by our research team using a DSLR camera to preserve fine skin texture patterns. Images were captured from various ethnic groups and skin color tones to avoid bias toward any ethnic group or skin color. Since different human body parts have different skin texture characteristics, texture images in our dataset were collected from various body parts such as the face, legs, core, and arms, with relatively equal contributions to the dataset. To better simulate real-world skin detection challenges and improve the robustness and reliability of our experiments, skin texture images were collected at various scales, directions, and lighting conditions. The remaining 500 texture images, which denote non-skin textures with a high degree of similarity to human skin, were collected from various online image repositories. These images represent the texture of wooden surfaces, sand, and other objects such as rugs, animal furs, and fabric, which are highly similar to actual skin texture, color, and hue. The dataset prepared in this study simulates challenging real-world scenarios to evaluate and compare the performance of texture analysis techniques in challenging conditions. All texture patches were captured and stored in uncompressed Tagged Image File Format (TIFF) to avoid any alteration of or compromise to the actual texture patterns. Moreover, any kind of color alteration or image enhancement was avoided. All texture images were manually resized to 150x150 pixels to equalize the contribution of each image to the model.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
Comparative experimental evaluation metrics for ISIC 2018.
Over the last two observations, the revenue is forecast to increase significantly in all segments. As part of the positive trend, the revenue achieves its maximum value across all four segments by the end of the comparison period. Notably, the Body segment stands out with the highest value of ****** million U.S. dollars. Find other insights concerning similar markets and segments, such as a comparison of revenue in Italy and a comparison of average revenue per user (ARPU) in the Philippines. The Statista Market Insights cover a broad range of additional markets.
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/ (license information was derived automatically)
The Facial Color Segmentation Dataset is tailored for the beauty and visual entertainment sectors, consisting of internet-collected images with resolutions from 1028 x 1028 to 6016 x 4016 pixels. This dataset focuses on semantic segmentation of facial skin colors, including black, yellow, white, and brown, facilitating diverse applications in cosmetics, virtual makeovers, and inclusive digital content.
Skin cancer segmentation and lung lesion segmentation datasets.