MIT License https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created by Aranya Saha
Released under MIT
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Skin cancer is among the most prevalent types of malignancy worldwide, and patient prognosis is strongly tied to the accuracy of the initial diagnosis. Clinical examination of skin lesions is a key step in the assessment of skin disease, but it has drawbacks: it is open to interpretation, time-consuming, and adds to healthcare expenditure. If detected early and treated in time, skin cancer can be controlled and its deadly impact largely averted. Convolutional neural network (CNN) algorithms can speed up the identification and differentiation of disease, which in turn supports earlier detection and treatment. To address these challenges, this research studies optimized CNN prediction models for skin cancer classification. The objectives of this study were to develop reliable optimized CNN prediction models for skin cancer classification; to handle the severe class imbalance problem, in which the skin cancer classes are much smaller than the healthy class; to evaluate model interpretability; and to develop an end-to-end smart healthcare system using explainable AI (XAI) techniques such as Grad-CAM and Grad-CAM++. This research introduces a new activation function, NGNDG-AF, designed to enhance the network's fitting and generalization ability, improve the convergence rate, and reduce computational cost. An optimized CNN and ResNet152V2 were trained on the HAM10000 dataset to differentiate between the seven forms of skin cancer. Model training used two optimizers (RMSprop and Adam) together with the NGNDG-AF activation function. Holdout validation was used to estimate the models' generalization performance on unseen data, on which the optimized CNN outperformed ResNet152V2. The efficacy of the optimized CNN with NGNDG-AF was examined in a comparative study against popular CNNs with various activation functions, which showed the better performance of NGNDG-AF, with classification accuracy reaching 99% in training and 98% in validation. The recommended system also integrates a smart healthcare application as a central component to give doctors and healthcare providers diagnostic tools that assist in the early detection of skin cancer, leading to better treatment outcomes.
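As a hedged, minimal sketch only (not the study's implementation): the snippet below illustrates the general recipe the abstract describes, a small Keras CNN for seven-class HAM10000 classification with a holdout split, the Adam optimizer, and class weighting for the imbalance. The actual NGNDG-AF definition is not given here, so `ngndg_af` is only a placeholder activation, and the data in the example is a dummy stand-in.

```python
# Minimal sketch (not the paper's implementation): a small CNN for
# seven-class HAM10000 classification with a holdout split and class
# weights for the imbalance. `ngndg_af` is a placeholder; the real
# NGNDG-AF definition is not given in the abstract.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def ngndg_af(x):
    # Placeholder custom activation; swap in the actual NGNDG-AF here.
    return tf.nn.swish(x)

def build_cnn(num_classes=7, input_shape=(28, 28, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation=ngndg_af),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation=ngndg_af),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation=ngndg_af),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Dummy stand-in data; in practice x, y come from the HAM10000 images
# and their seven integer class labels.
x = np.random.rand(1000, 28, 28, 3).astype("float32")
y = np.random.randint(0, 7, size=1000)

# Class weights counteract the heavy imbalance (nv dominates HAM10000).
counts = np.bincount(y, minlength=7)
class_weight = {i: len(y) / (7 * max(c, 1)) for i, c in enumerate(counts)}

# Holdout validation: a fixed fraction of the data is held back for evaluation.
model = build_cnn()
model.fit(x, y, validation_split=0.2, epochs=5,
          class_weight=class_weight, batch_size=64)
```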
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is an adapted version of the original "Skin Cancer MNIST" (HAM10000), converted from a classification task to a skin-lesion detection task. It contains:
- 10,000 images of human skin lesions manually annotated with bounding boxes.
- A split into 7 main classes of skin lesions (a minimal annotation-parsing sketch follows this list), including:
1. Actinic keratoses and intraepithelial carcinoma/Bowen disease (akiec): pre-malignant lesions.
2. Basal cell carcinoma (bcc): a type of skin cancer with a good prognosis.
3. Benign keratosis-like lesions (bkl): include solar lentigo, seborrheic keratosis, and lichenoid keratosis.
4. Dermatofibroma (df): common benign lesions.
5. Melanoma (mel): a malignant lesion with high clinical priority.
6. Melanocytic nevi (nv): very common benign melanocytic lesions.
7. Vascular lesions (vasc): include angiomas, angiokeratomas, pyogenic granulomas, and hemorrhages.
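As a hedged sketch only: the dataset's exact annotation format is not stated above, so the loader below assumes a YOLO-style text label per image (`class_id x_center y_center width height`, normalized) and a hypothetical file path; it simply maps the seven class codes to names and reads the bounding boxes.

```python
# Minimal sketch under assumed conventions: the real annotation format of this
# adapted HAM10000 detection set is not stated, so a YOLO-style label file
# (class_id x_center y_center width height, normalized) is assumed here.
from pathlib import Path

CLASS_NAMES = ["akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"]

def load_yolo_labels(label_path: str):
    """Parse one assumed YOLO-style label file into a list of boxes."""
    boxes = []
    for line in Path(label_path).read_text().splitlines():
        if not line.strip():
            continue
        cls, xc, yc, w, h = line.split()
        boxes.append({
            "class": CLASS_NAMES[int(cls)],
            "x_center": float(xc), "y_center": float(yc),
            "width": float(w), "height": float(h),
        })
    return boxes

# Example usage (hypothetical label path):
# for box in load_yolo_labels("labels/ISIC_0024306.txt"):
#     print(box["class"], box["x_center"], box["y_center"])
```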
This dataset was created by tsaideepak
This dataset was created by mash97
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comparison table of different existing techniques for skin cancer.
This dataset was created by Javaria Tahir
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Skin hair annotations for 75 images taken randomly from: P. Tschandl, C. Rosendahl, and H. Kittler, “The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions,” Sci. data, vol. 5, p. 180161, 2018.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Skin lesion classification performance using deep learning methods.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Identification efficiency index for HAM10000 dataset
Apache License, v2.0 https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset was created by Kiam III
Released under Apache 2.0
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overall comparison of EFFNet with other state-of-the-art models on the test dataset.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0) https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
Skin Disease GAN-Generated Lightweight Dataset
This dataset is a collection of skin disease images generated using a Generative Adversarial Network (GAN) approach. Specifically, a GAN was utilized with Stable Diffusion as the generator and a transformer-based discriminator to create realistic images of various skin diseases. The GAN approach enhances the accuracy and realism of the generated images, making this dataset a valuable resource for machine learning and computer vision applications in dermatology.
To create this dataset, a series of Low-Rank Adaptations (LoRAs) were generated for each disease category. These LoRAs were trained on the base dataset with 60 epochs and 30,000 steps using OneTrainer. Images were then generated for the following disease categories:
Due to the availability of ample public images, Melanoma was excluded from the generation process. The Fooocus API served as the generator within the GAN framework, creating images based on the LoRAs.
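The dataset's images were produced through the Fooocus API with per-disease LoRAs; as a hedged illustration only, the sketch below shows the equivalent generation step with the open-source `diffusers` library rather than the actual Fooocus-based pipeline. The base model ID, LoRA filename, prompt, and sampler settings are placeholders, not the dataset's real configuration.

```python
# Hedged illustration only: generating images from a Stable Diffusion base
# model with a per-disease LoRA using the `diffusers` library. The dataset
# itself was generated through the Fooocus API; the model ID, LoRA file,
# and prompt below are placeholders, not the dataset's actual settings.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a hypothetical per-disease LoRA (e.g., downloaded from CivitAI).
pipe.load_lora_weights(".", weight_name="basal_cell_carcinoma_lora.safetensors")

image = pipe(
    "dermatoscopic photograph of a basal cell carcinoma lesion on skin",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("bcc_generated.png")
```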
To ensure quality and accuracy, a transformer-based discriminator was employed to verify the generated images, classifying them into the correct disease categories.
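A hedged sketch of such a verification step is shown below, using a generic Hugging Face image-classification pipeline as a stand-in for the dataset's transformer-based discriminator. The checkpoint name, directory layout, and confidence threshold are assumptions; in practice a checkpoint fine-tuned on skin-disease classes would be required.

```python
# Hedged sketch: screening generated images with a transformer image
# classifier and keeping only those confidently assigned to the intended
# disease class. The checkpoint and threshold are placeholders; the
# dataset's actual discriminator is not published here.
from pathlib import Path
from transformers import pipeline

# Placeholder checkpoint; a model fine-tuned on skin-disease labels is needed
# for real screening.
classifier = pipeline("image-classification",
                      model="google/vit-base-patch16-224")

def verify_images(image_dir: str, expected_label: str, threshold: float = 0.8):
    """Return generated images whose top prediction matches the expected class."""
    accepted = []
    for path in Path(image_dir).glob("*.png"):
        top = classifier(str(path), top_k=1)[0]
        if top["label"] == expected_label and top["score"] >= threshold:
            accepted.append(path)
    return accepted

# Example usage (hypothetical directory and label):
# keep = verify_images("generated/bcc", expected_label="basal cell carcinoma")
```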
The original base dataset used to create this GAN-based dataset includes reputable sources such as:
- 2019 HAM10000 Challenge
- Kaggle
- Google Images
- Dermnet NZ
- Bing Images
- Yandex
- Hellenic Atlas
- Dermatological Atlas

The LoRAs and their recommended weights for generating images are available for download on our CivitAi profile. You can refer to this profile for detailed instructions and access to the LoRAs used in this dataset.
Generated Images: High-quality images of skin diseases generated via GAN with Stable Diffusion, using transformer-based discrimination for accurate classification.
This dataset is suitable for:
When using this dataset, please cite the following reference: Espinosa, E.G., Castilla, J.S.R., Lamont, F.G. (2025). Skin Disease Pre-diagnosis with Novel Visual Transformers. In: Figueroa-García, J.C., Hernández, G., Suero Pérez, D.F., Gaona García, E.E. (eds) Applied Computer Sciences in Engineering. WEA 2024. Communications in Computer and Information Science, vol 2222. Springer, Cham. https://doi.org/10.1007/978-3-031-74595-9_10
This dataset was created by VIVEK NARAYAN 21114108
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Skin cancer is rapidly growing globally. Over the past decade, automated diagnosis systems have been developed using image processing and machine learning. The machine learning methods require hand-crafted features, which may affect performance. More recently, convolutional neural networks (CNNs) have been applied to dermoscopic images to diagnose skin cancer, improving performance through their high-dimensional feature extraction capability. However, these methods lack global correlation of the spatial features. In this study, we design a dual-scale lightweight cross-attention vision transformer network (DSCATNet) that provides global attention to high-dimensional spatial features. In DSCATNet, we extract features from different patch sizes and perform cross-attention between them. The attention from different scales improves the spatial features by focusing on different parts of the skin lesion. Furthermore, we apply a fusion strategy to the different-scale spatial features, and the enhanced features are then fed to a lightweight transformer encoder for global attention. We validated the model's superiority on the HAM10000 and PAD datasets and compared its performance with CNN- and ViT-based methods. Our DSCATNet achieved an average kappa and accuracy of 95.84% and 97.80% on the HAM10000 dataset, respectively. Moreover, the model obtained kappa and precision values of 94.56% and 95.81% on the PAD dataset.
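The abstract describes dual-scale patch embedding with cross-attention between the two scales, followed by fusion and a lightweight transformer encoder. A minimal, hedged sketch of that general idea (not the authors' DSCATNet implementation; the patch sizes, module widths, and fusion-by-concatenation choice below are assumptions) could look like this in PyTorch:

```python
# Minimal sketch of a dual-scale cross-attention idea (not the authors'
# DSCATNet): two patch embeddings at different patch sizes, cross-attention
# between the two token sets, fusion, then a small transformer encoder.
import torch
import torch.nn as nn

class DualScaleCrossAttention(nn.Module):
    def __init__(self, dim=128, heads=4, num_classes=7):
        super().__init__()
        # Two patch embeddings at assumed scales 16 and 32.
        self.embed_small = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.embed_large = nn.Conv2d(3, dim, kernel_size=32, stride=32)
        # Cross-attention: small-scale tokens attend to large-scale tokens
        # and vice versa.
        self.cross_s2l = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_l2s = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Lightweight transformer encoder over the fused token sequence.
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 2,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        s = self.embed_small(x).flatten(2).transpose(1, 2)  # (B, Ns, dim)
        l = self.embed_large(x).flatten(2).transpose(1, 2)  # (B, Nl, dim)
        s_att, _ = self.cross_s2l(s, l, l)   # small tokens query large tokens
        l_att, _ = self.cross_l2s(l, s, s)   # large tokens query small tokens
        fused = torch.cat([s + s_att, l + l_att], dim=1)  # simple fusion
        fused = self.encoder(fused)
        return self.head(fused.mean(dim=1))  # mean-pool, then classify

# Example usage with a dummy batch of 224x224 RGB images:
model = DualScaleCrossAttention()
logits = model(torch.randn(2, 3, 224, 224))  # -> shape (2, 7)
```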
This dataset was created by Nafis Jaman
This dataset was created by HIMANSHU KUMAR SAW