Enhanced Melanoma Classifier with VGG16-CNN
Mary Adewunmi

Melanoma is the most severe form of skin cancer, and its incidence is rising in the Western world. Sun exposure is still thought to be its primary cause. Patients with malignant melanoma have a wide range of prognoses; however, public awareness initiatives encouraging early detection have produced considerable reductions in mortality rates. The disease primarily affects Caucasian men and women and carries a poor prognosis once it has spread to other parts of the body. Early detection of this malignancy is therefore critical to treatment success.
In this paper, we present experimental results for a melanoma image classifier that uses the VGG16 model to preprocess the image dataset. The dataset comprises 4596 image samples: 2239 images for training, 2239 for model validation, and 118 for testing. The preprocessed images were trained with a Convolutional Neural Network (CNN) Sequential model using a learning rate of 0.0001, the Adam optimizer, binary cross-entropy as the loss function, and accuracy as the metric. The model yields an accuracy of 93%, outperforming other deep learning models. The approach is viable and effective, and it achieves the preliminary goal of classifying melanoma lesion images.


Introduction
Melanoma has arisen as a major public health challenge in recent decades (Schadendorf et al., 2015). Melanoma is the most severe kind of skin cancer, yet it can be treated if detected early. Melanoma develops when melanocytes, the pigment-producing cells present in the skin, eye, inner ear, and leptomeninges, develop genetic abnormalities (Domingues et al., 2018). Despite accounting for only around 1% of all malignant skin tumors, cutaneous malignant melanoma is the most aggressive and deadly type of skin cancer. The rising incidence and mortality rates of melanoma have prompted a renewed focus on early identification and prevention. Dermoscopy greatly enhances the diagnostic accuracy of naked-eye examination, according to several meta-analyses. However, dermatologists and medical practitioners professionally trained in various dermoscopic algorithms achieved an average sensitivity of only about 80% for identifying melanoma. Recent articles have shown that artificial intelligence can categorize images of benign nevi and melanoma with dermatologist-level precision. Higher model performance correlates with a larger number of images; however, it is not always cost-effective to train a deep learning network on a very large image dataset. It is therefore pertinent to find ways to make the most of the available image dataset with the training model used.
Convolutional neural networks (CNNs) have been used in recent digital skin diagnostics research to categorize melanoma images with accuracies comparable to those attained by dermatologists (Nasr-Esfahani et al., 2016; Perez et al., 2019). Prior studies used huge numbers of images to train their algorithms, which were then validated against consensus decisions. When images are validated in this way, there is a strong chance that the CNN will learn the dermatologists' decision-making process, including any errors it contains. This work focuses on melanoma among 9 classes of skin cancer (see Fig. 1). Keras provides a preprocessing function specific to the VGG16 model, keras.applications.vgg16.preprocess_input, which performs additional steps such as subtracting the average RGB channel values of the original ImageNet training set. In this paper, we present a model for classifying melanoma based on the DNN introduced by the Visual Geometry Group with 16 layers (VGG16). First, we represent samples as byteplot images, in which each byte corresponds to one grayscale pixel. Using the convolutional layers of VGG16 pre-trained on the ImageNet dataset, we extract the filter activation maps (also known as bottleneck features), subtracting the average RGB channel values of the original ImageNet set from the training images.
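The mean subtraction performed by keras.applications.vgg16.preprocess_input can be illustrated with a minimal numpy sketch. This is an assumption-laden re-implementation of the arithmetic (channel flip to BGR, then subtraction of the ImageNet channel means), not the library function itself:

```python
import numpy as np

# ImageNet per-channel means in RGB order, as used by Keras' VGG16 preprocessing.
IMAGENET_MEAN_RGB = np.array([123.68, 116.779, 103.939])

def vgg16_preprocess(images_rgb):
    """Sketch of keras.applications.vgg16.preprocess_input:
    flip RGB -> BGR, then subtract the ImageNet channel means."""
    x = np.array(images_rgb, dtype=np.float64)  # copy so the input is untouched
    x = x[..., ::-1]                            # RGB -> BGR
    x -= IMAGENET_MEAN_RGB[::-1]                # subtract means in BGR order
    return x

# A 1x1 "image" whose pixel equals the channel means maps to exactly zero.
pixel = IMAGENET_MEAN_RGB.reshape(1, 1, 1, 3)
print(vgg16_preprocess(pixel))  # prints an array of zeros
```

A pure-red pixel ends up in the last (R) channel after the flip, which is a quick way to confirm the channel ordering.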

Project Framework
This framework illustrates how we arrive at a faster, high-accuracy model for training melanoma lesion images. In this work, we used the VGG16 and CNN architectures.
• Collate the datasets
• Preprocess the images with VGG16 by extracting the features and labels
• Set the parameters and fine-tune the images and environment for the CNN model
• Train the images with the CNN
• Evaluate the model with accuracy as the metric
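The dataset split reported in the abstract (4596 samples: 2239 train, 2239 validation, 118 test) can be reproduced as a simple index partition. This is a sketch; the paper does not specify how the split was actually performed, so the random shuffle here is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)     # fixed seed for reproducibility (assumed)
n_total = 4596                     # total image samples reported in the paper
indices = rng.permutation(n_total)

# Split sizes reported: 2239 train, 2239 validation, 118 test.
train_idx = indices[:2239]
val_idx = indices[2239:4478]
test_idx = indices[4478:]

print(len(train_idx), len(val_idx), len(test_idx))  # 2239 2239 118
```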

Implementation
We used an approach similar to (Mamiya & Miyata, 2020), in which the authors first preprocess images with VGG16, use it to denoise the input images, and generate unlabeled images that are then trained with a CNN. Our hyperparameter configuration is the following:
• Image preprocessing: image data generator (Chollet, 2016)
An Area under the Curve (AUC) metric of 93% was achieved, which indicates a highly accurate melanoma classifier. The results also showed no overfitting, given the decline in both training and validation loss (see figure 4a&b).
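The abstract reports training with binary cross-entropy as the loss function (with the Adam optimizer at a learning rate of 0.0001). A minimal numpy sketch of that loss, for the single-output melanoma / non-melanoma head, looks as follows; this is an illustration of the formula, not the Keras implementation used in training:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy over a batch of sigmoid outputs."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return float(np.mean(-(y_true * np.log(y_pred)
                           + (1.0 - y_true) * np.log(1.0 - y_pred))))

y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.1, 0.8, 0.2])  # hypothetical model outputs
print(round(binary_cross_entropy(y_true, y_pred), 4))  # -> 0.1643
```

The loss falls toward zero as the predicted probabilities approach the true labels, which is why its decline on both the training and validation sets (figure 4a&b) indicates the absence of overfitting.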

Results
In this work, we have studied:
• Preprocessing images with VGG16
• Training images with CNN
• Classification of lesion images into melanoma and non-melanoma (see figure 5)
The results were classified correctly when matched against the ground truth.
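The final melanoma / non-melanoma decision amounts to thresholding the model's sigmoid output. A small sketch of that step is below; the 0.5 threshold is an assumption, as the paper does not state the decision threshold used:

```python
def classify(probabilities, threshold=0.5):
    """Map sigmoid outputs to class labels.
    threshold=0.5 is an assumed default, not stated in the paper."""
    return ["Melanoma" if p >= threshold else "Non-melanoma"
            for p in probabilities]

# Hypothetical model outputs for three lesion images.
print(classify([0.92, 0.11, 0.57]))  # ['Melanoma', 'Non-melanoma', 'Melanoma']
```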

Conclusion
We have developed a melanoma classifier that achieved a performance accuracy of 93%, a precision of 1, and a recall of 0.5 on melanoma images, by preprocessing the images with VGG16. Overall, this indicates that images need to be preprocessed before they are trained on deep learning models, especially when aiming for higher performance accuracy.
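Taking the reported melanoma-class precision of 1 and recall of 0.5, the implied F1 score follows from the harmonic mean. This is a sketch of that arithmetic; the paper does not report the F1 value directly:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# Reported melanoma-class metrics: precision 1.0, recall 0.5.
print(round(f1_score(1.0, 0.5), 3))  # -> 0.667
```

The perfect precision alongside a recall of 0.5 means every image the model labeled melanoma was indeed melanoma, but half of the true melanoma images were missed, pulling the F1 score down to about 0.667.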

Future Recommendation
The test dataset for melanoma was very small compared to the other classes of skin cancer lesion images, which affected the recall and the F1 score on the predicted test images. Future work should therefore evaluate the model on a larger and more balanced melanoma test set.