Deep Learning-Based Classification of Melanoma and Non-Melanoma Skin Cancer

Eatedal Alabdulkreem, Hela Elmannai, Aymen Saad, Israa S. Kamil, Ahmed Elaraby*

Department of Computer Sciences, College of Computer and Information Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia

Department of Information Technology, College of Computer and Information Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia

Department of Information Technology, Management Technical College, Al-Furat Al-Awsat Technical University, Kufa 54003, Iraq

Department of Information Security, College of Information Technology, University of Babylon, Hilla 51001, Iraq

Department of Cybersecurity, College of Engineering and Information Technology, Buraydah Private Colleges, Buraydah 51418, Saudi Arabia

Department of Computer Science, Faculty of Computers and Information, South Valley University, Qena 83523, Egypt

Corresponding Author Email: ahmed.elaraby@svu.edu.eg

Pages: 213-223 | DOI: https://doi.org/10.18280/ts.410117

Received: 10 February 2023 | Revised: 13 April 2023 | Accepted: 31 July 2023 | Available online: 29 February 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Melanoma skin cancer is primarily characterized by a poor prognosis. Surgical treatment can achieve high cure rates when melanoma is detected early, and manual segmentation of suspected lesions aids early diagnosis. However, manual segmentation suffers from low efficiency and a risk of misclassification. Deep learning, owing to its proficiency in image object classification, has gained popularity and is widely used in medical specialties such as ophthalmology, dermatology, and radiology. This paper proposes a deep learning method using a novel lightweight convolutional neural network (LWCNN) and transfer learning techniques (GoogleNet, ResNet-18 & MobileNet-v2). These models are trained on skin scan datasets gathered from Kaggle, with and without feature enhancement, to distinguish images into two groups: melanoma and non-melanoma cells. By employing these techniques, new datasets with robust features are produced. All CNN models were tested in two experiments. In the first, the models were trained on the original datasets; the proposed model achieved 97.30% training accuracy, 88.43% testing accuracy, and a training time of 48.28 minutes. In the second experiment, we used the dataset after enhancing the features of the skin scan images, which yielded 99.18% training accuracy, 91.05% testing accuracy, and a training time of 22.54 minutes. According to the experimental results, the proposed approach achieves higher accuracy on the enhanced images than on the original ones, demonstrating its potential in skin cancer classification.

Keywords: 

image enhancement, lightweight convolutional neural network (LWCNN), melanoma classification, transfer learning

1. Introduction

Cancer is a term broadly used to define a range of diseases that can affect any part of the human body. Essentially, cancer is characterized by the abnormal growth of cells that invade neighboring tissue and spread to other organs, a process known as metastasis. This spread can result in death, making cancer the second leading cause of death worldwide. Approximately 9.6 million deaths, or one in six, are caused by cancer annually [1].

Skin cancer is one of the most common and deadly kinds of cancer, and the number of cases is escalating dramatically worldwide. Failure to diagnose skin cancer in its early stages can allow the disease to spread to other body parts through metastasis, which significantly increases the risk of mortality. However, it can be treated if detected early, making timely and accurate diagnosis a critical research area. Various machine learning approaches have been used in computer-aided diagnosis to detect skin cancer and classify its malignancy [2].

Modern artificial intelligence techniques present a promising solution to the challenge of delivering quality healthcare to patients in regions with limited access to trained dermatologists. Important advances have been made in automated applications for skin lesion classification from digital images [3].

By training a neural network on a diverse dataset containing thousands of images of both benign and malignant cases, the model can learn the complex relationships within the data. This enables it to accurately classify new images as either benign or malignant. Automating this process not only improves efficiency by minimizing false positives and negatives but also saves valuable time that can be redirected towards more productive tasks. Such a model could be particularly beneficial in locations that lack access to expert doctors [4].

In image classification, deep learning is considered a powerful tool for effective and accurate results. Its structure and operation resemble the human brain: neurons fire across the network, processing information, classifying data, drawing inferences, and producing results. A deep learning model consists of a stack of layers, collectively referred to as a neural network, that works like the neurons in a brain, identifying patterns and making predictions [5].

Neural networks, particularly CNNs, are extensively used for image analysis in many fields. In medical applications, CNNs have gained significant popularity in machine learning. To enhance the precision of identifying melanoma skin lesions, an automated decision framework has been proposed that integrates multiple machine learning approaches; this comprehensive framework leverages neural networks and other methodologies to detect melanoma with high precision [6]. Deep learning models have gained prominence in many fields due to their exceptional performance, and medicine is one such area where they have been used in diagnosis and treatment procedures [7].

One of the main challenges in computer-aided diagnosis is differentiating the lesion from the nearby healthy skin. An increasing number of researchers are currently investigating deep learning with the expectation that it will soon achieve performance comparable to that of dermatologists on dermoscopic images [8].

Addressing these limitations, we present a new framework that targets the challenges associated with reliable melanoma diagnosis. The main contribution of this paper is a deep CNN-based framework that improves the accuracy of melanoma detection.

2. Related Work

Transfer learning is a powerful tool employed when there is insufficient data to build a model from scratch; instead, it reuses insights extracted from a substantial volume of previously seen data to analyze newly acquired data. The authors [9] investigated the effectiveness of three state-of-the-art pre-trained CNN architectures as feature extractors, in combination with four machine learning classifiers, for skin lesion classification. They found that the combination of DenseNet201 as the feature extractor and a cubic SVM as the classifier yielded the highest accuracy. For accurate diagnosis, Computer-Aided Diagnosis (CAD) systems offer computer-generated analysis to assist radiologists, aiming to increase diagnostic accuracy and decrease image reading time. CAD methods for early detection are also being examined; they are applied to different types of tumor images, such as dermoscopy, MRI, mammograms, and radiographs. A CAD system for medical imaging incorporates five key stages, ranging from image acquisition to classification: data acquisition, preprocessing, segmentation, feature extraction and selection, and finally, classification [10]. The authors [11] indicated that integrating image detection techniques with computer classification capabilities has the potential to significantly improve the accuracy of skin cancer detection.

The authors [12] presented a deep learning-based approach that tackles the limitations of automatic melanoma lesion detection. The proposed method uses an enhanced encoder-decoder network architecture in which the encoder and decoder sub-networks are interconnected via skip pathways. This design supports efficient learning and feature extraction by bridging the semantic gap between the encoder and decoder feature maps.

The system incorporates a multi-stage, multi-scale approach, allowing robust analysis of melanoma lesions at different levels of detail. Pixel-wise classification of melanoma lesions is accomplished using a softmax classifier, and the presented method, called Lesion-classifier, leverages the pixel-wise results to classify skin lesions into melanoma and non-melanoma categories. In the study [13], the process begins by segmenting skin lesions from the original images using a dedicated segmentation network. The segmented lesions are then resized to a fixed size for further analysis. To classify the dermoscopy images, five classification networks with SE (Squeeze-and-Excitation) blocks are employed. Subsequently, a convolutional neural network is constructed to ensemble the results from the five classification networks, which have demonstrated excellent performance in various image classification tasks. By leveraging this ensemble approach and incorporating these high-performing networks, the proposed method achieved accurate and reliable classification of dermoscopy images for skin lesion analysis.

In the study [14], the system begins by initializing the LeNet-5 architecture and training the network for a user-specified number of epochs. Throughout training, the network generates probability values for the two classes, and the model assigns the predicted class as the one with the highest probability. The training results are then saved as a model file. The experimental findings demonstrate that the accuracy of classifying melanoma cancer images depends on the quantity of training data and the number of training epochs. The study [15] indicated that in CNN methods, pooling layers are commonly inserted at regular intervals after several convolutional layers. The pooling layer offers several advantages, such as gradually reducing the size of the feature map's output volume, which helps mitigate overfitting. This reduction is achieved through either max-pooling, which selects the maximum value, or mean pooling, which computes the average value. Hyperparameters, which are parameters with fixed values throughout model training, play a vital role in training performance. These hyperparameters, such as the size of the pooling window or the stride, determine the behavior of the pooling layer and can impact the overall training performance of the model; selecting appropriate values for them is essential to achieve optimal results. In the system proposed in the study [16], there are three fundamental elements. First, concurrent layers of convolution, activation, and max-pooling are used, with the parallel branches joined at the feature level. Subsequently, the flattened features pass through a two-level Multi-Layer Perceptron (MLP), where the number of neurons dropped at each layer must be selected carefully to prevent overfitting. Finally, classification is performed by the final layer, which includes a softmax layer. A CNN architecture consisting of 25 layers, 5 of which are convolutional, is proposed in the study [17], with a fixed sequence of layers following each convolutional layer. The proposed method combines the features extracted by the CNN with an SVM classifier and achieves acceptable accuracy.

In the study [18], a high-level model fusing fuzzy-based GrabCut and a stacked CNN (GC-SCNN) was implemented for image training. Lesion classification and feature extraction were performed across multiple publicly accessible datasets. The model and support vector machines (SVMs) were deployed after the segmentation phase. The stacked CNN approach was used to extract pertinent features from the segmented image, enabling the learning of nonlinear discriminative features from dermoscopy scans at various levels.

In the study [19], descriptive features were extracted from images using a pretrained classification network. Such networks are trained on a large-scale visual recognition challenge harnessing over a million images, categorized into diverse classes such as animals, cars, buses, tea, and cups, among others. The ResNet-50 pretrained network, with 50 deep layers, classifies the corresponding database images into 1,000 object categories. A noteworthy property of this pretrained network is its depth, defined as the length of the path from the input to the output layer, comprising a substantial number of fully connected or convolutional layers.

For lesion segmentation, the study [20] used morphological snakes (MorphACWE), whose utility lies in distinguishing between the averages inside and outside the regions of the object to be segmented. Additionally, MorphACWE does not require an explicit definition of the contour of the skin lesion. The search area for the lesion was minimized to expedite processing while retaining relevance. In the study [21], EfficientNet was proposed as a method to improve the accuracy and efficiency of CNNs by uniformly scaling all dimensions of the network, i.e., depth, width, and resolution, while keeping the model compact. Typically, ConvNets are scaled up or down by adjusting a single dimension: depth, width, or resolution. However, while scaling a single dimension can enhance accuracy, the accuracy gain tends to diminish for larger models. To optimize both accuracy and efficiency, it is crucial to scale all dimensions of the network, namely width, depth, and resolution, uniformly.

In the study [22], an automated classification method is introduced for cutaneous lesions in digital dermatoscopic images, specifically targeting melanoma detection. The method comprises two fundamental stages: first, a bounding box is cropped around the skin lesion within the input image using Mask R-CNN; the second stage performs classification using ResNet152.

The study [23] outlines the development of a fully automated approach for detecting and classifying skin lesions using machine learning and bespoke convolutional neural networks. The proposed methodology focuses primarily on pre-processing and classification. The HAM10000 dataset, a standard in the field comprising 10,015 skin lesion images distributed across seven categories, serves as the foundation of the work.

In the approach outlined in the study [24], methods are employed that obviate the need for explicit feature extraction or segmentation, i.e., separating lesions from the remainder of the image. These tasks are handled automatically by the layers and operations within the deep learning model. This approach replaces ad hoc CNN designs with pretrained, carefully crafted, and rigorously tested models.

3. Methodology

In this paper, the methodology comprises three phases: obtaining the dataset, preprocessing the data, and feature extraction and classification. The steps of the methodology are presented in detail in Figure 1. The implemented framework for the CNN-based skin image classifiers is executed in MATLAB.

Figure 1. Methodology overview

The skin image dataset was acquired from Kaggle [25]. It includes two classes (melanoma and non-melanoma), each with the same number of images. In the source dataset, the classes are divided into three folders (train_sep, test, valid) because the dataset was originally intended for training in Python. In this study, we merged these folders into a single folder using MATLAB, as summarized in Table 1.

Table 1. Characteristics of the dataset [25]

Types          Train_Sep   Test   Valid   Sum
Melanoma       5341        1781   1781    8903
Non-melanoma   5341        1781   1781    8903
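
As a concrete illustration, the folder merge can be sketched in MATLAB as follows; the root path and the assumption that each split folder contains Melanoma and Non-melanoma subfolders are ours, not spelled out in the source:

% Merge the Kaggle split (train_sep/test/valid) into one datastore.
% Folder names follow Table 1; the root path is a placeholder.
root   = 'skin_dataset';
splits = {'train_sep', 'test', 'valid'};
paths  = fullfile(root, splits);

% One imageDatastore over all three splits; class labels are taken
% from the Melanoma / Non-melanoma subfolder names.
imds = imageDatastore(paths, ...
    'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');

countEachLabel(imds)   % should report 8903 images per class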

In the dataset, 80% of the images are used to train the model and 20% are used for validation. The training process is performed in three stages to reduce the resources used.

In this paper, we provide a sufficient amount of data to overcome the disadvantages of feature extraction and to obtain high resolution. To this end, we conducted two experiments. In the first experiment, all images are resized from their original size to 224×224×3 so they can be processed by the proposed CNN models. Next, each class of the dataset is divided into 80% for training and 20% for testing. Finally, the training set is fed to four models (our LWNet model, GoogleNet, ResNet-18, and MobileNet-v2) for training. In the second experiment, we applied image enhancement techniques to improve the quality of the input images before the same training procedure.
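
Continuing the sketch above, the split and the experiment-2 preprocessing might look as follows in MATLAB. The paper does not name the exact enhancement operator, so the per-channel contrast stretch below (imadjust with stretchlim) is purely an illustrative assumption:

% 80/20 split per class, with on-the-fly resizing to 224x224x3.
[imdsTrain, imdsTest] = splitEachLabel(imds, 0.8, 'randomized');
augTrain = augmentedImageDatastore([224 224 3], imdsTrain);
augTest  = augmentedImageDatastore([224 224 3], imdsTest);

% Experiment 2 pre-step: write an enhanced copy of the dataset.
% The enhancement operator here (per-channel contrast stretch) is
% an assumption; the paper only states that features were enhanced.
outDir = 'skin_dataset_enhanced';
for k = 1:numel(imds.Files)
    I = readimage(imds, k);
    J = imadjust(I, stretchlim(I));
    dst = replace(imds.Files{k}, root, outDir);
    if ~exist(fileparts(dst), 'dir'), mkdir(fileparts(dst)); end
    imwrite(J, dst);
end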

3.1 CNN models

In this paper, we used two types of DCNN models, pre-trained networks and the LWNet deep neural network, to compare the accuracy of our model. Moreover, the proposed classifier systems are intended for low-complexity devices; therefore, the 1000-class output is reduced to 2 classes for all DCNNs. For the pretrained deep neural networks, we utilize image classification networks that have previously been trained to extract useful characteristics from real-world images and adapt them to learn a new task. The ImageNet database [26] is used to train the bulk of the pre-trained networks [27]. These networks can categorize photographs into 1,000 different object categories, including keyboards, coffee cups, pencils, and several animals.

GoogleNet, from Google, won the ILSVRC 2014 competition with a top-5 error rate of 6.67%, very nearly on par with human performance; indeed, surpassing GoogleNet's accuracy proved rather challenging for the challenge's human evaluators and required some instruction. GoogleNet is a CNN 22 layers deep. The network can be loaded in a pretrained state trained on either the Places365 [27] or ImageNet [28] dataset. In contrast to the network trained on ImageNet, the Places365 network classifies images into 365 distinct location types, including field, park, runway, and lobby. GoogleNet has thus learned feature representations for a wide variety of images. The input image size for each of the pretrained networks is 224 by 224.

ResNet-18 uses skip connections, or shortcuts, that bypass some layers. Its main structural unit is the residual building block; the network consists of 71 layers and 78 connections organized into 8 residual building blocks [29, 30]. The convolutional layer is bypassed using a form of shortcut [31], and the input and output vectors can be added directly after the convolutional layer via the rectified linear unit (ReLU) activation function [32]. The ResNet-18 model was introduced in 2015 [33]. Moreover, it has structures resembling the brain's cerebral cortex [34].

MobileNet-v2 is 53 layers deep, with 3.5 million parameters and a model size of 13 MB [35]. It is an improvement on MobileNet-v1, which was designed for embedded applications, trading a small loss in accuracy for simplicity. The performance of both designs has been evaluated on ImageNet, and both can classify images into up to 1,000 object categories.
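
For reference, the three pretrained backbones can be loaded and inspected in MATLAB as below; each call assumes the corresponding Deep Learning Toolbox support package is installed:

% Load the three ImageNet-pretrained backbones used in this work.
nets = {googlenet, resnet18, mobilenetv2};

for k = 1:numel(nets)
    net = nets{k};
    fprintf('%-12s  input %s, %d layers\n', net.Layers(1).Name, ...
        mat2str(net.Layers(1).InputSize), numel(net.Layers));
end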

3.2 Proposed model (LWNet)

The LWNet model adheres to the fundamental design of CNNs. Training a convolutional network from scratch is a difficult task that can often take months [36, 37]; it is therefore preferable to compare the suggested deep learning technique against pre-trained classifiers rather than building every classifier from scratch. Three pretrained models (GoogleNet, ResNet-18, and MobileNet-v2) served as the baselines for this analysis.

In this model, we used five hidden convolutional blocks. Each block contains additional layers (batch normalization, leaky ReLU, and max-pooling) to address weight scattering and extract the important features from the images. The convolutional neural network built from these blocks is shown in Figure 2, and its details are presented in Table 2.

Table 2. Details of the proposed LWNet model

Name of Layer         Decimation   # of Filters   Padding   Stride
Input                 224×224×3
Conv1                 3×3          8              same
batch normalization
leaky ReLU            0.01
max-pooling           2×2                                   2×2
Conv2                 3×3          16             same
batch normalization
leaky ReLU            0.01
max-pooling           2×2                                   2×2
Conv3                 3×3          32             same
batch normalization
leaky ReLU            0.01
max-pooling           2×2                                   2×2
Conv4                 3×3          16             same
batch normalization
leaky ReLU            0.01
max-pooling           2×2                                   2×2
Conv5                 3×3          8              same
batch normalization
leaky ReLU            0.01
max-pooling           2×2                                   2×2
Full Connect          2
Softmax               2
Classification        Melanoma and Non-Melanoma

Figure 2. LWNet CNN architecture
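
Table 2 maps directly onto a MATLAB layer array. The following sketch reconstructs the LWNet stack from the table; it is our reading of Table 2, not released code:

% LWNet layer stack reconstructed from Table 2: five conv blocks
% with 8-16-32-16-8 filters of size 3x3, each followed by batch
% normalization, leaky ReLU (0.01), and 2x2/stride-2 max pooling.
filters = [8 16 32 16 8];
layers  = imageInputLayer([224 224 3]);
for k = 1:numel(filters)
    layers = [layers
        convolution2dLayer(3, filters(k), 'Padding', 'same')
        batchNormalizationLayer
        leakyReluLayer(0.01)
        maxPooling2dLayer(2, 'Stride', 2)];
end
layers = [layers
    fullyConnectedLayer(2)     % two output classes
    softmaxLayer
    classificationLayer];      % Melanoma vs Non-Melanoma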

Transfer learning is used in the pretrained classifiers' CNN models to increase learning effectiveness [38-41]. Transfer learning is the process of transferring and reusing knowledge acquired while solving one problem to address another. Two components are required to execute transfer learning:

  • A network architecture, represented as an array of layers. This is created by modifying a pre-existing network, as with the transfer learning models (GoogleNet, ResNet-18 & MobileNet-v2), or by defining a new Lightweight Convolutional Neural Network (LWCNN).
  • A dataset (skin images) with known labels to be utilized as training data.

The trainNetwork function takes these two components as inputs and outputs the trained network; a sketch of this step follows.
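
As a minimal sketch, the GoogleNet variant might be assembled and trained as below; the final-layer names ('loss3-classifier', 'output') are those of MATLAB's googlenet, and the training options beyond the 10 epochs stated in Section 4 are assumptions. ResNet-18 and MobileNet-v2 follow the same pattern with their own final-layer names:

% Replace GoogleNet's 1000-class head with a 2-class head
% (Melanoma vs Non-Melanoma), then train on the skin images.
net    = googlenet;
lgraph = layerGraph(net);
lgraph = replaceLayer(lgraph, 'loss3-classifier', ...
    fullyConnectedLayer(2, 'Name', 'fc_skin'));
lgraph = replaceLayer(lgraph, 'output', ...
    classificationLayer('Name', 'out_skin'));

opts = trainingOptions('sgdm', ...
    'MaxEpochs', 10, ...               % 10 epochs, as in Section 4
    'ValidationData', augTest, ...     % assumed validation setup
    'Plots', 'training-progress');

trainedNet = trainNetwork(augTrain, lgraph, opts);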

4. Experimental Results

This section covers the training and testing outcomes of the CNN models. For the proposed classifier, 7,122 skin scan images per class (80% of the 8,903 in Table 1) are used for training, with 1,780 images per class used for testing. Ten epochs are used for all CNN models; thus there are 445 iterations per epoch and 4,450 iterations in total. The performance details for experiments (1) and (2) are shown in Tables 3 and 4 and Figures 3-10, respectively.
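
The stated counts are internally consistent with a mini-batch size of 32, a value the paper does not report; the check below makes that inference explicit:

% Iterations per epoch = floor(#training images / mini-batch size).
% 2 classes x 7122 images = 14244 training images; the reported 445
% iterations per epoch imply a mini-batch size of 32 (our inference,
% not a value stated in the paper).
nTrain        = 2 * 7122;                    % 14244
miniBatch     = 32;
itersPerEpoch = floor(nTrain / miniBatch);   % 445
totalIters    = itersPerEpoch * 10;          % 4450 over 10 epochs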

Figures 3-6 and 7-10 show the accuracy and loss as training progresses in experiments 1 and 2, respectively. As training progresses, accuracy tends to increase until it reaches a saturation point, after which only minor fluctuations occur. Conversely, loss decreases until it reaches a saturation point, indicating diminishing improvement in model performance. In essence, accuracy reflects how well the model performs, while loss measures its degree of error. A good model typically has high accuracy and low loss.

A confusion matrix computed on the training and validation datasets displays how well the model predicts each class in supervised learning. The proposed model predicts the classes of the validation test data for both classes; the predictions for each class are displayed in the columns of the matrix, while the actual occurrences of each class are represented in the rows. Figures 3-10 show the training accuracy and loss for all experimental models. The performance metrics used in this research are the commonly utilized measures given in Eqs. (1)-(5), namely accuracy, sensitivity, specificity, precision, and F-measure, as detailed in the references [42, 43]; results are shown in Tables 5-12 for all experimental models.

Table 3. Comparison of models in experiment (1)

Models         Ac-Train   Ac-Test   Time (min)   # of Epochs
LWNet          97.30      88.43     48.28        10
GoogleNet      96.22      92.95     155.20       10
ResNet-18      96.61      93.82     323.17       10
MobileNet-v2   96.32      86.74     49.55        10

Table 4. Comparison of models in experiment (2)

Models         Ac-Train   Ac-Test   Time (min)   # of Epochs
LWNet          99.18      91.05     22.54        10
GoogleNet      98.00      95.45     90.27        10
ResNet-18      97.15      95.10     194.38       10
MobileNet-v2   97.81      89.44     50.22        10

Figure 3. LWNet in experiment (1) accuracy and training loss

Figure 4. GoogleNet in experiment (1) accuracy and training loss

Figure 5. ResNet-18 in experiment (1) accuracy and training loss

Figure 6. MobileNet-v2 in experiment (1) accuracy and training loss

Figure 7. Our model (LWNet) in experiment (2) accuracy and training loss

Figure 8. GoogleNet in experiment (2) accuracy and training loss

Figure 9. ResNet-18 in experiment (2) accuracy and training loss

Figure 10. MobileNet-v2 in experiment (2) accuracy and training loss

Accuracy $=\frac{TP+TN}{TP+FP+TN+FN}$    (1)

Sensitivity $=\frac{TP}{TP+FN}$    (2)

Specificity $=\frac{TN}{TN+FP}$    (3)

Precision $=\frac{TP}{TP+FP}$    (4)

$F_{\text{measure}}=\frac{2(\text{precision} \times \text{sensitivity})}{\text{precision}+\text{sensitivity}}$    (5)
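
The following MATLAB helper is a minimal sketch of Eqs. (1)-(5) computed from a confusion matrix; the function and field names are ours, and the first class returned by confusionmat is treated as the positive class:

% Per-class metrics from true vs. predicted labels, per Eqs. (1)-(5).
function m = classMetrics(yTrue, yPred)
    C  = confusionmat(yTrue, yPred);   % rows: actual, cols: predicted
    TP = C(1,1); FN = C(1,2); FP = C(2,1); TN = C(2,2);
    m.accuracy    = (TP + TN) / (TP + FP + TN + FN);   % Eq. (1)
    m.sensitivity = TP / (TP + FN);                    % Eq. (2)
    m.specificity = TN / (TN + FP);                    % Eq. (3)
    m.precision   = TP / (TP + FP);                    % Eq. (4)
    m.fMeasure    = 2 * (m.precision * m.sensitivity) / ...
                    (m.precision + m.sensitivity);     % Eq. (5)
end

With the networks above, classMetrics(imdsTest.Labels, classify(trainedNet, augTest)) would produce per-class entries of the kind reported in Tables 5-12.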

Table 5. Results of LWNet in experiment-1 [Accuracy (AC), Precision (PR), Sensitivity (Sen), Specificity (SP), and F-measure (F_meas.)]

Class          AC   PR   Sen   SP   F_meas.
Melanoma       97   96   99    96   97
Non-Melanoma   97   99   96    99   97

Table 6. Results of GoogleNet in experiment-1

Class          AC   PR   Sen   SP   F_meas.
Melanoma       97   97   96    97   97
Non-Melanoma   97   96   97    96   97

Table 7. Results of ResNet-18 in experiment-1

Class          AC   PR   Sen   SP   F_meas.
Melanoma       96   97   95    97   96
Non-Melanoma   96   95   97    95   96

Table 8. Results of MobileNet-v2 in experiment-1

Class          AC   PR    Sen   SP    F_meas.
Melanoma       96   93    100   93    96
Non-Melanoma   96   100   93    100   96

Table 9. Results of LWNet in experiment-2

Class          AC   PR   Sen   SP   F_meas.
Melanoma       99   99   99    99   99
Non-Melanoma   99   99   96    99   99

Table 10. Results of GoogleNet in experiment-2

Class          AC   PR   Sen   SP   F_meas.
Melanoma       98   97   99    97   98
Non-Melanoma   98   99   97    99   98

Table 11. Results of ResNet-18 in experiment-2

Class          AC   PR   Sen   SP   F_meas.
Melanoma       97   98   96    98   97
Non-Melanoma   97   96   98    96   97

Table 12. Results of MobileNet-v2 in experiment-2

Class          AC   PR   Sen   SP   F_meas.
Melanoma       98   97   98    97   98
Non-Melanoma   98   98   97    98   98

5. Conclusions

Deep learning has gained immense popularity in the field of medicine, particularly in areas such as ophthalmology, dermatology, and radiology. In this work, we presented a novel framework for classifying melanoma skin cancer using an LWCNN. The well-known models GoogleNet, ResNet-18, and MobileNet-v2 were used to validate our results. In our experiments, we utilized a two-stage approach: the first stage used the original datasets, while the second used the dataset after enhancing the features of the skin scan images. The experimental results indicated that the second stage achieved better accuracy, demonstrating that preprocessing the skin scan dataset plays a crucial role in enhancing the classification output. In conclusion, our work highlights the importance of preprocessing techniques in skin scan analysis, which can ultimately assist medical professionals in the early detection of melanoma. As future research, we are exploring more effective techniques and ways to speed up data analysis to further improve classification accuracy, which would be incredibly valuable in assisting medical professionals in the early detection of melanoma.

Acknowledgment

The authors would like to thank Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R161), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

References

[1] Civelek, Z., Kfashi, M.H.S. (2022). An improved deep CNN for an early and accurate skin cancer detection and diagnosis system. International Journal of Engineering Research and Development, 14(2): 721-734. https://doi.org/10.29137/umagd.1116295 

[2] Bhatt, H., Shah, V., Shah, K., Shah, R. (2022). State-of-the-art machine learning techniques for melanoma skin cancer detection and classification: A comprehensive review. Intelligent Medicine, In Press. https://doi.org/10.1016/j.imed.2022.08.004

[3] Reddy, N.D. (2018). Classification of dermoscopy images using deep learning. arXiv preprint arXiv:1808.01607. https://doi.org/10.48550/arXiv.1808.01607

[4] Sagar, A., Jacob, D. (2021). Convolutional neural networks for classifying melanoma images. bioRxiv, 2020-05. https://doi.org/10.1101/2020.05.22.110973

[5] Agarwal, K., Singh, T. (2022). Classification of skin cancer images using convolutional neural networks. arXiv preprint arXiv:2202.00678. https://doi.org/10.48550/arXiv.2202.00678

[6] Diab, A.G., Fayez, N., El-Seddek, M.M. (2022). Accurate skin cancer diagnosis based on convolutional neural networks. Indonesian Journal of Electrical Engineering and Computer Science, 25(3): 1429-1441. http://doi.org/10.11591/ijeecs.v25.i3.pp1429-1441

[7] Aljohani, K., Turki, T. (2022). Automatic classification of melanoma skin cancer with deep convolutional neural networks. AI, 3(2): 512-525. https://doi.org/10.3390/ai3020029

[8] Samia, B., Boudjelal, M., Lézoray, O. (2021). Skin lesion classification using convolutional neural networks based on multi-features extraction. In 19th International Conference on Computer Analysis of Images and Patterns, Nicosia (Virtual), Cyprus, pp. 466-475. https://doi.org/10.1007/978-3-030-89128-2_45

[9] Alazzam, M.B., Alassery, F., Almulihi, A. (2021). Diagnosis of melanoma using deep learning. Mathematical Problems in Engineering, 1-9. https://doi.org/10.1155/2021/1423605

[10] Arif, M., Philip, F. M., Ajesh, F., Izdrui, D., Craciun, M.D., Geman, O. (2022). Automated detection of nonmelanoma skin cancer based on deep convolutional neural network. Journal of Healthcare Engineering, Article ID: 6952304. https://doi.org/10.1155/2022/6952304

[11] Lopez, A.R., Nieto, X.G., Burdick, J., Marques, O. (2022). Skin lesion classification from dermoscopic images using deep learning techniques. Scientific Reports, 12(1): 18134. https://doi.org/10.1038/s41598-022-22644-9

[12] Adegun, A.A., Viriri, S. (2019). Deep learning-based system for automatic melanoma detection. IEEE Access, 8: 7160-7172. https://doi.org/10.1109/ACCESS.2019.2962812

[13] Ding, J., Song, J., Li, J., Tang, J., Guo, F. (2022). Two-stage deep neural network via ensemble learning for melanoma classification. Frontiers in Bioengineering and Biotechnology, 9: 1355. https://doi.org/10.3389/fbioe.2021.758495

[14] Refianti, R., Mutiara, A.B., Priyandini, R.P. (2019). Classification of melanoma skin cancer using convolutional neural network. International Journal of Advanced Computer Science and Applications, 10(3): 409-417. https://doi.org/10.14569/IJACSA.2019.0100353

[15] Yunendah N.F., Caecar Pratiwi, N.K., Pramudito, M.A., Ibrahim, N. (2020). Convolutional neural network (CNN) for automatic skin cancer classification system. In IOP Conference Series: Materials Science and Engineering, 982(1): 012005. https://doi.org/10.1088/1757-899X/982/1/012005 

[16] Rezaoana, N., Hossain, M.S., Andersson, K. (2020). Detection and classification of skin cancer by using a parallel CNN model. In 2020 IEEE International Women in Engineering (WIE) Conference on Electrical and Computer Engineering (WIECON-ECE), Bhubaneswar, India, pp. 380-386. https://doi.org/10.1109/WIECON-ECE52138.2020.9397987

[17] Haghighi, S.N., Danyali, H., Helfroush, M.S., Karami, M.H. (2020). A deep convolutional neural network for melanoma recognition in dermoscopy images. In 2020 10th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, pp. 453-456. https://doi.org/10.1109/ICCKE50421.2020.9303684

[18] Bhimavarapu, U., Battineni, G. (2022). Skin lesion analysis for melanoma detection using the novel deep learning model fuzzy GC-SCNN. Healthcare, 10(5): 962. https://doi.org/10.3390/healthcare10050962

[19] Kumar, S.M., Kumar, J.R., Gopalakrishnan, K. (2020). Melanoma skin cancer classification using deep learning convolutional neural network. Medico-legal Update, 20(3): 351-355.

[20] Daghrir, J., Tlig, L., Bouchouicha, M., Sayadi, M. (2020). Melanoma skin cancer detection using deep learning and classical machine learning techniques: A hybrid approach. In 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP), Sousse, Tunisia, pp. 1-5. https://doi.org/10.1109/ATSIP49331.2020.9231544

[21] Jaisakthi, S.M., Mirunalini, P., Aravindan, C., Appavu, R. (2023). Classification of skin cancer from dermoscopic images using deep neural network architectures. Multimedia Tools and Applications, 82(10): 15763-15778. https://doi.org/10.1007/s11042-022-13847-3

[22] Acosta, M.F.J., Tovar, L.Y.C., Garcia‑Zapirain, M.B., Percybrooks, W.S. (2021). Melanoma diagnosis using deep learning techniques on dermatoscopic images. BMC Medical Imaging, 21(1): 1-11. https://doi.org/10.1186/s12880-020-00534-8

[23] Shetty, B., Fernandes, R., Rodrigues, A.P., Chengoden, R., Bhattacharya, S., Lakshmanna, K. (2022). Skin lesion classification of dermoscopic images using machine learning and convolutional neural network. Scientific Reports, 12(1): 18134. https://doi.org/10.1038/s41598-022-22644-9

[24] Fraiwan, M., Faouri, E. (2022). On the automatic detection and classification of skin cancer using deep transfer learning. Sensors, 22: 4963. https://doi.org/10.3390/s22134963

[25] Tschandl, P., Rosendahl, C., Kittler, H. (2018). The HAM10000 dataset, a large collection of multisource dermatoscopic images of common pigmented skin lesions. Sci Data, 5: 180161. https://doi.org/10.1038/sdata.2018.161

[26] ImageNet Image Database. https://www.image-net.org/, accessed on 25 September 2021.

[27] Zhou, B., Khosla, A., Lapedriza, A., Torralba, A., Oliva A. (2016). Places: An image database for deep scene understanding. arXiv preprint arXiv:1610.02055. https://doi.org/10.48550/arXiv.1610.02055

[28] Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., Torralba, A. (2017). Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6): 1452-1464. https://doi.org/10.1109/TPAMI.2017.2723009

[29] Ali, H., Kabir, S., Ullah, G. (2021). Indoor scene recognition using ResNet-18. International Journal of Research Publications, 69(1): 1-7. https://doi.org/10.47119/IJRP100691120211667

[30] Zhu, D., Yang, Y., Zhai, W., Ren, F., Cheng, C., Huang, M. (2020). Geosot grid remote sensing intelligent interpretation model based on fine-tuning ResNet-18: A case study of construction land. In IGARSS 2020, IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, pp. 2535-2538. https://doi.org/10.1109/IGARSS39084.2020.9323864

[31] Sarwinda, D., Paradise, R., Bustamam, A., Anggia, P. (2021). Deep learning in image classification using residual network (ResNet) variants for detection of colorectal cancer. Procedia Computer Science, 179: 423-431. https://doi.org/10.1016/j.procs.2021.01.025

[32] Zeng, F., Hu, W., He, G., Yue, C. (2021). Imbalanced Thangka image classification research based on the ResNet network. Journal of Physics: Conference Series, 1748(4): 42-54. https://doi.org/10.1088/1742-6596/1748/4/042054

[33] He, K., Zhang, X., Ren, S., Sun, J. (2015). Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385. http://arxiv.org/abs/1512.03385

[34] Schaetti, N. (2018). Character-based Convolutional Neural Network and ResNet18 for Twitter Author Profiling Notebook for PAN at CLEF 2018. https://pan.webis.de/.

[35] Saad, A., Kamil, I.S., Alsayat, A., Elaraby, A. (2022). Classification COVID-19 based on enhancement X-ray images and low complexity model. Computers, Materials and Continua, 72(1): 561-576. https://doi.org/10.32604/cmc.2022.023878

[36] Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. https://doi.org/10.48550/arXiv.1704.04861

[37] Swati, Z.N.K., Zhao, Q., Kabir, M., Ali, F., Ali, Z., Ahmed, S., Lu, J. (2019). Brain tumors classification for MR images using transfer learning and fine-tuning. Computerized Medical Imaging and Graphics, 75: 34-46. https://doi.org/10.1016/j.compmedimag.2019.05.001

[38] Magotra, A., Kim, J. (2019). Transfer learning for image classification using Hebbian plasticity principles. In CSAI: 3rd International Conference on Computer Science and Artificial Intelligence, pp. 233-238. https://doi.org/10.1145/3374587.3375880

[39] Weiss, K., Khoshgoftaa, T.M., Wang, D.D. (2016). A survey of transfer learning. Journal of Big Data, 3(1): 1-40. https://doi.org/10.1186/s40537-016-0043-6

[40] Pan, S.J. (2010). A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10): 1345-1359. https://doi.org/10.1109/TKDE.2009.191

[41] Yang, F., Zhang, W., Tao, L., Ma, J. (2020). Transfer learning strategies for deep learning-based PHM algorithms. Applied Sciences, 10(7): 2361. https://doi.org/10.3390/app10072361

[42] Davis, J., Goadrich, M. (2006). The relationship between Precision-Recall and ROC curves. In 23rd International Conference on Machine Learning, USA, pp. 233-240. https://doi.org/10.1145/1143844.1143874

[43] Powers, D.M.W. (2011). Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv preprint arXiv:2010.16061. https://doi.org/10.48550/arXiv.2010.16061