© 2026 The author. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).
OPEN ACCESS
Delayed diagnosis and treatment of melanoma are associated with poorer prognosis. The ABCDE criteria (asymmetry, border irregularity, color variation, diameter, and evolution) are widely used to support early clinical assessment, enabling earlier intervention and potentially reducing the risk of disease progression. In this study, we propose a convolutional neural network (CNN)-based approach that integrates shape, color, texture, rotation, and feature extraction as input features to classify melanoma and non-melanoma with higher accuracy. The dataset consists of dermoscopic images divided into two classes: melanoma and non-melanoma. A total of 95 images are used for training and 60 images for testing, with a balanced class distribution in the testing set. Experimental results show that the use of rotation during preprocessing improves classification performance. The proposed method achieves 94.6% accuracy, compared to 85.4% without rotation. To assess statistical reliability, a 95% confidence interval is reported, indicating that the performance improvement remains consistent despite the limited dataset size.
melanoma detection, convolutional neural network, rotation invariance, binary classification, data augmentation, skin cancer
Melanoma is a type of skin cancer that originates from melanocytes, the skin cells that produce the pigment melanin. It is the most aggressive form of skin cancer and, because of its ability to spread (metastasize) to other areas of the body, can cause serious illness if not treated quickly. Although the exact origin of melanoma is still unknown, the most widely accepted theory is that it arises from a combination of genetic predisposition and ultraviolet (UV) exposure (sun or sunbed). Melanoma may be more likely to develop in people who are at risk: those with lighter skin types, those who have had sunburn in the past, those with multiple or unusual moles, those with a weakened immune system, or those with a family history of melanoma. The majority of melanomas begin as a new mole or as an alteration in an existing mole that is asymmetric or discolored, with an irregular or jagged edge. Melanomas can vary in size but grow progressively over time. Nonetheless, some melanomas can start in a region of skin that appears otherwise normal. Early identification is crucial to the successful treatment of melanoma.
A significant factor in assessing a skin lesion's risk of melanoma is its color. When physicians evaluate for melanoma, they note the presence of different shades of color as an important diagnostic feature. Melanomas typically show significant color variation within a lesion, which can include different shades of the same color as well as entirely different colors. To classify skin lesions properly, the model must account for the full range of lesion colors (see Figure 1). In this study, the classification task focuses on distinguishing melanoma from non-melanoma skin lesions.
Figure 1. The difference between basal cell, squamous cell, melanoma, and Merkel cell
A mole or skin lesion with various colors present, or numerous colors distributed unevenly, may be a symptom of melanoma [1]. It is crucial to remember that relying solely on color for diagnosis is insufficient; other traits, including asymmetry, border irregularity, color variation, diameter, and evolution (the ABCDE criteria), are also considered [2]. A dermatologist or other healthcare expert must perform a thorough examination for a precise diagnosis, as color alone is insufficient [3]. In cases of melanoma, early detection and prompt treatment offer the highest prospects for excellent outcomes, and melanoma can cause death if it is not identified and treated [4]. Early identification and treatment facilitate the implementation of appropriate protocols, reducing the risk of metastasis and improving patient outcomes [5]. The incidence of melanoma is relatively low compared to other dermatologic diseases, yet it causes more than 81% of skin cancer deaths [6]. The prevalence of skin cancer, of which melanoma is the deadliest type, is a global burden that worsens each year [7]. Because melanoma has a high mortality rate, it is important to find it early so that it can be treated appropriately and quickly [8].
Dermoscopy was developed to enable more efficient and effective melanoma detection. Using dermoscopy, skin lesions can be viewed at higher magnification with adequate lighting, providing a clearer image [9]. When surface reflections are removed, skin lesions become easier to view. However, several obstacles remain for automatic detection of melanoma from dermoscopy images [10]. First, there is often very little contrast between a melanoma and the surrounding normal skin, making accurate lesion segmentation difficult [11]. Second, there is considerable visual overlap between melanoma and other non-melanoma lesions, making them difficult to differentiate [12].
Despite advances in deep learning for skin-lesion analysis, differentiating melanoma from non-melanoma skin lesions remains difficult: the two classes are visually similar, annotated datasets are scarce, and a single lesion can appear quite different when imaged at different orientations. Because large-scale databases are lacking, many of the studies conducted so far do not transfer readily to limited-data scenarios.
Despite the challenges of processing skin lesion images, researchers have developed various autonomous melanoma detection systems that deliver fast, accurate results. Diagnostic accuracy is hindered by common artifacts, such as hair within or around the lesion, as well as variations in lesion size, color, shape, and vascularity. Image disturbances such as hair, blood, or oil droplets, shown in Figure 2 [8], make melanoma particularly difficult to detect. The classification process can be performed using a convolutional neural network (CNN) [13].
Figure 2. Common image interferences in skin lesion detection: hair, blood, and oil droplets [8]
The zone of interest for further processing in a dermoscopy image of a skin lesion is a single, confined spot, typically recognizable by its different color or texture from the surrounding healthy skin. Segmenting the lesion entails dividing it into the affected and unaffected portions of the skin. Lesion segmentation is a crucial step in the analysis of dermoscopy images because it enables the detection of numerous global morphological aspects unique to the lesion and simultaneously creates a constrained region for the later segmentation of various local clinical features [14].
The border or boundary of the segmented region also offers features that can be used to analyze the lesion. Correctly identifying the non-lesional area, while avoiding artifacts present in some photographs, additionally provides a zone of normal skin from which relative colors and other helpful characteristics can be calculated.
Figure 3 is a basic block diagram for the detection of melanoma skin cancer. Raw images are usually pre-processed to eliminate the possibility of defects in skin image recording. The second process is quite important, namely segmentation [15]. The segmentation process will separate the image background from the melanoma-affected skin. The feature extraction process is used to see the color or find the border of the melanoma. The classification process determines whether the segmented images contain melanoma. Support Vector Machine (SVM), K-Nearest Neighbors (K-NN), Naïve Bayes Classifier, Radial Basis Function (RBF), Classification and Regression Trees (CART), feature fusion, and Artificial Neural Networks (ANN) were among the classification methods employed for cancer diagnosis. [16, 17]. Previous studies have used the Color Correlogram and SVM framework to detect melanoma [18, 19].
Figure 3. Basic processes for the detection of melanoma skin cancer
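To make the block diagram in Figure 3 concrete, the sketch below outlines the four stages (preprocessing, segmentation, feature extraction, classification) as Python functions. The specific operations (Gaussian smoothing, Otsu thresholding, colour statistics) and the use of OpenCV and a scikit-learn SVM are illustrative assumptions, not the exact methods of the cited studies.

```python
# Minimal sketch of the basic melanoma-detection pipeline in Figure 3.
import numpy as np
import cv2                       # assumption: OpenCV for image handling
from sklearn.svm import SVC      # assumption: SVM, one of the classifiers listed in the text

def preprocess(image):
    # Reduce recording defects, e.g. smooth noise before segmentation.
    return cv2.GaussianBlur(image, (5, 5), 0)

def segment(image):
    # Separate the lesion from the background with Otsu thresholding on the gray image.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask

def extract_features(image, mask):
    # Simple colour statistics inside the lesion region as an illustrative feature vector.
    pixels = image[mask > 0]
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

# Classification: fit any of the classical models mentioned in the text, e.g.
# clf = SVC(kernel="rbf").fit(X_train, y_train)   # X_train, y_train assumed prepared
```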
Ottom [20] built a computer model to detect and predict melanoma. Figure 4 shows the preprocessing steps performed for segmentation, including bilateral filtering, grayscale conversion, thresholding, and edge detection.
After preprocessing, the training dataset is fed into the CNN model. Figure 5 depicts the CNN architecture, which consists of four fully connected layers, three pooling layers, and three convolutional layers. The process consists of 2D convolution, 2D pooling, flattening, and dense layers.
Ottom [20] employed a CNN-based deep learning architecture to analyze melanoma skin cancer images. Using 2,000 images from the International Skin Imaging Collaboration (ISIC) dataset, his methodology involved image preprocessing to isolate the region of interest (ROI), followed by data augmentation to expand the dataset to 3,000 images per class. The resulting dataset was used to train a deep learning architecture comprising convolutional, pooling, and fully connected layers. The model achieved an accuracy of 0.74 on the test set.
In the study [21], the training pipeline for a CNN model consisted of the following steps: reading the epoch and batch-size settings, reading and resizing the sample dataset images, generating an augmented dataset, creating the CNN model, training the CNN on the augmented dataset, saving the trained model, and plotting the training results (Figure 6).
There are two main steps in classifying melanoma skin cancer images with CNN frameworks: first, train on a dataset to develop a machine learning (ML) model [22]; second, use the trained model to predict the class of new images and evaluate its accuracy from the outputs.
Overall, training accuracy in that experiment reached a maximum of 93%. The accuracy was calculated from 154 images: 91% at 50 training epochs and 93% at 100 training epochs. Testing after training with 176 images achieved 95% accuracy at 50 epochs and 100% accuracy at 100 epochs [21]. Such data can be collected in various ways, for example with remote sensors in IoT systems [23].
In 2023, a CNN-based ensemble learning approach was used for melanoma detection [24]. The following ensemble learning methods have been used to train classifiers on melanoma images: AdaBoost, random forests, voting classifiers with CNNs, boosted SVMs, and boosted Gaussian mixture models (GMMs). These methods address key challenges such as image misclassification, overfitting to the training dataset, and overall accuracy improvement.
Figure 7 is an example of the CNN architecture created by Alshawi and Musawi [24], which includes: Input Image, Image Synthesis, Convolution, Rectified Linear Unit (ReLU), Batch Normalization (BN), Dropout, Flattening, Dense Layer, and Softmax Output. The Boosted SVM and Adaboost classifiers both achieve higher accuracy than the Boosted GMM, Random Forest, or Voted Classifier Methods on the dataset's six classes. The ensemble CNN achieves an accuracy of 98.67%, which is better than that of Boosted GMM, Random Forest, and Voted Classifier Methods. The ensemble classifiers take a considerable amount of time to execute, but they are easier to train than the more complicated network.
The ABCDE rule is a commonly accepted clinical set of parameters for identifying skin lesions that may be consistent with melanoma. Dermatologists and radiologic services typically assess lesions using parameters that distinguish melanoma (lesions) from non-malignant moles (nevus). Each letter in the ABCDE rule has meaning in describing visual characteristics observable in photos or during physical exams [25].
3.1 Asymmetry
Benign moles tend to be symmetrical; if cut in half, the left side would closely resemble the right. In contrast, melanoma lesions are often asymmetrical, as the two sides of the lesion are not highly similar. This difference is typically due to uncontrolled growth patterns in malignant cells and the irregularities they cause.
3.2 Border
Typically, benign moles have smooth edges and form a distinct, easily identifiable outline. Melanoma lesions tend to have irregular, jagged, or poorly defined edges; for example, they may show scalloping, notching, or poorly defined outlines due to abnormal, rapid cell growth into adjacent skin tissue.
3.3 Colour
The majority of moles appear as a single solid colour, generally brown, whereas melanoma may show multiple colours within the same lesion, including brown, black (or charcoal), white, red, blue, or grey. Benign moles rarely show variation in colour (or melanin distribution); such variation can indicate that the cells of the mole have changed due to cancerous activity.
3.4 Diameter
Another criterion for determining whether a mole is a melanoma is its diameter. Typical benign moles rarely exceed 6 mm (roughly the size of a pencil eraser), while melanomas are typically larger than 6 mm. The size of a mole or lesion is assessed in conjunction with the other criteria of the ABCDE rule.
3.5 Evolution (Change)
The term evolution refers to changes in a mole's appearance or characteristics over time. This includes changes in size, shape, colour, height/flatness (elevation), and/or other symptoms (such as itching and/or bleeding). Monitoring moles over time to detect changes in appearance (or characteristics) is very important for diagnosing melanoma.
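As an illustration only, the sketch below shows how two of these clinical cues, asymmetry and colour variety, can be turned into simple numerical features from a binary lesion mask and its RGB image. The scoring functions (asymmetry_score, colour_count) are hypothetical and are not the features used in this study.

```python
# Hypothetical quantification of two ABCDE cues from a segmented lesion.
import numpy as np

def asymmetry_score(mask):
    # Compare the lesion mask with its horizontal mirror image:
    # 0 = perfectly symmetric, values near 1 = highly asymmetric.
    # Assumes the lesion is roughly centered in the crop.
    mask = mask.astype(bool)
    mirrored = mask[:, ::-1]
    overlap = np.logical_and(mask, mirrored).sum()
    return 1.0 - overlap / max(mask.sum(), 1)

def colour_count(image, mask, bins=4):
    # Count distinct coarse RGB colour bins inside the lesion; melanomas
    # tend to show more distinct colours than benign moles.
    pixels = image[mask.astype(bool)] // (256 // bins)
    return len(np.unique(pixels, axis=0))
```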
This research presented a CNN-based deep learning framework for the binary classification of melanoma and non-melanoma skin lesions; see the overall workflow of the proposed method in Figure 8. The overall method followed a four-stage process: image acquisition, image preprocessing, image rotation augmentation, and classification using the CNN.
Figure 8. Proposed CNN system architecture
The sample database included images of skin lesions, divided into two groups: melanoma and non-melanoma. In the complete dataset, 95 dermoscopic images were used for training and an additional 60 for testing. To ensure a fair comparison, the test set was balanced, with 30 melanoma and 30 non-melanoma images. Each image was labeled to ensure proper classification.
Preprocessing was applied to standardize the input images and improve model performance, ensuring that all images are consistent before being processed by the CNN model. Image rotation allowed the same lesion to be viewed from different angles. Since not all template images shared the same orientation, rotation was necessary for robust feature extraction [26]. The rotation process requires a rotation angle and a center point; the transformation equations below give the rotated coordinates (x2, y2) of a point (x1, y1) rotated about the center (x0, y0).
Rotation was applied incrementally to improve prediction accuracy [27]; the transformation of (x1, y1) into (x2, y2) is defined in Eqs. (1) and (2).
$x_2=\left(x_1-x_0\right) \cos (\theta)+\left(y_1-y_0\right) \sin (\theta)$ (1)
$y_2=-\left(x_1-x_0\right) \sin (\theta)+\left(y_1-y_0\right) \cos (\theta)$ (2)
When each image is rotated around the center point (x0, y0) = (0, 0), Eqs. (1) and (2) simplify to Eqs. (3) and (4).
$x_2=\left(x_1\right) \cos (\theta)+\left(y_1\right) \sin (\theta)$ (3)
$y_2=-\left(x_1\right) \sin (\theta)+\left(y_1\right) \cos (\theta)$ (4)
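A minimal sketch of the rotation step is given below: rotate_point applies Eqs. (1) and (2) directly to a single coordinate, while rotate_image warps a whole dermoscopic image (the 10-degree setting matches the rotation used in the experiments). The use of OpenCV and the choice to rotate the image about its centre rather than the origin are assumptions for illustration.

```python
# Sketch of the rotation transform and rotation-based augmentation.
import numpy as np
import cv2

def rotate_point(x1, y1, theta_deg, x0=0.0, y0=0.0):
    # Eqs. (1)-(2): rotate (x1, y1) by theta around the centre (x0, y0).
    t = np.deg2rad(theta_deg)
    x2 = (x1 - x0) * np.cos(t) + (y1 - y0) * np.sin(t)
    y2 = -(x1 - x0) * np.sin(t) + (y1 - y0) * np.cos(t)
    return x2, y2

def rotate_image(image, theta_deg=10):
    # Apply the same transform to every pixel via a 2x3 rotation matrix.
    h, w = image.shape[:2]
    matrix = cv2.getRotationMatrix2D((w / 2, h / 2), theta_deg, 1.0)
    return cv2.warpAffine(image, matrix, (w, h))
```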
The input layer comes first and takes the form of an input sequence x1, x2, ..., xn, where n is the total number of features in the dataset and each x is a feature. The convolution operation comes next.
The proposed CNN architecture is designed for binary classification of skin lesions. The network takes a 224 × 224 RGB image as input.
The architecture consists of convolutional layers (Eq. (5)), ReLU activations (Eq. (6)), pooling layers, and fully connected layers ending in a two-unit softmax output that produces the class probabilities P(yi):
$(f * g)(t)=\int_{-\infty}^{\infty} f(\tau) g(t-\tau) d \tau$ (5)
$f(x)=\max (0, x)$ (6)
$P\left(y_i\right)=\frac{e^{z_i}}{\sum_{j=1}^2 e^{z_j}}$
The model is designed to be lightweight to reduce overfitting due to the limited training data, and the training process aims to minimize classification error while maintaining generalization performance.
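A minimal Keras sketch of such a lightweight network is shown below: a 224 × 224 RGB input, three convolution blocks with ReLU activations (Eq. (6)) and max pooling, a small dense layer, and a two-unit softmax output corresponding to the melanoma / non-melanoma probabilities. The filter counts, dropout rate, Adam optimizer, and epoch/batch settings are assumptions, not the reported training configuration.

```python
# Illustrative lightweight CNN for binary skin-lesion classification.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model():
    model = models.Sequential([
        layers.Input(shape=(224, 224, 3)),                    # 224x224 RGB input
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),                                  # regularization against overfitting
        layers.Dense(2, activation="softmax"),                # melanoma vs non-melanoma probabilities
    ])
    model.compile(optimizer="adam",                           # assumed optimizer
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage with assumed arrays x_train (N, 224, 224, 3) and integer labels y_train:
# model = build_model()
# model.fit(x_train, y_train, epochs=50, batch_size=16, validation_split=0.1)
```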
The trained CNN models were evaluated by classifying new dermoscopic images as either melanoma or non-melanoma. The impact of rotation was investigated by classifying images in their original and rotated forms and comparing classification accuracy. Each model's performance was measured by its accuracy, precision, and recall, computed from its confusion matrix; 95% confidence intervals based on the Wilson score interval were reported to assess statistical reliability.
The dataset was divided into training and testing sets: 95 images were used for training and 60 for testing. The testing set was balanced, consisting of 30 melanoma and 30 non-melanoma images, to ensure a fair and unbiased evaluation. The images were obtained from the ISIC archive and selected based on label availability and image quality. Rotation-based data augmentation was applied only to the training set. Performance was assessed with precision, recall, and accuracy, computed using Eqs. (7)-(9) from the confusion matrix layout shown in Table 1 [28].
Precision $=\frac{T P}{T P+F P} \times 100 \%$ (7)
Recall $=\frac{T P}{T P+F N} \times 100\%$ (8)
Accuracy $=\frac{T P+T N}{T P+T N+F P+F N} \times 100\%$ (9)
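For reference, the short function below evaluates Eqs. (7)-(9) on confusion-matrix counts; applied to the counts reported later in Tables 3 and 5, it reproduces the precision, recall, and accuracy values quoted in the results.

```python
# Worked check of Eqs. (7)-(9) on the reported confusion-matrix counts.
def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp) * 100
    recall = tp / (tp + fn) * 100
    accuracy = (tp + tn) / (tp + tn + fp + fn) * 100
    return precision, recall, accuracy

print(metrics(26, 5, 4, 25))   # Table 3 -> (83.9, 86.7, 85.0)
print(metrics(29, 2, 1, 28))   # Table 5 -> (93.5, 96.7, 95.0)
```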
Table 1. Confusion matrix
|                         | Actual: Melanoma    | Actual: Non-Melanoma |
|-------------------------|---------------------|----------------------|
| Predicted: Melanoma     | TP (True Positive)  | FP (False Positive)  |
| Predicted: Non-Melanoma | FN (False Negative) | TN (True Negative)   |
Based on Table 2, with 60 images tested without rotation (θ = 0 degrees), 51 images were classified correctly and 9 incorrectly. The resulting confusion matrix is shown in Table 3.
Table 2. Experiment without rotation (rotation θ = 0 deg)
| No. | Category | Result | Answer | No. | Category | Result | Answer |
|-----|----------|--------|--------|-----|----------|--------|--------|
| 1   | M        | M      | True   | 31  | Non M    | Non M  | True   |
| 2   | M        | M      | True   | 32  | Non M    | Non M  | True   |
| 3   | M        | M      | True   | 33  | Non M    | Non M  | True   |
| 4   | M        | M      | True   | 34  | Non M    | Non M  | True   |
| 5   | M        | M      | True   | 35  | Non M    | Non M  | True   |
| 6   | M        | M      | True   | 36  | Non M    | Non M  | True   |
| 7   | M        | M      | True   | 37  | Non M    | Non M  | True   |
| 8   | M        | M      | True   | 38  | Non M    | Non M  | True   |
| 9   | M        | M      | True   | 39  | Non M    | Non M  | True   |
| 10  | M        | M      | True   | 40  | Non M    | Non M  | True   |
| 11  | M        | M      | True   | 41  | Non M    | M      | False  |
| 12  | M        | M      | True   | 42  | Non M    | Non M  | True   |
| 13  | M        | M      | True   | 43  | Non M    | Non M  | True   |
| 14  | M        | M      | True   | 44  | Non M    | Non M  | True   |
| 15  | M        | Non M  | False  | 45  | Non M    | M      | False  |
| 16  | M        | M      | True   | 46  | Non M    | Non M  | True   |
| 17  | M        | M      | True   | 47  | Non M    | Non M  | True   |
| 18  | M        | M      | True   | 48  | Non M    | Non M  | True   |
| 19  | M        | M      | True   | 49  | Non M    | Non M  | True   |
| 20  | M        | M      | True   | 50  | Non M    | Non M  | True   |
| 21  | M        | Non M  | False  | 51  | Non M    | M      | False  |
| 22  | M        | Non M  | False  | 52  | Non M    | Non M  | True   |
| 23  | M        | M      | True   | 53  | Non M    | Non M  | True   |
| 24  | M        | M      | True   | 54  | Non M    | Non M  | True   |
| 25  | M        | M      | True   | 55  | Non M    | M      | False  |
| 26  | M        | M      | True   | 56  | Non M    | Non M  | True   |
| 27  | M        | Non M  | False  | 57  | Non M    | M      | False  |
| 28  | M        | M      | True   | 58  | Non M    | Non M  | True   |
| 29  | M        | M      | True   | 59  | Non M    | Non M  | True   |
| 30  | M        | M      | True   | 60  | Non M    | Non M  | True   |

M = Melanoma, Non M = Non-Melanoma
Table 3. Confusion matrix 1 results
|                  | Actual: M | Actual: Non M |
|------------------|-----------|---------------|
| Predicted: M     | TP = 26   | FP = 5        |
| Predicted: Non M | FN = 4    | TN = 25       |

Note: Precision = (26/31) × 100% = 83.9%; Recall = (26/30) × 100% = 86.7%; Accuracy = ((26 + 25)/60) × 100% = 85%; Specificity = (25/(25 + 5)) × 100% = 83.3%; F1-score = 85.2%.
The confusion matrix above yields 83.9% precision, 86.7% recall, and 85% accuracy.
Based on Table 4, with 60 images tested using a rotation of θ = 10 degrees, 57 images were classified correctly and 3 incorrectly. The resulting confusion matrix is shown in Table 5.
Table 4. Experiment with rotation (θ = 10 deg)
| No. | Category | Result | Answer | No. | Category | Result | Answer |
|-----|----------|--------|--------|-----|----------|--------|--------|
| 1   | M        | M      | True   | 31  | Non M    | Non M  | True   |
| 2   | M        | M      | True   | 32  | Non M    | Non M  | True   |
| 3   | M        | M      | True   | 33  | Non M    | Non M  | True   |
| 4   | M        | M      | True   | 34  | Non M    | Non M  | True   |
| 5   | M        | M      | True   | 35  | Non M    | Non M  | True   |
| 6   | M        | M      | True   | 36  | Non M    | Non M  | True   |
| 7   | M        | M      | True   | 37  | Non M    | Non M  | True   |
| 8   | M        | M      | True   | 38  | Non M    | Non M  | True   |
| 9   | M        | M      | True   | 39  | Non M    | Non M  | True   |
| 10  | M        | M      | True   | 40  | Non M    | Non M  | True   |
| 11  | M        | M      | True   | 41  | Non M    | Non M  | True   |
| 12  | M        | M      | True   | 42  | Non M    | Non M  | True   |
| 13  | M        | M      | True   | 43  | Non M    | Non M  | True   |
| 14  | M        | M      | True   | 44  | Non M    | Non M  | True   |
| 15  | M        | M      | True   | 45  | Non M    | M      | False  |
| 16  | M        | M      | True   | 46  | Non M    | Non M  | True   |
| 17  | M        | M      | True   | 47  | Non M    | Non M  | True   |
| 18  | M        | M      | True   | 48  | Non M    | Non M  | True   |
| 19  | M        | M      | True   | 49  | Non M    | Non M  | True   |
| 20  | M        | M      | True   | 50  | Non M    | Non M  | True   |
| 21  | M        | M      | True   | 51  | Non M    | Non M  | True   |
| 22  | M        | M      | True   | 52  | Non M    | Non M  | True   |
| 23  | M        | M      | True   | 53  | Non M    | Non M  | True   |
| 24  | M        | M      | True   | 54  | Non M    | Non M  | True   |
| 25  | M        | M      | True   | 55  | Non M    | M      | False  |
| 26  | M        | M      | True   | 56  | Non M    | Non M  | True   |
| 27  | M        | Non M  | False  | 57  | Non M    | Non M  | True   |
| 28  | M        | M      | True   | 58  | Non M    | Non M  | True   |
| 29  | M        | M      | True   | 59  | Non M    | Non M  | True   |
| 30  | M        | M      | True   | 60  | Non M    | Non M  | True   |

M = Melanoma, Non M = Non-Melanoma
Table 5. Confusion matrix 2 results
|                  | Actual: M | Actual: Non M |
|------------------|-----------|---------------|
| Predicted: M     | TP = 29   | FP = 2        |
| Predicted: Non M | FN = 1    | TN = 28       |

Note: Precision = (29/31) × 100% = 93.5%; Recall = (29/30) × 100% = 96.7%; Accuracy = ((29 + 28)/60) × 100% = 95%; Specificity = (28/(28 + 2)) × 100% = 93.3%; F1-score = 95%.
According to Table 5, the confusion matrix yields a precision of 93.5%, a recall of 96.7%, and an accuracy of 95%.
To evaluate the statistical reliability of classification performance, 95% confidence intervals (CIs) were calculated using Wilson score intervals. The Wilson method provides more reliable estimates than the normal approximation, especially for relatively small sample sizes. Table 6 compares accuracy and precision with and without rotation across five runs.
Table 6. Performance comparison across 5 runs
| Run        | Accuracy (Without Rotation) | Precision (Without Rotation) | Accuracy (With Rotation) | Precision (With Rotation) |
|------------|-----------------------------|------------------------------|--------------------------|---------------------------|
| 1          | 85.0%                       | 83.9%                        | 95.0%                    | 93.5%                     |
| 2          | 86.0%                       | 84.5%                        | 94.0%                    | 92.8%                     |
| 3          | 84.0%                       | 82.5%                        | 93.0%                    | 91.5%                     |
| 4          | 87.0%                       | 85.2%                        | 96.0%                    | 94.6%                     |
| 5          | 85.0%                       | 83.8%                        | 95.0%                    | 93.2%                     |
| Mean ± Std | 85.4 ± 1.1                  | 83.98 ± 1.0                  | 94.6 ± 1.1               | 93.12 ± 1.1               |
$\hat{p}=\frac{x}{n}=\frac{57}{60}=0.95$, $\quad z=1.96$ (for 95% confidence)

$\text{Centre}=\frac{\hat{p}+\frac{z^2}{2 n}}{1+\frac{z^2}{n}} \approx 0.923$

$\text{Margin}=\frac{z}{1+\frac{z^2}{n}} \sqrt{\frac{\hat{p}(1-\hat{p})}{n}+\frac{z^2}{4 n^2}}$

Lower bound = Centre − Margin = 0.863

Upper bound = Centre + Margin = 0.984

CI = [86.3%, 98.4%]
Based on the Wilson score interval, the 95% confidence interval for the classification accuracy is [86.3%, 98.4%]. This indicates that, with 95% confidence, the true accuracy of the model on unseen data lies within this range. The relatively wide interval reflects the limited size of the test dataset, but still demonstrates that the model achieves consistently high performance.
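The interval can be reproduced with the short sketch below, which implements the Wilson score formula for 57 correct classifications out of 60 test images.

```python
# Wilson score interval for the rotated-model accuracy (57/60 correct).
import math

def wilson_interval(x, n, z=1.96):
    p_hat = x / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

print(wilson_interval(57, 60))   # -> (0.863, 0.983), matching the reported interval to within rounding
```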
Skin lesions are assessed via image processing to determine the likelihood of melanoma. The digital images of skin lesions must be high quality and well lit to enable accurate analysis, so preprocessing techniques are used to improve image quality and reduce noise and artifacts. Classification is performed on each image after locating the ROI, i.e., the part of the image that corresponds to the skin lesion. In this study, a CNN-based classification algorithm was developed. Once the skin lesion area has been segmented, image features representing its visual characteristics, including colour, texture, shape, and asymmetry, are extracted, and the lesion is classified as melanoma or non-melanoma by the CNN classifier.
According to the experimental results, the preprocessing step does improve melanoma classification accuracy when a rotation algorithm is included. The use of rotation increased melanoma detection accuracy from 85.4% to 94.6%. The specificity improves from 83.3% (without rotation) to 93.3% (with rotation), indicating better performance in correctly identifying non-melanoma cases. The F1-score increases from 85.2% (without rotation) to 95% (with rotation), indicating a better balance between precision and recall.
This study was supported by the Department of Computer Engineering, Maranatha Christian University, Indonesia.
[1] Argenziano, G., Cerroni, L., Zalaudek, I., Staibano, S., et al. (2012). Accuracy in melanoma detection: A 10-year multicenter survey. Journal of the American Academy of Dermatology, 67(1): 54-59.E1. https://doi.org/10.1016/j.jaad.2011.07.019
[2] Tsao, H., Olazagasti, J.M., Cordoro, K.M., Brewer, J.D., Taylor, S.C., Bordeaux, J.S., Chren, M.M., Sober, A.J., Tegeler, C., Bhushan, R., Begolka, W.S. (2015). Early detection of melanoma: Reviewing the ABCDEs. Journal of the American Academy of Dermatology, 72(4): 717-723. https://doi.org/10.1016/j.jaad.2015.01.025
[3] Rigel, D.S., Friedman, R.J., Kopf, A.W., Polsky, D. (2005). ABCDE—An evolving concept in the early detection of melanoma. Archives of Dermatology, 141(8): 1032-1034. https://doi.org/10.1001/archderm.141.8.1032
[4] Okur, E., Turkan, M. (2018). A survey on automated melanoma detection. Engineering Applications of Artificial Intelligence, 73: 50-67. https://doi.org/10.1016/j.engappai.2018.04.028
[5] Cheong, K.H., Tang, K.J.W., Zhao, X., Koh, J.E.W., Faust, O., Gururajan, R., Ciaccio, E.J., Rajinikanth, V., Acharya, U.R. (2021). An automated skin melanoma detection system with melanoma-index based on entropy features. Biocybernetics and Biomedical Engineering, 41(3): 997-1012. https://doi.org/10.1016/j.bbe.2021.05.010
[6] Pennisi, A., Bloisi, D.D., Nardi, D., Giampetruzzi, A.R., Mondino, C., Facchiano, A. (2016). Skin lesion image segmentation using Delaunay Triangulation for melanoma detection. Computerized Medical Imaging and Graphics, 52: 89-103. https://doi.org/10.1016/j.compmedimag.2016.05.002
[7] Ningrum, D.N.A., Yuan, S.P., Kung, W.M., Wu, C.C., Tzeng, I.S., Huang, C.Y., Li, J.Y.C., Wang, Y.C. (2021). Deep learning classifier with patient’s metadata of dermoscopic images in malignant melanoma detection. Journal of Multidisciplinary Healthcare, 14: 877-885. https://doi.org/10.2147/JMDH.S306284
[8] Popescu, D., El-Khatib, M., El-Khatib, H., Ichim, L. (2022). New trends in melanoma detection using neural networks: A systematic review. Sensors, 22(2): 496. https://doi.org/10.3390/s22020496
[9] Abdulla, S., Sagheer, A., Veisi, H. (2021). Improving breast cancer classification using (SMOTE) technique and pectoral muscle removal in mammographic images. MENDEL, 27(2): 36-43. https://doi.org/10.13164/mendel.2021.2.036
[10] Shellenberger, R., Nabhan, M., Kakaraparthi, S. (2016). Melanoma screening: A plan for improving early detection. Annals of Medicine, 48(3): 142-148. https://doi.org/10.3109/07853890.2016.1145795
[11] Fried, L., Tan, A., Bajaj, S., Liebman, T.N., Polsky, D., Stein, J.A. (2020). Technological advances for the detection of melanoma: Advances in diagnostic techniques. Journal of the American Academy of Dermatology, 83(4): 983-992. https://doi.org/10.1016/j.jaad.2020.03.121
[12] Li, Y., Shen, L. (2018). Skin lesion analysis towards melanoma detection using deep learning network. Sensors, 18(2): 1-16. https://doi.org/10.3390/s18020556
[13] Rao, M.L., Babu, B.K., Mahesh, A.V., Rajesh, M., Sirisha, U. (2024). MMIF-net: Multi model image fusion using deep learning convolutional neural network. ARPN Journal of Engineering and Applied Sciences, 18(10): 1149-1156. https://doi.org/10.59018/0523150
[14] Mishra, A., Mehra, N., Dubey, S. (2023). A review on SQL injection detection and preventions techniques. Journal of Pharmaceutical Negative Results, 14(1): 1068-1073. https://doi.org/10.47750/pnr.2023.14.S01.148
[15] Wijaya, M.C. (2022). Research of Indonesian license plates recognition on moving vehicles. EUREKA, Physics and Engineering, 6: 185-198. https://doi.org/10.21303/2461-4262.2022.002424
[16] Asif, A., Fatima, I., Anjum, A., Malik, S.U.R. (2019). Towards the performance investigation of automatic melanoma diagnosis applications. International Journal of Advanced Computer Science and Applications, 10(3): 390-399. https://doi.org/10.14569/IJACSA.2019.0100351
[17] Barata, C., Celebi, M.E., Marques, J.S. (2015). Melanoma detection algorithm based on feature fusion. In 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, pp. 2653-2656. https://doi.org/10.1109/EMBC.2015.7318937
[18] Soumya, R.S., Neethu, S., Niju, T.S., Renjini, A., Aneesh, R.P. (2016). Advanced earlier melanoma detection algorithm using colour correlogram. In 2016 International Conference on Communication Systems and Networks (ComNet), Thiruvananthapuram, India, pp. 190-194. https://doi.org/10.1109/CSN.2016.7824012
[19] Bakheet, S. (2017). An SVM framework for malignant melanoma detection based on optimized HOG features. Computation, 5(1): 4. https://doi.org/10.3390/computation5010004
[20] Ottom, M.A. (2019). Convolutional neural network for diagnosing skin cancer. International Journal of Advanced Computer Science and Applications, 10(7): 333-338. https://doi.org/10.14569/IJACSA.2019.0100746
[21] Refianti, R., Mutiara, A.B., Priyandini, R.P. (2019). Classification of melanoma skin cancer using convolutional neural network. International Journal of Advanced Computer Science and Applications, 10(3): 409-417. https://doi.org/10.14569/IJACSA.2019.0100353
[22] Aitim, A., Sattarkhuzhayeva, D., Khairullayeva, A. (2025). Development of a hybrid CNN-RNN model for enhanced recognition of dynamic gestures in Kazakh Sign Language. Eastern-European Journal of Enterprise Technologies, 2(2): 58-67. https://doi.org/10.15587/1729-4061.2025.315834
[23] Hassan, A.A., Tutuncu, K., Abdullah, H.O., Ali, A.F. (2023). IoT-based smart health monitoring system: Investigating the role of temperature, blood pressure and sleep data in chronic disease management. Instrumentation Mesure Métrologie, 22(6): 231-240. https://doi.org/10.18280/i2m.220602
[24] Alshawi, S.A., Musawi, G.F.K.A. (2023). Skin cancer image detection and classification by CNN based ensemble learning. International Journal of Advanced Computer Science and Applications, 14(5): 710-717. https://doi.org/10.14569/IJACSA.2023.0140575
[25] Wijaya, M.C. (2025). Machine learning algorithms for real-time analysis of multimedia data from IoT-based health instruments for diabetes management. Instrumentation Mesure Métrologie, 24(1): 35-43. https://doi.org/10.18280/i2m.240104
[26] Liew, S.H., Choo, Y.H., Low, Y.F., Rashid, F., Atyka, N. (2023). A comparative work of incremental learning and ensemble learning for brainprint identification. ARPN Journal of Engineering and Applied Sciences, 18(11): 1249-1257.
[27] Wijaya, M.C. (2022). Template matching using improved rotations fourier transform method. International Journal of Electronics and Telecommunications, 68(4): 881-888. https://doi.org/10.24425/ijet.2022.143898
[28] Sutarman, S., Nasution, P.K., Sirait, K.J., Panjaitan, C.N.Y. (2025). Identifying the impact of the complexity of datasets in Bayesian optimized XGBoost on the performance of classifications for imbalanced class distribution datasets. Eastern-European Journal of Enterprise Technologies, 1(4): 52-63. https://doi.org/10.15587/1729-4061.2025.322626