A Distributed Densely Connected Convolutional Network Approach for Enhanced Recognition of Health-Related Topics: A Societal Analysis Case Study

Yerragudipadu Subbarayudu* Alladi Sureshbabu

Department of Computer Science and Engineering, Jawaharlal Nehru Technological University, Anantapur 515002, India

Department of Computer Science and Engineering, JNTUA College of Engineering, Anantapur 515002, India

Corresponding Author Email: subbu.jntua@gmail.com

Pages: 677-684 | DOI: https://doi.org/10.18280/isi.280317

Received: 6 January 2023 | Revised: 9 March 2023 | Accepted: 20 March 2023 | Available online: 30 June 2023

© 2023 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Melanoma, a lethal form of skin cancer, poses a significant risk to global health if not detected and treated promptly. Early detection is pivotal in increasing the likelihood of successful treatment and patient survival. However, accurate diagnosis of melanoma remains a challenge, even for seasoned dermatologists. Consequently, there has been growing interest in leveraging machine learning (ML) algorithms to augment the accuracy of melanoma diagnosis. Typically, melanoma is identified through dermoscopic imaging. Numerous previous studies have proposed the automated analysis of skin lesions using both traditional classification techniques and deep learning models; these analyses often feed hand-designed features into traditional classifiers. Nonetheless, the high visual similarity between different skin lesion types and the complexity of skin diseases often render manual features insufficiently discriminative, leading to failure in various scenarios. Recent research suggests that convolutional networks with short connections between layers near the input and the output can be deeper, more precise, and more efficient to train. This paper adopts this approach and introduces the application of Hadoop's HdiDenseNet techniques. DenseNets offer several notable advantages: they alleviate the vanishing-gradient problem, enhance feature propagation, encourage feature reuse, and substantially reduce the number of parameters. The performance of the proposed architecture is evaluated against four highly competitive benchmark object identification challenges using a dataset comprising over 40,000 images from diverse sources. The results demonstrate that the most effective method is a densely connected distributed convolutional network, particularly when patient metadata is incorporated. Ultimately, this paper aims to contribute to the field of medical image analysis and potentially enhance the accuracy of melanoma diagnosis, which could play a crucial role in improving patient prognosis and saving lives.

Keywords: 

NLTK, early detection, melanoma, skin cancer

1. Introduction

Skin cancer, characterized by the abnormal proliferation of melanocyte cells in the skin, can metastasize via lymphoid tissue, causing damage to adjacent tissues [1]. The three primary forms of skin cancer are basal cell carcinoma, squamous cell carcinoma, and melanoma, of which melanoma is the most dangerous malignant skin tumor. The prevalence of melanoma skin cancer is rising steadily, necessitating prompt treatment upon discovery to improve patients' chances of recovery.

Skin cancer, predominantly preventable and treatable if detected early, is mainly attributable to sun exposure. Therefore, it is imperative to employ preventive measures during outdoor activities, irrespective of the season. Overexposure to the sun can significantly escalate the risk of skin cancer and precipitate premature skin aging.

Without early detection, melanoma has the potential to proliferate and penetrate the epidermis, the skin's uppermost layer, until it encounters a lymph vessel and ultimately infiltrates the bloodstream.

Melanoma diagnoses are typically made through professional visual evaluation. However, this process can be time-consuming and challenging, potentially leading to misdiagnoses [2]. Over time, numerous researchers have therefore worked to devise machine learning (ML)-based systems capable of efficiently detecting melanoma [3-6].

Melanoma is a dangerous disease that requires immediate detection, and its traditionally manual diagnosis process is both time-consuming and expensive. Advances in machine learning offer potential solutions: ML can simplify the identification of malignant cells, which has led to the adoption of convolutional neural networks to make cancerous-cell detection faster and more efficient (Figure 1).

Figure 1. Workflow of melanoma detection

The pre-processing stage performs primary operations such as noise reduction, feature extraction, resizing, grayscale conversion or illumination adjustment, binarization and, most importantly, concentration and edge development [7]. Segmentation is a debated topic and a challenging task: this stage divides the image into different sets of pixels [8], identifying regions of interest through an automated or semi-automated procedure [9, 10]. Neural-network-based algorithms are among the most widely used techniques for identifying and segmenting melanoma. In general, successful feature retrieval dramatically increases detection accuracy (ACC). Ramezani et al. [11] previously detected melanoma with a feature-derivation approach built on the ABCD rule (Asymmetry, Border, Color, Differential structure), but others now apply deep learning techniques to improve feature retrieval. The classification phase is the last and most discussed element of the pipeline. AI-based melanoma identification has produced significant results, as visual examination of skin lesions alone is no longer a reliable approach. Machine learning mainly uses previous experience to improve its results [12]. Traditional machine-learning approaches have worked well, but they have some drawbacks.

The motivation of the skin cancer melanoma diagnosis machine learning paper is to develop a model that can accurately diagnose melanoma using images of skin lesions. This paper aims to contribute to the existing body of research on melanoma diagnosis using ML and to potentially provide a practical solution for improving diagnosis accuracy in clinical settings.

The scope of the paper may include data collection, pre-processing, and feature extraction, as well as the development, evaluation, and validation of the ML model. The researchers may use various image processing techniques to extract features such as color, texture, and shape, which can be used to train the ML model. The ML model could be a supervised learning algorithm that can classify skin lesions into malignant or benign classes, or it could be an unsupervised learning algorithm that can detect anomalies in skin lesion images.

The image's pixels are supplied to the convolutional layer, which executes the convolution operation and produces a convolved map. The convolved map is fed into a ReLU function, which produces a rectified feature map. To locate the features, the image is processed with numerous convolution and ReLU layers. A CNN can contain numerous layers that learn to recognise distinct features in an input image. A filter or kernel is applied to each image to produce output that improves and becomes more detailed with each layer; filters in the bottom layers capture simple features.
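As an illustration of this layer stack, the following minimal Keras sketch builds a small convolution-ReLU-pooling feature extractor; the layer counts and sizes are our own assumptions for illustration, not a configuration from this paper:

import tensorflow as tf
from tensorflow.keras import layers, models

# A small convolution + ReLU + pooling stack of the kind described above.
feature_extractor = models.Sequential([
    layers.Input(shape=(224, 224, 3)),                 # RGB lesion image
    layers.Conv2D(32, kernel_size=3, padding='same'),  # convolved (feature) map
    layers.ReLU(),                                     # rectified feature map
    layers.MaxPooling2D(pool_size=2),                  # spatial downsampling
    layers.Conv2D(64, kernel_size=3, padding='same'),  # deeper layers learn more detailed features
    layers.ReLU(),
    layers.MaxPooling2D(pool_size=2),
])
feature_extractor.summary()                            # inspect the resulting layer shapes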

2. Literature Survey

2.1 Datasets used previously

Such systems are trained on one or more datasets. Based on existing publications in the literature, our research aims to demonstrate the growing trend of self-diagnosing automated systems that segment or detect skin lesions (especially melanoma); the commonly used datasets and networks are summarized in Table 1 and Table 2.

Clinicians examine lesions visually using the ABCDE criteria [13], followed by histological examination. Because they depended on a comprehensive understanding of biological models, many algorithms [14] relied mainly on handcrafted feature sets with limited discriminative power on dermoscopic images.

Table 1. Frequently used datasets of skin lesions

Dataset name | References | SL (skin lesion images) | ME (melanoma images)
PH2 | [15] | 200 | 40
ISIC 2018, HAM10000 | [16, 17] | 10,015 | 1113
ISIC 2019 | [18-20] | 25,333 | 4522
ISIC 2020 | [20] | 33,126 | 584
Dermquest | [21] | 126 | 66
Med-node | [22] | 170 | 100
Dermnet | [23, 24] | 22,500 | 635
Dermofit | [25] | 1300 | 76

2.2 Melanoma detection, segmentation, and classification using neural networks

Deep convolutional neural networks (CNNs), other NNs, and transfer learning (TL) for NNs are the dominant approaches in the most recent literature on skin disease prediction, segmentation, and classification. As a result, NNs are used in the vast majority of melanoma detection, segmentation, and classification systems.

Table 2. Neural networks used majorly

NN family | Representatives
Inception/GoogLeNet | GoogLeNet (Inception v2), Inception-ResNet-v2, Inception v3, Inception v4
DenseNet | DenseNet121, DenseNet161, DenseNet169, DenseNet201
Xception | Xception

2.2.1 GoogLeNet/Inception

GoogLeNet is commonly known as Inception v1. Its simplified architecture has twenty-two layers ending in a softmax layer of 1,000 neurons. Another essential point is that all convolutions within the design use ReLU as the activation function.

2.2.2 Xception network

Another CNN used to diagnose skin lesions is Xception, which replaces standard Inception modules with depthwise separable convolutions to boost performance. The Xception architecture, which outperforms Inception v3, consists of 36 convolutional layers grouped into 14 modules, nearly all of which are surrounded by residual connections [26].

2.2.3 DenseNet

DenseNet is a family of CNNs commonly used in the diagnosis of skin lesions; several publications, all from 2020, use DenseNet (especially DenseNet-201). Thanks to its efficiency and higher accuracy, DenseNet is a recent trend in published articles. The reason is that in the original work [27] the authors introduced densely connected layers, changing the usual CNN design. DenseNet variants published in various studies include DenseNet-121, DenseNet-169, DenseNet-201 and DenseNet-264.

2.2.4 NASNet

“A convolutional neural network named NASNet-Large was developed and trained using more than a million photos from the ImageNet collection” [1]. The network can categorise photos into 1,000 item categories. NASNet (Neural Architecture Search Network) models are available with weights already trained on ImageNet. For the NASNetLarge model, the default input image size is 331×331; for the NASNetMobile model, it is 224×224.

In 2019, one main goal was a pipeline architecture for skin lesion (SL) segmentation combining YOLO v3 and the GrabCut algorithm; the novelty lies in this combination, with YOLOv3 used for detection and segmentation on the PH2 and ISIC 2017 datasets. Also in 2019, a new FCNN architecture for SL segmentation, DermoNet, was proposed: the FCNN contains densely connected convolutional blocks and skip connections, and was used for segmentation on the PH2, ISIC 2016 and ISIC 2017 datasets.

In 2020, a new deep CNN-based model for facial skin disease classification introduced a triplet loss function, fine-tuning layers of ResNet152 and InceptionResNet-v2 (both used for classification). The dataset was collected from a hospital in Wuhan, China.

Also in 2020, melanoma (ME) detection was performed using an optimized set of Gabor-based features and a fast multi-level neural network (MNN) classifier, with Gabor features combined with the fast MNN used for classification on the PH2 dataset.

In 2021, one novelty was the design of a new DCNN model with multiple filter sizes, the Classification of Skin Lesions Network (CSLNet), which uses fewer filters, parameters, and layers to improve SL classification performance; the DCNN (CSLNet) was used for classification on data collected from ISIC 2017, ISIC 2018 and ISIC 2019. Also in 2021, different NNs were tested for recognition of pigmented SLs, using ResNet50, DenseNet121 and VGG16 for classification on data collected from ISIC, HAM10000, PH2, BCN20000 and SKUNK2.

Also in 2021, the main goal was combining MobileNetV2 with a Spiking Neural Network (SNN) into a DCNN for classification, with three NNs connected into an intelligent decision support system for skin cancer detection. The NNs used were an autoencoder, MobileNetV2 and an SNN, all for classification, on data collected from ISIC [17].

One work proposed a fully automatic method for classifying several skin cancers by fine-tuning the deep learning models VGG16, ResNet50, and ResNet101. Prior to model creation, the training dataset underwent data augmentation using traditional image transformations and Generative Adversarial Networks (GANs) to prevent class imbalance that could lead to overfitting [14]. With appropriate data augmentation, the proposed models attained accuracies of 92% for VGG16, 92% for ResNet50, and 92.25% for ResNet101; an ensemble of these models increased the accuracy to 93.5%.

It has been noted that the limited size and diversity of accessible dermatoscopic image datasets hampered intensive training of neural networks for automatic detection and classification of pigmented skin lesions, a problem addressed by the release of HAM10000 [15].

Another proposed method used transfer learning with a pre-trained AlexNet. The parameters of the original model served as initial values, and the weights of the last three replaced layers were randomly initialised. The method accurately classified skin conditions into seven classes, attaining 98.70% accuracy, 95.60% sensitivity, 99.27% specificity, and 95.06% precision [18].

A distinct category of fully convolutional network, with new dense pooling layers, was proposed for segmenting lesion regions in non-dermoscopic images. Unlike other established convolutional networks, this network is designed to generate dense feature maps [19], and it yields highly accurate lesion segmentation: the resulting Dice score of 91.6% outperforms state-of-the-art skin lesion segmentation algorithms on the Dermquest dataset.

A systematic overview of recent advances was also presented for this area of growing interest, focusing on a contrasting perspective of cancer detection using artificial intelligence, particularly neural-network-based systems. Such structures can be thought of as intelligent dermatologist support systems [20]. Theoretical and applied contributions to the major development trends of multiple neural network architectures based on decision fusion were investigated.

Another study examines the evidence for dermoscopy's accuracy in diagnosing pigmented skin lesions, notably melanoma. The authors go through the numerous dermoscopic diagnostic criteria for melanoma as well as the value of dermoscopy training and experience [14]. In addition to highlighting the limits of dermoscopy in identifying nonmelanocytic pigmented lesions, the study discusses the potential of computer-aided diagnosis (CAD) systems to increase diagnostic accuracy.

3. System Architecture

Figure 2. System architecture

3.1 Workflow and algorithm

Image data is taken as input and compared with the dataset (Figure 2). The steps below outline the workflow; a minimal code sketch follows the list.

  1. Select the patient's image block.

  2. Remove noise from the input image and perform feature extraction.

  3. Perform normalization.

  4. Apply convolution blocks (ReLU) and pooling blocks.

  5. Use the dropout block.

  6. Repeat steps 3 through 5 as often as necessary (until accuracy is at its best).

  7. Use flattening blocks and some core layers (preferably dense blocks). Ensure that the final core layer before the output block has an output size matching the number of data (output) categories.

  8. Apply the output block.

  9. Set the model's constraints (hyperparameter setting).

  10. Finally, check that the model construction box reports a green "OK"; if not, adjust your choices.
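The following Keras sketch maps steps 3 through 10 onto code; the layer counts, filter sizes, and the two output categories are assumptions for illustration, not the exact configuration used here:

import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 2  # assumed output categories: benign vs. malignant

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Rescaling(1.0 / 255),                              # step 3: normalization
    layers.Conv2D(32, 3, padding='same', activation='relu'),  # step 4: convolution block (ReLU)
    layers.MaxPooling2D(),                                    # step 4: pooling block
    layers.Dropout(0.25),                                     # step 5: dropout block
    layers.Conv2D(64, 3, padding='same', activation='relu'),  # step 6: repeat steps 3-5
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Flatten(),                                         # step 7: flattening block
    layers.Dense(128, activation='relu'),                     # step 7: core (dense) layer
    layers.Dense(num_classes, activation='softmax'),          # steps 7-8: final core layer + output
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),       # step 9: hyperparameter setting
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()                                               # step 10: verify the model builds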

4. Methodologies and Workflow

A skin cancer melanoma diagnosis machine learning project typically involves the following steps:

4.1 Data collection

The first step is to gather a dataset of skin lesion images, which includes both malignant and benign cases.

ISIC is an academia-industry collaboration aimed at facilitating the use of skin imaging to help reduce the melanoma death rate. ISIC achieves its goals by developing and promoting digital skin imaging standards and by collaborating with the dermatological and computer science communities to improve diagnosis. ISIC is establishing recommended standards to improve the quality, privacy, and interoperability of digital skin images, and provides resources for these communities, such as a large and growing open-source, public-access skin imaging archive.

4.2 Data pre-processing

The skin lesion images are pre-processed to enhance the contrast, remove artifacts, and standardize the size of the images.

4.2.1 Multi crop

To multi-crop, divide the input image into multiple sub-images, feed them into a network for classification, then average the results to improve accuracy; a sketch follows the list below.

  • Five Crop takes the four corner crops and a centre crop.

  • Ten Crop takes the same four corners and the centre crop, plus their horizontally flipped counterparts.
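As a sketch under stated assumptions (a trained model and an H×W×3 NumPy image, both hypothetical here), five-crop prediction can be written as:

import numpy as np

def five_crop_predict(model, image, crop_size=224):
    # crop the four corners and the centre, then average the model's outputs
    h, w = image.shape[:2]
    c = crop_size
    crops = [
        image[:c, :c],                          # top-left corner
        image[:c, w - c:],                      # top-right corner
        image[h - c:, :c],                      # bottom-left corner
        image[h - c:, w - c:],                  # bottom-right corner
        image[(h - c) // 2:(h + c) // 2,
              (w - c) // 2:(w + c) // 2],       # centre crop
    ]
    batch = np.stack(crops).astype('float32')
    preds = model.predict(batch)                # one probability vector per crop
    return preds.mean(axis=0)                   # averaged prediction

# Ten-crop adds the horizontally flipped counterparts of the same five crops:
# crops += [np.fliplr(crop) for crop in crops]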

4.3 Feature extraction

Various image processing techniques are applied to extract features such as color, texture, and shape from the skin lesion images.

4.4 Model development

ML models are trained on the extracted features to classify the skin lesion images into malignant or benign classes. There are several ML algorithms that can be used for this purpose, including deep learning neural networks.

4.4.1 Algorithms

We implement the DenseNet-161 version of the model. Its two building blocks, the BN-ReLU-Conv function and the dense block, are given below (the original pseudocode is rewritten here as runnable Keras code):

from tensorflow.keras.layers import BatchNormalization, ReLU, Conv2D, Concatenate

def bn_rl_conv(x, filters, kernel_size):
    # BN -> ReLU -> Convolution: the pre-activation ordering used by DenseNet
    x = BatchNormalization()(x)
    x = ReLU()(x)
    x = Conv2D(filters=filters, kernel_size=kernel_size, padding='same')(x)
    return x

def dense_block(tensor, k, reps):
    # each repetition adds k new feature maps (the growth rate) and
    # concatenates them with all preceding feature maps
    for _ in range(reps):
        x = bn_rl_conv(tensor, filters=4 * k, kernel_size=1)  # 1x1 bottleneck
        x = bn_rl_conv(x, filters=k, kernel_size=3)           # 3x3 convolution
        tensor = Concatenate()([tensor, x])
    return tensor
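These two functions can be composed into the full DenseNet-161 (growth rate k = 48, block repetitions 6-12-36-24, following Huang et al.); the input size, two-class head, and compression factor below are illustrative assumptions rather than the paper's exact settings:

from tensorflow.keras import Model
from tensorflow.keras.layers import (Input, Conv2D, MaxPool2D, AvgPool2D,
                                     GlobalAvgPool2D, Dense)

def transition_layer(x, theta=0.5):
    # compress the channel count by theta, then halve the spatial resolution
    filters = int(x.shape[-1] * theta)
    x = bn_rl_conv(x, filters=filters, kernel_size=1)
    return AvgPool2D(pool_size=2, strides=2, padding='same')(x)

def densenet161(input_shape=(224, 224, 3), num_classes=2, k=48):
    inputs = Input(shape=input_shape)
    x = Conv2D(2 * k, 7, strides=2, padding='same')(inputs)  # stem convolution
    x = MaxPool2D(3, strides=2, padding='same')(x)
    for i, reps in enumerate([6, 12, 36, 24]):               # DenseNet-161 block sizes
        x = dense_block(x, k, reps)
        if i < 3:                                            # no transition after the last block
            x = transition_layer(x)
    x = GlobalAvgPool2D()(x)
    outputs = Dense(num_classes, activation='softmax')(x)
    return Model(inputs, outputs)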

4.4.2 Multi model

A multimodel of machine learning algorithms typically refers to the combination of multiple models, each utilizing a different algorithm, to create a more accurate and robust overall model. Algorithms like DenseNet and Xception are both popular deep learning architectures that can be used as base models in a multimodel approach.

When using DenseNet and Xception in a multimodel approach, one strategy is to train each model independently on the same dataset and then combine their predictions using an ensemble approach. Alternatively, the models can be combined into a single multimodal architecture, with each model feeding into a shared output layer. This approach is called a multi-input model and can be useful when the models have complementary strengths and weaknesses.
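A minimal sketch of the soft-voting ensemble variant follows; the trained densenet_model and xception_model and the images array are assumed to exist:

import numpy as np

def ensemble_predict(models, images):
    # average the per-class probabilities predicted by each model
    preds = [m.predict(images) for m in models]  # each is (N, num_classes)
    return np.mean(preds, axis=0)

probs = ensemble_predict([densenet_model, xception_model], images)
labels = probs.argmax(axis=1)                    # final ensemble decision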

4.5 Model evaluation and validation

The performance of the ML model is evaluated using metrics such as accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC). The model is also validated using a separate dataset to test its generalization capability.

4.5.1 Measurement units

Loss Function

For a classification model that produces raw scores for each class, cross-entropy loss is the appropriate loss function. Although our samples are appropriately balanced at each cycle, we additionally want to penalise melanoma errors by 50%.
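One possible reading of this weighting, sketched here with Keras class weights (the 1.5 factor, the label encoding, and the training arrays are assumptions):

# assumed label encoding: 0 = benign, 1 = melanoma
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels,
          epochs=10,
          class_weight={0: 1.0, 1: 1.5})  # penalise melanoma errors 50% more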

Precision-Recall

Precision and recall are performance measurements used in pattern recognition, information retrieval, and classification (machine learning) that apply to data recovered from a collection, corpus, or sample space.

P = TP / (TP + FP)

R = TP / (TP + FN)

where P is precision, R is recall, TP is the number of true positives, FP the number of false positives, and FN the number of false negatives.

AUC ROC

ROC (Receiver Operating Characteristic) curve and AUC (Area Under the Curve) are commonly used evaluation metrics in machine learning, especially for binary classification problems.

ROC curve is a graphical representation of the performance of a binary classifier system as its discrimination threshold is varied. The curve is created by plotting the True Positive Rate (TPR) against the False Positive Rate (FPR) at various threshold settings. The TPR (also known as recall or sensitivity) is the proportion of actual positive cases that are correctly identified as positive by the classifier. The FPR is the proportion of negative cases that are incorrectly classified as positive.

AUC is a numerical value that represents the overall performance of a classifier system based on its ROC curve. The AUC value ranges from 0 to 1, where an AUC of 1 indicates a perfect classifier, while an AUC of 0.5 represents a random classifier that is no better than chance. The AUC value can be interpreted as the probability that a randomly chosen positive example is ranked higher by the classifier than a randomly chosen negative example.
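These metrics can be computed directly with scikit-learn; y_true (ground-truth labels) and y_prob (the classifier's predicted melanoma probabilities) are assumed NumPy arrays:

from sklearn.metrics import precision_score, recall_score, roc_auc_score

y_pred = (y_prob >= 0.5).astype(int)         # threshold the raw scores
precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall = recall_score(y_true, y_pred)        # TP / (TP + FN)
auc = roc_auc_score(y_true, y_prob)          # area under the ROC curve
print(f"P={precision:.3f}  R={recall:.3f}  AUC={auc:.3f}")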

4.6 Deployment

The final step is to deploy the ML model in a clinical setting to assist dermatologists in diagnosing skin lesions accurately.

5. Experimental Results and Prediction

We train several models (DenseNet, Inception3_3_0.9051, Inception3_3_0.9089, NASNetALarge and Xception_1) on multi-crop inputs and plot the accuracy and maximum ROC AUC.

5.1 Single model using 1 crop

Table 3. Single model using 1 crop

Algorithm | Crops | Accuracy | ROC AUC
DenseNet | 1 | 0.75 | 0.909
Inception3_3_0.9051 | 1 | 0.777 | 0.897
Inception3_3_0.9089 | 1 | 0.727 | 0.906
NASNetALarge | 1 | 0.712 | 0.884
Xception_1 | 1 | 0.798 | 0.908

Table 3 reports the accuracy and ROC AUC (area under the curve) when each single model is trained with 1 crop. It gives an accuracy of 75.2% and a ROC AUC of 90.8%, as shown in Figure 3.

Graph Analysis: The maximum ROC AUC was yielded by the DenseNet model, and the maximum accuracy by the Xception_1 model.

Figure 3. Single model using 1 crop

5.2 Single model using 5 crop

Table 4 reports the accuracy and ROC AUC (area under the curve) when each single model is trained with 5 crops. It gives an accuracy of 77.6% and a ROC AUC of 90.4%, as shown in Figure 4.

Table 4. Single model using 5 crops

Algorithm | Crops | Accuracy | ROC AUC
DenseNet | 5 | 0.757 | 0.911
Inception3_3_0.9051 | 5 | 0.788 | 0.905
Inception3_3_0.9089 | 5 | 0.753 | 0.909
NASNetALarge | 5 | 0.765 | 0.885
Xception_1 | 5 | 0.817 | 0.912

Figure 4. Single model using 5 crops

Graph Analysis: The maximum accuracy and the maximum ROC AUC were yielded by the Xception_1 model.

5.3 Single model using 10 crop

Table 5 reports the accuracy and ROC AUC (area under the curve) when each single model is trained with 10 crops, as shown in Figure 5. It gives an accuracy of 77.8% and a maximum ROC AUC of 91%.

Table 5. Single model using 10 crops

Algorithm | Crops | Accuracy | ROC AUC
DenseNet | 10 | 0.755 | 0.914
Inception3_3_0.9051 | 10 | 0.79 | 0.907
Inception3_3_0.9089 | 10 | 0.752 | 0.911
NASNetALarge | 10 | 0.778 | 0.909
Xception_1 | 10 | 0.817 | 0.913

Figure 5. Single model using 10 crops

Graph Analysis: The maximum accuracy was yielded by the Xception_1 model and maximum ROC AUC was yielded by the DenseNet model.

5.4 Single model using 20 crop

Table 6 reports the accuracy and ROC AUC (area under the curve) when each single model is trained with 20 crops; the maximum ROC AUC was yielded by the DenseNet model. It gives an accuracy of 78.3% and a ROC AUC of 91%, as shown in Figure 6.

Table 6. Single model using 20 crops

Algorithm | Crops | Accuracy | ROC AUC
DenseNet | 20 | 0.753 | 0.914
Inception3_3_0.9051 | 20 | 0.797 | 0.907
Inception3_3_0.9089 | 20 | 0.76 | 0.912
NASNetALarge | 20 | 0.783 | 0.888
Xception_1 | 20 | 0.822 | 0.911

Figure 6. Single model using 20 crops

Graph Analysis: The maximum accuracy was yielded by the Xception_1 model.

5.5 Combination of two models

Table 7. Combination of two models

Models (n-crops) | F1 melanoma threshold | Accuracy | ROC AUC
M1(1-crops)+M4(1-crops) | 0.094 | 0.785 | 0.931
M1(1-crops)+M4(5-crops) | 0.327 | 0.782 | 0.931
M1(1-crops)+M4(10-crops) | 0.333 | 0.783 | 0.931
M4(1-crops)+M5(1-crops) | 0.043 | 0.835 | 0.933
M4(5-crops)+M5(1-crops) | 0.043 | 0.83 | 0.934
M4(10-crops)+M5(1-crops) | 0.061 | 0.835 | 0.934
M4(20-crops)+M5(1-crops) | 0.037 | 0.835 | 0.935
M4(1-crops)+M5(5-crops) | 0.042 | 0.827 | 0.936
M4(5-crops)+M5(5-crops) | 0.069 | 0.828 | 0.936
M4(10-crops)+M5(5-crops) | 0.053 | 0.83 | 0.937
M4(20-crops)+M5(5-crops) | 0.188 | 0.828 | 0.937

Table 7 reports the F1 melanoma threshold, accuracy, and ROC AUC (area under the curve) when combinations of two models are trained with various crops, as shown in Figure 7. The model key is:

M1 = DenseNet
M2 = Inception3_3_0.9051
M3 = Inception3_3_0.9089
M4 = NASNetALarge
M5 = Xception_1

On average, the combinations give an accuracy of 81.8% and a ROC AUC of 93.4%.

Figure 7. Combination of 2 models

Graph Analysis: The maximum accuracy (0.835) was shared by several M4 (NASNetALarge) + M5 (Xception_1) combinations, and the maximum ROC AUC (0.937) was yielded by M4(10- or 20-crops)+M5(5-crops).

5.6 Combination of three models

Figure 8. Combination of 3 models

Table 8. Combination of three models

Models (n-crops) | F1 melanoma threshold | Accuracy | ROC AUC
M1(1-crops)+M2(1-crops)+M4(1-crops) | 0.063 | 0.787 | 0.937
M1(1-crops)+M4(1-crops)+M5(10-crops) | 0.063 | 0.82 | 0.937
M1(1-crops)+M4(1-crops)+M5(20-crops) | 0.063 | 0.815 | 0.937
M1(1-crops)+M4(5-crops)+M5(5-crops) | 0.066 | 0.818 | 0.937
M1(1-crops)+M4(10-crops)+M5(5-crops) | 0.178 | 0.822 | 0.938
M2(1-crops)+M4(1-crops)+M5(1-crops) | 0.061 | 0.807 | 0.939
M2(1-crops)+M4(10-crops)+M5(1-crops) | 0.098 | 0.808 | 0.939
M2(1-crops)+M4(1-crops)+M5(5-crops) | 0.061 | 0.813 | 0.941
M2(1-crops)+M4(10-crops)+M5(5-crops) | 0.1 | 0.817 | 0.941

Table 8 reports the F1 melanoma threshold, accuracy, and ROC AUC (area under the curve) when combinations of three models are trained with various crops. The best combinations reach an accuracy of 82% and a ROC AUC of 94.1%.

In Figure 8, Graph Analysis: The maximum accuracy was yielded by the combination M1 (DenseNet) + M4 (NASNetALarge) + M5 (Xception_1), and the maximum ROC AUC by the combination M2(1-crops)+M4(1-crops)+M5(5-crops).

6. Conclusions

The data augmentation and dataset studies reported in the "Experimental Results and Prediction" section show that combinations of crops can help improve accuracy, and that making the training data fully balanced does not always result in a better model. Neural networks are increasingly being researched as part of artificial intelligence algorithms in image analysis for detecting skin lesions and identifying melanoma. New databases, and even challenges for skin lesion categorization, appear on a regular basis. That is why there is great interest in upgrading these classifiers so that they can detect and follow the emergence of skin lesions, even from a distance, with high accuracy. Multiple neural networks serving diverse functions, combined through fusion, produced the greatest results. Observing the importance of artificial intelligence networks in identifying melanoma, we conclude that such problem-solving methods are important goals in the medical field. The use of networks in melanoma prediction supports the dermatologist, who must ultimately decide whether a lesion is cancerous. In this situation, the algorithm can be trained to recognise various types of cancerous skin lesions. Given the evolution of neural networks, such systems are expected to improve their performance through upgraded, modified, and coupled networks.

A potential direction for future analysis is to employ these methods to detect melanoma that grows under the nails, the most difficult case. We are not aware of such an algorithm, nor have we found one in literature surveys. If the nail remains visible, image enhancement algorithms and methods could be applied to isolate the melanoma on the nail.

References

[1] Reichrath, J., Leiter, U., Eigentler, T., Garbe, C. (2014). Epidemiology of skin cancer. Sunlight, Vitamin D and Skin Cancer, 120-140. https://doi.org/10.1007/978-1-4939-0437-2_7

[2] Gaana, M., Gupta, S., Ramaiah, N.S. (2019). Diagnosis of skin cancer melanoma using machine learning. Available at SSRN 3358134.

[3] Popescu, D., El-Khatib, M., El-Khatib, H., Ichim, L. (2022). New trends in melanoma detection using neural networks: A systematic review. Sensors, 22(2): 496. https://doi.org/10.3390/s22020496

[4] Esteva, A., Kuprel, B., Novoa, R.A., Ko, J., Swetter, S.M., Blau, H.M., Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639): 115-118. https://doi.org/10.1038/nature21056

[5] Olugbara, O.O., Taiwo, T.B., Heukelman, D. (2018). Segmentation of melanoma skin lesion using perceptual color difference saliency with morphological analysis. Mathematical Problems in Engineering, 2018: 1-19. https://doi.org/10.1155/2018/1524286

[6] Gutman, D., Codella, N.C., Celebi, E., Helba, B., Marchetti, M., Mishra, N., Halpern, A. (2016). Skin lesion analysis toward melanoma detection: A challenge at the international symposium on biomedical imaging (ISBI) 2016, hosted by the International Skin Imaging Collaboration (ISIC). arXiv Preprint arXiv: 1605.01397. https://doi.org/10.48550/arXiv.1605.01397

[7] Oliveira, R.B., Mercedes Filho, E., Ma, Z., Papa, J.P., Pereira, A.S., Tavares, J.M.R. (2016). Computational methods for the image segmentation of pigmented skin lesions: A review. Computer Methods and Programs in Biomedicine, 131: 127-141. https://doi.org/10.1016/j.cmpb.2016.03.032

[8] Guo, Y., Ashour, A.S. (2019). Neutrosophic sets in dermoscopic medical image segmentation. In Neutrosophic Set in Medical Image Analysis. Academic Press, pp. 229-243. https://doi.org/10.1016/B978-0-12-818148-5.00011-4

[9] Alcón, J.F., Ciuhu, C., Ten Kate, W., Heinrich, A., Uzunbajakava, N., Krekels, G., Siem, D., de Haan, G. (2009). Automatic imaging system with decision support for inspection of pigmented skin lesions and melanoma diagnosis. IEEE Journal of Selected Topics in Signal Processing, 3(1): 14-25. https://doi.org/10.1109/JSTSP.2008.2011156

[10] Capdehourat, G., Corez, A., Bazzano, A., Alonso, R., Musé, P. (2011). Toward a combined tool to assist dermatologists in melanoma detection from dermoscopic images of pigmented skin lesions. Pattern Recognition Letters, 32(16): 2187-2196. https://doi.org/10.1016/j.patrec.2011.06.015

[11] Ramezani, M., Karimian, A., Moallem, P. (2014). Automatic detection of malignant melanoma using macroscopic images. Journal of Medical Signals and Sensors, 4(4): 281-290.

[12] Mendonça, T., Ferreira, P.M., Marques, J.S., Marcal, A.R., Rozeira, J. (2013). PH 2-A dermoscopic image database for research and benchmarking. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 5437-5440. https://doi.org/10.1109/EMBC.2013.6610779

[13] Codella, N.C., Gutman, D., Celebi, M.E., Helba, B., Marchetti, M.A., Dusza, S.W., Kalloo, A., Liopyris, K., Mishra, N., Kittler, H., Halpern, A. (2018). Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 168-172. https://doi.org/10.1109/ISBI.2018.8363547

[14] Tschandl, P., Rosendahl, C., Kittler, H. (2018). The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific Data, 5(1): 1-9. https://doi.org/10.1038/sdata.2018.161

[15] ISIC Challenge. Available online: https://challenge.isic-archive.com/data/, accessed on Oct. 29, 2021.

[16] ISIC 2018: Skin lesion analysis towards melanoma detection. https://arxiv.org/abs/1902.03368, accessed on May 9, 2021.

[17] Kassem, M.A., Hosny, K.M., Fouad, M.M. (2020). Skin lesions classification into eight classes for ISIC 2019 using deep convolutional neural network and transfer learning. IEEE Access, 8: 114822-114832. https://doi.org/10.1109/ACCESS.2020.3003890

[18] Nasr-Esfahani, E., Rafiei, S., Jafari, M.H., Karimi, N., Wrobel, J.S., Najarian, K., Samavi, S., Soroushmehr, S.M.R. (2017). Dense fully convolutional network for skin lesion segmentation. arXiv preprint arXiv:1712.10207. https://arxiv.org/pdf/1712.10207

[19] Sultana, N.N., Mandal, B., Puhan, N.B. (2018). Deep residual network with regularised fisher framework for detection of melanoma. IET Computer Vision, 12(8): 1096-1104. https://doi.org/10.1049/iet-cvi.2018.5238

[20] Goel, S. (2020). Dermnet-image data for 23 categories of skin diseases. Available online: https://www.kaggle.com/shubhamgoel27/, accessed on May 9, 2021.

[21] Bajwa, M.N., Muta, K., Malik, M.I., Siddiqui, S.A., Braun, S.A., Homey, B., Dengel, A., Ahmed, S. (2020). Computer-aided diagnosis of skin diseases using deep neural networks. Applied Sciences, 10(7): 2488. https://doi.org/10.3390/app10072488

[22] Krizhevsky, A., Sutskever, I., Hinton, G.E. (2012). Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.

[23] He, K., Zhang, X., Ren, S., Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778.

[24] Redmon, J., Divvala, S., Girshick, R., Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779-788. https://doi.org/10.48550/arXiv.1506.02640

[25] Leonardo, M.M., Carvalho, T.J., Rezende, E., Zucchi, R., Faria, F.A. (2018). Deep feature-based classifiers for fruit fly identification (Diptera: Tephritidae). In 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, pp. 41-47. https://doi.org/10.1109/SIBGRAPI.2018.00012

[26] Al-Masni, M.A., Kim, D.H., Kim, T.S. (2020). Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Computer Methods and Programs in Biomedicine, 190: 105351. https://doi.org/10.1016/j.cmpb.2020.105351

[27] Huang, W., Feng, J., Wang, H., Sun, L. (2020). A new architecture of densely connected convolutional networks for pan-sharpening. ISPRS International Journal of Geo-Information, 9(4): 242. https://doi.org/10.3390/ijgi9040242