Deep Neural Networks with Transfer Learning Model for Brain Tumors Classification

Premamayudu Bulla*, Lakshmipathi Anantha Subbarao Peram

Vignan’s Foundation for Science Technology and Research Deemed to be University, Guntur 522213, A.P., India

Malla Reddy Engineering College, Secunderabad 500100, Telangana, India

Corresponding Author Email: drbpm_it@vignan.ac.in

Page: 593-601 | DOI: https://doi.org/10.18280/ts.370407

Received: 8 July 2020 | Revised: 9 August 2020 | Accepted: 16 August 2020 | Available online: 10 October 2020

© 2020 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

To investigate the effect of deep neural networks with transfer learning on MR images for tumor classification, and to improve the classification metrics, we built image-level, stratified image-level, and patient-level models. Three thousand sixty-four T1-weighted magnetic resonance (MR) images from two hundred thirty-three patient cases of three brain tumor types (meningioma, glioma, and pituitary) were collected, covering coronal, sagittal, and axial views. The collected dataset averages fourteen brain images per patient across the three views. Classification is performed by a network cross-trained with a pre-trained InceptionV3 model. Three image-level models and one patient-level model are built on the MR imaging dataset. The models are evaluated with classification metrics such as accuracy, loss, precision, recall, kappa, and AUC, and are validated using four approaches: holdout validation, 10-fold cross-validation, stratified 10-fold cross-validation, and group 10-fold cross-validation. The generalization capability and improvement of the network are tested using cropped and uncropped images of the dataset. The best results are obtained with group 10-fold cross-validation (patient level) on the used dataset (accuracy = 99.82%). A deep neural network with transfer learning can thus be used to classify brain tumors from MR images, and our patient-level network model achieved the best classification accuracy.

Keywords: 

brain tumor, deep learning, InceptionV3, MR imaging, multi-class classification, transfer learning

1. Introduction

An effective decision-support system is essential for radiologists in medical diagnostics. According to the World Health Organization (WHO) [1], cancer is the second leading cause of death. Globally, cancer caused 9.6 million deaths in 2018, about 1 in 6 of all deaths. Death from cancer can be prevented with early detection. Brain cancer is more critical than other types of cancer, and its clinical diagnosis is difficult. In general, unlike cancer, a tumor can be benign, pre-cancerous, or malignant. Benign tumors can be removed surgically and do not affect other organs and tissues. The brain tumor types considered here are meningiomas, gliomas, and pituitary tumors. Meningiomas arise from the membranes that protect the brain and spinal cord, gliomas arise from brain tissue, and pituitary tumors are lumps that sit inside the skull in the pituitary gland area. Benign tumors do not spread to other tissues, cells, or organs of the body [2]. The difference between these three tumor types is that meningioma and pituitary tumors are benign, while glioma is malignant. One way to detect a brain tumor is to examine MRI images. An experienced radiologist can examine the MRI images and decide the type of brain tumor; this decision depends on the radiologist's experience and the available patient data. Because tumor identification depends on that experience, because observing large amounts of data is difficult for humans, and because a brain tumor biopsy requires brain surgery, it is important to develop a computer-based decision-making tool for tumor classification and segmentation from MR imaging [3].

Advanced machine learning models based on deep learning and neural networks can be optimized to perform at the edge [4-7]. These models provide useful assistance for many medical applications and for medical imaging, and several deep learning methods exist for classifying images and detecting regions in MR images. An advanced machine learning decision-support system can serve as a second opinion for radiologists before proceeding to a tumor biopsy.

Many researchers have implemented tumor segmentation and classification on various datasets that are publicly available on the internet. Most of these datasets are very small, and the implemented models have been evaluated only on such small datasets, even though advanced machine learning models based on deep learning and neural networks demand large volumes of data to perform optimally. Nevertheless, researchers claim good results with small numbers of input observations. In the literature, image classification, segmentation, and image analysis have been implemented with various advanced deep learning techniques and modified pre-trained networks, and different approaches have been developed on MR imaging datasets of brain tumors. These approaches are discussed in the Literature Review section of this paper. This paper focuses mainly on deep learning networks, in particular convolutional neural networks (CNN) with transfer learning evaluated on the same dataset used in this paper.

Convolutional neural network based models need large volumes of data to achieve optimal results in a given application area. For MR imaging, different planes (views) of the same patient are used to increase the dataset size. Further, data augmentation and pre-processing are often applied to the images before feeding them into CNN models [8-11]. The main advantage of the deep convolutional neural network (DCNN) is that data augmentation and pre-processing are not required: a DCNN can extract subject-specific features from the images without pre-processing techniques, whereas pre-processing procedures may take extra resources to perform segmentation and classification. This paper proposes a DCNN with transfer learning. The advantage of transfer learning is that the pre-trained model provides generalized features, which are combined with features specific to the MRI images.

The objective of this research is to investigate the effect of deep neural networks with transfer learning on MRI images for tumor classification, and to improve the classification metrics by building image-level, stratified image-level, and patient-level models. In addition, our results are compared with other works carried out on the same database of images using the same approaches. We tested the designed model on both pre-processed and original image databases, and we examined how the various validation strategies affect the performance metrics.

In this paper, we propose a DCNN-based transfer learning architecture (InceptionV3 pre-trained on ImageNet) for classifying three kinds of brain tumor (meningioma, glioma, and pituitary) from T1-weighted MRI images. The performance of the architecture is tested using holdout validation, 10-fold cross-validation, stratified 10-fold cross-validation, and group 10-fold cross-validation, each on both the cropped and the uncropped image dataset. Performance is evaluated with classification metrics such as accuracy, loss (error), precision, recall, kappa, and AUC. Finally, we present a comparison with state-of-the-art methods to discuss our results.

2. Literature Review

Several approaches to the classification of brain tumors using MRI images have been developed in recent years. Because we use transfer learning rather than basic machine learning architectures, we concentrate on DCNN-related literature. Cheng et al. [12] implemented tumor classification using an augmented tumor region of interest; they first presented the image database used in our research work. In their work, the augmented tumor region is split into increasingly fine ring-form subregions, and an accuracy of 91.28% was achieved for MRI tumor classification. Badža et al. [13] presented a new CNN-based model for classifying three brain tumor types. The performance of the network was evaluated with four approaches combining two 10-fold cross-validations and two databases (the original database and an augmented database), reaching an accuracy of 96.56% in tumor classification from MRI images. Hossain et al. [14] proposed a method to extract brain tumors from MRI images using the fuzzy C-means clustering algorithm followed by a CNN model; the work was implemented in Keras and TensorFlow and attained an accuracy of 97.87%. Özyurt et al. [15] presented a hybrid method using neutrosophy and a convolutional neural network (NS-CNN), in which brain tumor features are extracted using a CNN and classified using KNN and SVM; the network was evaluated with 5-fold cross-validation and achieved a classification accuracy of 95.62%. Kaur et al. [16] explored the capabilities of various pre-trained models with transfer learning for brain tumor images. The work was evaluated using holdout validation (a 60:40 training-testing ratio); the networks were tested on three databases and achieved accuracies of 100%, 94%, and 95.92% using a pre-trained AlexNet. Phaye et al. [17] proposed Dense Capsule Networks (DCNet) and Diverse Capsule Networks (DCNet++) that replace the CNN, leading to the learning of discriminative feature maps; DCNet achieves state-of-the-art accuracy of 99.75% on the MNIST dataset, and DCNet++ outperforms CapsNet on the SVHN dataset with an accuracy of 96.90%. Balasooriya et al. [18] proposed a CNN-based model for brain tumor type identification, claiming an accuracy of 99.68% for tumor recognition. Sobhaninia et al. [19] developed tumor segmentation using a CNN and obtained a Dice score of 0.79. Seetha and Raja [20] proposed automatic brain tumor detection using convolutional neural networks and achieved an accuracy of 97.5%. Sultan et al. [21] presented brain tumor classification using a deep learning based CNN on two MRI image databases, achieving overall accuracies of 96.13% and 98.7%. Afshar et al. [3] presented a modified CapsNet for brain tumor classification with access to the tissues surrounding the tumor, without distracting the network from its main target; coarse tumor boundaries were used as extra inputs to achieve this performance. Gumaei et al. [22] proposed accurate brain tumor classification with a hybrid feature extraction method and a regularized extreme learning machine; they evaluated the model with holdout validation and achieved an accuracy of 94.23%. Kurup et al. [23] analysed the effect of pre-processing on disease classification and showed that data pre-processing improves brain tumor classification from MRI images in CapsuleNet architectures.

In this research, we propose brain tumor classification using a deep convolutional neural network and achieve an accuracy of 99.82% on the MRI image database. We developed multi-class classification, which is a more challenging and complex task than binary classification. We adopted the T1-weighted MRI image database from figshare, first used by Cheng et al. in 2015 [12]. This database of MRI images is publicly available to everyone and is very small compared to general image datasets. Working with small image datasets in deep learning is challenging and leads to overfitting when networks are trained from scratch. To overcome these issues, we adopted transfer learning with pre-trained models trained on natural images (ImageNet). We selected the pre-trained InceptionV3 model for its generalization capability in extracting image features: InceptionV3 provides general image features, which we combine with MRI-specific features by adding the last layers of the network. The full details of the network are discussed in Section 3.2.3.

3. Proposed Methodology

3.1 Dataset

The T1-weighted MRI brain tumor dataset is publicly available to the research community at https://figshare.com/articles/brain_tumor_dataset/1512427. This image data was first used by Cheng et al. in 2015 [12] for tumor type classification. The MRI image dataset contains 2-D images of three brain tumor types (1. meningioma, 2. glioma, and 3. pituitary). In addition, the database provides three plane views (axial, coronal, and sagittal) of the three tumor types, as shown in Figure 1. As shown in Table 1, the dataset contains 3064 MRI images from 233 patients over all three views and three tumor types. The dimensions of each MRI image are 512x512 pixels. All statistical details of the MRI image dataset are given in Table 1.
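For reference, the sketch below shows one way to read a single slice with its label and patient ID; the patient ID is what later enables the patient-level folds. It assumes the figshare release of per-slice MATLAB v7.3 (.mat) files containing a cjdata struct, as described on the dataset page; the function name and decoding details are ours.

```python
import h5py
import numpy as np

def load_figshare_slice(path):
    """Load one slice from a figshare .mat file (MATLAB v7.3 / HDF5 format)."""
    with h5py.File(path, "r") as f:
        cj = f["cjdata"]
        image = np.asarray(cj["image"], dtype=np.float32)   # 512x512 T1-weighted slice
        label = int(np.asarray(cj["label"]).item())         # 1=meningioma, 2=glioma, 3=pituitary
        # The patient ID is stored as an array of character codes; decode it to a string.
        pid = "".join(chr(int(c)) for c in np.asarray(cj["PID"]).flatten())
    return image, label, pid
```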

Figure 1. The various types of tumors in different plane views

Figure 2. Uncropped images for tumor (a) meningioma (b) glioma (c) pituitary

Figure 3. Cropped images for tumor (a) meningioma (b) glioma (c) pituitary

Table 1. The details of the dataset (T1-weighted MRI images)

| Type of Tumor | MRI View | Number of MR images | Number of patients |
|---------------|----------|---------------------|--------------------|
| Meningioma    | Axial    | 209                 | 82                 |
|               | Coronal  | 268                 |                    |
|               | Sagittal | 231                 |                    |
| Glioma        | Axial    | 494                 | 89                 |
|               | Coronal  | 437                 |                    |
|               | Sagittal | 495                 |                    |
| Pituitary     | Axial    | 291                 | 62                 |
|               | Coronal  | 319                 |                    |
|               | Sagittal | 320                 |                    |
| Total         |          | 3064                | 233                |

3.2 Methodology

3.2.1 Pre-processing

We generated two datasets from the original image dataset: one with a cropping operation around the brain region in the MRI image and one without cropping. The difference between the two datasets is shown in Figures 2 and 3. Both datasets were normalized and resized to 256x256 pixels from the original size of 512x512 pixels. We tested both image sizes at the input layer of the network: the 512x512 dataset showed no noticeable improvement in classification accuracy and consumed more memory and processing time than the 256x256 dataset.
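As a concrete illustration of this step, the following sketch normalizes a slice and resizes it to 256x256; replicating the grayscale slice to three channels is our assumption, since InceptionV3 expects three-channel input and the paper does not state how this is handled.

```python
import numpy as np
import tensorflow as tf

def preprocess_slice(image):
    """Min-max normalize a 512x512 slice and resize it to 256x256 pixels."""
    image = np.asarray(image, dtype=np.float32)
    lo, hi = image.min(), image.max()
    image = (image - lo) / (hi - lo + 1e-8)             # normalize intensities to [0, 1]
    image = image[..., np.newaxis]                      # add a channel axis: (512, 512, 1)
    image = tf.image.resize(image, (256, 256)).numpy()  # downsample to 256x256
    return np.repeat(image, 3, axis=-1)                 # replicate gray channel to 3 (assumption)
```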

3.2.2 Convolutional Neural Networks

CNNs, or ConvNets, are widely used for image classification, image recognition, and object detection. Deep learning CNN (DCNN) models contain a series of layers with filters that perform feature extraction and dimensionality reduction. Technically, a DCNN passes an input through a series of convolution layers with kernels (filters) to classify an object, producing a probability value between 0 and 1 for a given input. In general, DCNN architectures contain a series of convolution layers with filters, pooling, dropout, ReLU, and fully connected layers, followed by a softmax or sigmoid function to perform classification; the sigmoid function is used for binary classification, and the softmax function for multi-class classification problems. CNN models perform automatic feature detection and extraction with high performance.

In this research, we used a DCNN with transfer learning from a pre-trained InceptionV3. The mechanism of transfer learning is briefly illustrated in Figure 4. The InceptionV3 model contains 311 layers and 16,036,416 trained weights. Such a model needs a large dataset for training and optimal results, but large datasets are difficult to obtain in medical applications, and small datasets usually suffer from overfitting. Hence, the weights of the model are initialized from the pre-trained InceptionV3 model and transferred for fine-tuning.

Three versions of the Inception deep convolutional architecture were developed by Szegedy et al. [24]. InceptionV1 is also called GoogLeNet. InceptionV1 was refined using batch normalization to decrease computational time and error rate, and was named InceptionV2; factorization was then added to the second version, producing InceptionV3. The architecture of InceptionV3 is shown in Figure 5. InceptionV3 focuses on using less computational power than previous versions, adopting several techniques to reduce computational cost: factorized convolutions, smaller convolutions, asymmetric convolutions, an auxiliary classifier, and grid size reduction [24].

Figure 4. Deep Neural Network with transfer learning

Figure 5. InceptionV3 architecture

Figure 6. Illustration of transfer learning-based model

3.2.3 Proposed CNN Architecture Model

The proposed CNN architecture was developed in Keras and TensorFlow. The network consists of the pre-trained InceptionV3 model trained on ImageNet (311 layers), a fully connected layer with 256 neurons (ReLU activation), a dropout layer with a 20% rate, and a softmax classification layer as the output. The pre-trained model extracts general image features, which feed the fully connected layer; this extraction gives a good discriminative representation of the trained images. These general image features are combined with MRI image features in the fully connected layer. A 20% dropout is then applied to the ReLU activations of the previous layer. In the last layer, the number of hidden units equals the number of brain tumor classes. A schematic representation of the proposed CNN architecture is shown in Figure 6. The last layer of InceptionV3 is composed of 1000 hidden units corresponding to the classes in the ImageNet dataset; it is therefore replaced with a layer of 3 hidden units corresponding to the classes in the MRI image dataset.

Transfer learning has become increasingly popular, as it greatly decreases training time and needs much less data to reach good performance. We used all the layers in the pre-trained model except the last fully connected layer, which is specific to the ImageNet contest. The early layers of pre-trained models contain generic features, while the last layers contain domain-specific features. We froze all layers of the pre-trained model so that only the newly added layers learn MRI tumor specific features. The loss metric and optimizer of the model are categorical cross-entropy and RMSprop with a learning rate of 0.0001, respectively. Before being passed to the model, images were resized to 256x256 pixels from the original size (512x512 pixels). We fine-tuned the network with the above hyperparameters and achieved an accuracy of 98.82% on the selected dataset.
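A minimal Keras sketch of this architecture is given below. The layer sizes, dropout rate, loss, optimizer, and learning rate follow the description above; the use of global average pooling to flatten the backbone output is our assumption, as the paper does not specify how features reach the fully connected layer.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_transfer_model(num_classes=3, input_shape=(256, 256, 3)):
    # Pre-trained InceptionV3 backbone without its 1000-unit ImageNet classifier;
    # pooling="avg" flattens the backbone output (our choice).
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False,
        input_shape=input_shape, pooling="avg")
    base.trainable = False  # freeze all pre-trained layers

    model = models.Sequential([
        base,
        layers.Dense(256, activation="relu"),             # fully connected layer, 256 neurons
        layers.Dropout(0.2),                              # 20% dropout
        layers.Dense(num_classes, activation="softmax"),  # 3 tumor classes
    ])
    model.compile(
        optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
        loss="categorical_crossentropy",
        metrics=["accuracy"])
    return model
```

With the training settings reported in Section 4.1 (mini-batch size 20, 25 epochs), training would look like `model.fit(x_train, y_train, batch_size=20, epochs=25, validation_data=(x_val, y_val))`, where the variable names are illustrative.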

3.2.4 Performance evaluation

Classification is one of the most widely used machine learning tasks, with applications such as face recognition, YouTube video categorization, content moderation, medical diagnosis, text classification, and hate speech detection on Twitter. Popular classification models include logistic regression, decision trees, support vector machines (SVM), random forests, convolutional neural networks, and recurrent neural networks. There are various ways to evaluate a classification model. Classification accuracy is interrelated with sensitivity and specificity, which use the terms true positive (TP), true negative (TN), false negative (FN), and false positive (FP). One of the important concepts in classification performance is the confusion matrix, a tabular visualization of model predictions against truth labels. In the confusion matrix, rows represent the predicted class and columns represent the actual class. The diagonal elements of the confusion matrix denote correct predictions and the off-diagonal elements denote misclassifications. The dimension of the confusion matrix is the number of classes by the number of classes. The terms TP, TN, FN, and FP are derived from the elements of the confusion matrix and are used to calculate accuracy, precision, recall, sensitivity, and specificity to estimate classification performance.

Classification accuracy: The primary performance evaluation metric for classification is accuracy: the number of correct predictions divided by the total number of predictions, often expressed as a percentage.

Accuracy$=\frac{(T P)+(T N)}{(T P)+(F P)+(T N)+(F N)}$                  (1)

Precision: If the dataset contains imbalanced classes, accuracy alone is not a good indicator of model performance. In that case, a model that predicts every sample as the majority class attains a high accuracy while learning nothing useful. Hence, a class-specific performance metric is required for validation. Precision is one such metric, defined as:

$Precision=\frac{\text {True Positive }(T P)}{\text {True Positive }(T P)+\text {False Positive }(F P)}$              (2)

The above formula is applied for each class of the model to validate its performance. When the precisions of all classes are almost the same, it can be inferred that the model has been trained equally well for all classes.

Recall: Another essential metric, defined as the fraction of observation points from a class that the model predicts correctly.

$Recall=\frac{\text {True Positive }(T P)}{\text {True positive }(T P)+\text {False Negative }(F N)}$                (3)

F1-Score: It is often useful to combine precision and recall into a single metric, their harmonic mean. The F1-score is defined as:

$\text{F1-Score}=\frac{2 \times \text{precision} \times \text{recall}}{\text{precision}+\text{recall}}$              (4)

Kappa: Cohen's kappa measures how much better the model performs than a random classifier that predicts according to the class frequencies.

Kappa $=\frac{\text { Accuracy }-\text { randomAccuracy }}{1-\text { randomAccuracy }}$              (5)

$randomAccuracy=\frac{(\mathrm{TN}+\mathrm{FP}) \times(\mathrm{TN}+\mathrm{FN})+(\mathrm{FN}+\mathrm{TP}) \times(\mathrm{FP}+\mathrm{TP})}{\text { Total } \times \text { Total }}$            (6)
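As a small worked example of Eqs. (1), (5), and (6), consider a hypothetical binary confusion matrix (the counts below are illustrative, not from the paper):

```python
# Hypothetical binary confusion matrix counts.
TN, FP, FN, TP = 50, 5, 10, 35
total = TN + FP + FN + TP  # 100

accuracy = (TP + TN) / total  # Eq. (1): 0.85
random_accuracy = ((TN + FP) * (TN + FN)
                   + (FN + TP) * (FP + TP)) / total**2  # Eq. (6): 0.51
kappa = (accuracy - random_accuracy) / (1 - random_accuracy)  # Eq. (5): ~0.694
print(f"accuracy={accuracy:.2f}, randomAccuracy={random_accuracy:.2f}, kappa={kappa:.3f}")
```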

The area under the curve (AUC): An aggregate measure of a binary classifier's performance over all possible threshold values. AUC is the area under the receiver operating characteristic (ROC) curve, a plot that shows the performance of a binary classifier across its cut-off thresholds. AUC can be interpreted as the probability that the model ranks a random positive observation higher than a random negative observation; a higher AUC indicates a better model.
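All of the above metrics can be computed from model outputs as in the sketch below; since the paper does not state how per-class precision, recall, and AUC are aggregated for the three-class problem, the macro and one-vs-rest averaging used here is an assumption.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score, roc_auc_score)

def classification_metrics(y_true, y_prob):
    """y_true: integer class labels; y_prob: (n_samples, n_classes) softmax outputs."""
    y_pred = np.argmax(y_prob, axis=1)
    return {
        "accuracy": accuracy_score(y_true, y_pred),                      # Eq. (1)
        "precision": precision_score(y_true, y_pred, average="macro"),   # Eq. (2), class-averaged
        "recall": recall_score(y_true, y_pred, average="macro"),         # Eq. (3), class-averaged
        "f1": f1_score(y_true, y_pred, average="macro"),                 # Eq. (4)
        "kappa": cohen_kappa_score(y_true, y_pred),                      # Eq. (5)
        # One-vs-rest extension of AUC to the multi-class case (assumption).
        "auc": roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"),
    }
```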

4. Results Analysis

This section presents brief information about the experimental setup and the obtained results. The experimental setup covers the model training details and the software platform used in the present work.

4.1 Experimental setup

We applied four types of validation on the two datasets (cropped and uncropped) with the proposed pre-trained model with transfer learning; the validation types are listed in Tables 2 and 3. In holdout validation, the dataset was partitioned in a 70:30 ratio: 70 percent of the dataset was used for training and the remaining 30 percent for testing. 10-fold cross-validation was performed in three variations: image level, stratified image level, and patient level (as sketched below). Further, a comparison with the current state-of-the-art research is presented in the Discussion section.
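The four schemes map directly onto standard scikit-learn splitters, as in the sketch below; `patient_ids` (one patient identifier per image) is an illustrative name, and the seed is arbitrary.

```python
import numpy as np
from sklearn.model_selection import (train_test_split, KFold,
                                     StratifiedKFold, GroupKFold)

def validation_splits(X, y, patient_ids, seed=42):
    """Yield (scheme, train_idx, test_idx) for the four validation schemes."""
    # 1) Holdout validation: 70:30 train/test partition.
    tr, te = train_test_split(np.arange(len(y)), test_size=0.30, random_state=seed)
    yield "holdout", tr, te
    # 2) Plain 10-fold cross-validation at the image level.
    for tr, te in KFold(n_splits=10, shuffle=True, random_state=seed).split(X):
        yield "10-fold (image level)", tr, te
    # 3) Stratified 10-fold preserves the tumor-class proportions in every fold.
    for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=seed).split(X, y):
        yield "stratified 10-fold (image level)", tr, te
    # 4) Group 10-fold keeps all slices of one patient in a single fold,
    #    so no patient appears in both training and test sets (patient level).
    for tr, te in GroupKFold(n_splits=10).split(X, y, groups=patient_ids):
        yield "group 10-fold (patient level)", tr, te
```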

The proposed model was developed with Keras on the TensorFlow platform and run in the Colab Pro environment. Google Colab is a valuable resource for data scientists and AI researchers to share work online: users can write and execute Python in the browser with zero configuration, free GPU access, and easy sharing through the familiar Google Drive interface. The transferred models were trained with the categorical cross-entropy loss and the RMSprop optimizer with a learning rate of 0.0001. The mini-batch size was 20 and the maximum number of epochs was 25. We validated the designed model with the same hyperparameters on both datasets under the four validation types.

4.2 Results

The results of our model are summarized in Table 2, Table 3, Figure 7, and Figure 8. Table 2 shows the performance metrics of the designed transfer model for the four validation types on the cropped dataset; Table 3 shows the corresponding metrics on the uncropped dataset. These results indicate that the uncropped dataset with group 10-fold patient-level cross-validation achieves the best results over all other combinations: the attained values of accuracy, precision, recall, F1-score, kappa, and AUC are 99.82, 97.57, 99.47, 98.40, 94.60, and 99.50, respectively. The second-best combination is uncropped stratified 10-fold image-level cross-validation, whose values are shown in the third row of Table 3. On the figshare dataset, precision and recall are slightly better for uncropped image-level 10-fold cross-validation and stratified 10-fold cross-validation. The performance metrics indicate the benefit of transfer learning in reducing overfitting and increasing convergence speed. We can state that DCNN-based transfer learning models may not require image cropping to classify tumors.

Figures 7 and 8 show the performance metrics of the four types of validation on the cropped and uncropped datasets. Figure 7 (a1, a2, a3) presents the holdout validation confusion matrix, training vs. validation accuracy, and training vs. validation loss for the cropped dataset. Similarly, Figures 7(b1, b2, b3), 7(c1, c2, c3), and 7(d1, d2, d3) present the corresponding training progress curves for 10-fold, stratified 10-fold, and group 10-fold cross-validation on the cropped dataset. The training and validation curves for accuracy and loss clearly show that the model does not overfit in any of the four validation types. Figures 8(a1, a2, a3), 8(b1, b2, b3), 8(c1, c2, c3), and 8(d1, d2, d3) present the training and validation progress curves for the classification performance metrics on the uncropped MRI image dataset.

Table 2. Performance metrics from 4 types of validation on the cropped images dataset

| Validation Type                  | Accuracy | Precision | Recall | F1-Score | Kappa | AUC   |
|----------------------------------|----------|-----------|--------|----------|-------|-------|
| Hold out                         | 95.07    | 91.53     | 92.05  | 91.77    | 88.38 | 98.89 |
| 10-fold (Image Level)            | 99.10    | 98.85     | 98.38  | 98.57    | 97.92 | 99.88 |
| Stratified 10-fold (Image Level) | 99.23    | 98.60     | 98.82  | 98.68    | 98.20 | 99.84 |
| Group 10-fold (Patient Level)    | 99.27    | 98.67     | 99.53  | 99.07    | 99.70 | 99.70 |
Table 3. Performance metrics from 4 types of validation on the uncropped images dataset

| Validation Type                  | Accuracy | Precision | Recall | F1-Score | Kappa | AUC   |
|----------------------------------|----------|-----------|--------|----------|-------|-------|
| Hold out                         | 96.7     | 95.70     | 93.98  | 94.69    | 92.43 | 99.55 |
| 10-fold (Image Level)            | 99.10    | 98.11     | 98.71  | 98.33    | 97.85 | 99.82 |
| Stratified 10-fold (Image Level) | 99.32    | 98.95     | 98.87  | 98.88    | 98.42 | 99.85 |
| Group 10-fold (Patient Level)    | 99.82    | 97.57     | 99.47  | 98.40    | 94.60 | 99.50 |

 

Figure 7. Proposed model performance metrics, accuracy and loss history for cropped images: (a1, a2, a3) holdout validation; (b1, b2, b3) 10-fold cross-validation (image level); (c1, c2, c3) stratified 10-fold cross-validation (image level); (d1, d2, d3) group 10-fold cross-validation (patient level)
 

Figure 8. Proposed model performance metrics, accuracy and loss history for uncropped images: (a1, a2, a3) holdout validation; (b1, b2, b3) 10-fold cross-validation (image level); (c1, c2, c3) stratified 10-fold cross-validation (image level); (d1, d2, d3) group 10-fold cross-validation (patient level)

5. Discussion

Table 4. Comparison of results with existing state-of-the-art works using the Figshare dataset

| Existing Reference           | Accuracy | Precision | Recall | F1-Score | AUC   |
|------------------------------|----------|-----------|--------|----------|-------|
| Afshar et al. [3]            | 86.56    | -         | -      | -        | -     |
| Phaye et al. [17]            | 95.03    | -         | -      | -        | -     |
| Sultan et al. [21]           | 96.13    | 96.06     | 94.43  | -        | -     |
| Gumaei et al. [22]           | 92.16    | -         | -      | -        | -     |
| Pashaei et al. [25]          | 93.68    | 94.60     | 91.43  | 93.00    | -     |
| Proposed (cropped dataset)   | 99.27    | 98.67     | 99.53  | 99.07    | 99.70 |
| Proposed (uncropped dataset) | 99.82    | 97.57     | 99.47  | 98.40    | 99.50 |

This section compares the best-performing transferred DCNN models with existing state-of-the-art works on the same dataset for brain tumor classification. To compare our findings with those of previous studies, we selected only papers that built a DCNN-based neural network, used whole images as classification inputs, and checked their networks using holdout and k-fold cross-validation methods, as shown in Table 4. As in previous studies, performance is recorded in terms of overall accuracy: the best-performing transferred model achieves accuracies of 96.70% for holdout validation and 99.82% for 10-fold cross-validation, both higher than current state-of-the-art systems.
6. Conclusion

We compared our model with various DCNN models focused on the classification of MR brain images. A very high classification standard was reached by our approach of using the pre-trained InceptionV3 DCNN model. We have shown that no pre-processing or segmentation of the tumors was needed. In addition, the network has a very good execution speed of 15 s per epoch. To validate the network, we used both cropped and uncropped datasets with holdout validation (70:30 ratio) and 10-fold cross-validation at the image level and patient level. Among the four evaluated models, the patient-level model increased the classification accuracy dramatically. With a classification accuracy of 99.82% on the figshare MRI dataset, our model proves to be the best among the compared approaches.

Future work will focus on other approaches to database augmentation and other ways of pre-processing the data to further improve the generalization capability of the network. We also want to tune the hyperparameters of the network architecture for tumor classification. Further, the model could be used during brain operations for classifying and accurately locating the tumor, so that tumor detection in the operating room can be performed in real time and under real-world conditions. In the future, we want to apply the designed DCNN architecture to various medical images and assess the adaptability of the model to real-world scenarios.

References

[1] Alwan, A. (2007). World health organization-cancer. Disaster Medicine and Public Health Preparedness, 1(1): 7-8. https://doi.org/10.1097/DMP.0b013e3180676d32

[2] Yamamoto, H., Nishi, S., Tomo, T., Masakane, I., Saito, K., Nangaku, M., Hattori, M., Suzuki, T., Morita, S., Ashida, A., Ito, Y., Kuragano, T., Komatsu, Y., Sakai, K., Tsubakihara, Y., Tsuruya, K., Hayashi, T., Hirakata, H., Honda, H. (2015). Japanese society for dialysis therapy: Guidelines for renal anemia in chronic kidney disease. Renal Replace Therapy, 3: 36. https://doi.org/10.1186/s41100-017-0114-y

[3] Afshar, P., Mohammadi, A., Plataniotis, K.N. (2018). Capsule networks for brain tumor classification based on MRI images and coarse tumor boundaries. 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, pp. 3129-3133. https://doi.org/10.1109/ICIP.2018.8451379

[4] Dondeti, V., Bodapati, J.D., Shareef, S.N., Naralasetti, V. (2020). Deep convolution features in non-linear embedding space for fundus image classification. Revue d'Intelligence Artificielle, 34(3): 307-313. https://doi.org/10.18280/ria.340308

[5] Bodapati, J.D. (2020). Blended multi-modal deep convnet features for diabetic retinopathy severity prediction. Electronics, 9(6): 1-16. https://doi.org/10.3390/electronics9060914

[6] Bodapati, J.D., Veeranjaneyulu, N. (2019). Facial emotion recognition using deep CNN based features. Int. J. Innov. Technol. Explor. Eng., 8(7): 1928-1931.

[7] Wajeed, M.A., Sreenivasulu, V. (2019). Image based tumor cells identification using convolutional neural network and auto encoders. Traitement du Signal, 36(5): 445-453. https://doi.org/10.18280/ts.360510

[8] Renard, F., Guedria, S., De Palma, N., Vuillerme, N. (2020). Variability and reproducibility in deep learning for medical image segmentation. Scientific Reports, 10(1): 1-16. https://doi.org/10.1038/s41598-020-69920-0

[9] Tang, C., Zhu, Q., Wu, W., Huang, W., Hong, C., Niu, X. (2020). PLANET: Improved convolutional neural networks with image enhancement for image classification. Mathematical Problems in Engineering, 2020: 1-10. https://doi.org/10.1155/2020/1245924

[10] Pashaei, M., Kamangir, H., Starek, M.J., Tissot, P. (2020). Review and evaluation of deep learning architectures for efficient land cover mapping with UAS hyper-spatial imagery: A case study over a wetland. Remote Sensing, 12(6): 959. https://doi.org/10.3390/rs12060959

[11] Praveenkumar, K., Padmaja, T.M. (2019). Computational analysis of differences in Indian and American poetry. In: Chaki N., Devarakonda N., Sarkar A., Debnath N. (eds) Proceedings of International Conference on Computational Intelligence and Data Engineering. Lecture Notes on Data Engineering and Communications Technologies, 28: 115-126. https://doi.org/10.1007/978-981-13-6459-4_13

[12] Cheng, J., Huang, W., Cao, S.L., Yang, R., Yun, Z.Q., Wang, Z.J., Feng, Q.J. (2015). Enhanced performance of brain tumor classification via tumor region augmentation and partition. PLoS One, 10(10): 1-13. https://doi.org/10.1371/journal.pone.0140381

[13] Badža, M.M., Barjaktarović, M.C. (2020). Classification of brain tumors from MRI images using a convolutional neural network. Applied Science, 10(6): 1999. https://doi.org/10.3390/app10061999

[14] Hossain, T., Shishir, F.S., Ashraf, M., Al Nasim, M.A., Muhammad Shah, F. (2019). Brain tumor detection using convolutional neural network. 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, pp. 1-6. https://doi.org/10.1109/ICASERT.2019.8934561

[15] Özyurt, F., Sert, E., Avci, E., Dogantekin, E. (2019). Brain tumor detection based on Convolutional Neural Network with neutrosophic expert maximum fuzzy sure entropy. Measurement, 147: 106830. https://doi.org/10.1016/j.measurement.2019.07.058

[16] Kaur, T., Gandhi, T.K. (2020). Deep convolutional neural networks with transfer learning for automated brain image classification. Machine Vision and Applications, 31(3): 1-16. https://doi.org/10.1007/s00138-020-01069-2

[17] Phaye, S.S.R., Sikka, A., Dhall, A., Bathula, D. (2018). Dense and diverse capsule networks: Making the capsules learn better. Computer Vision and Pattern Recognition, 1-11. Available: http://arxiv.org/abs/1805.04001

[18] Balasooriya, N.M., Nawarathna, R.D. (2018). A sophisticated convolutional neural network model for brain tumor classification. 2017 IEEE International Conference on Industrial and Information Systems (ICIIS), Peradeniya, pp. 1-5. https://doi.org/10.1109/ICIINFS.2017.8300364

[19] Sobhaninia, Z., Rezaei, S., Noroozi, A., Ahmadi, M., Zarrabi, H., Karimi, N., Emami, A., Samavi, S. (2018). Brain tumor segmentation using deep learning by type specific sorting of images. Available: http://arxiv.org/abs/1809.07786

[20] Seetha, J., Raja, S.S. (2018). Brain tumor classification using Convolutional Neural Networks. Biomed. Pharmacol. J., 11(3): 1457-1461. https://doi.org/10.13005/bpj/1511

[21] Sultan, H.H., Salem, N.M., Al-Atabany, W. (2019). Multi-classification of brain tumor images using deep neural network. IEEE Access, 7: 69215-69225. https://doi.org/10.1109/ACCESS.2019.2919122

[22] Gumaei, A., Hassan, M.M., Hassan, M.R., Alelaiwi, A., Fortino, G. (2019). A hybrid feature extraction method with regularized extreme learning machine for brain tumor classification. IEEE Access, 7: 36266-36273. https://doi.org/10.1109/ACCESS.2019.2904145

[23] Kurup, R.V., Sowmya, V., Soman, K.P. (2020). Effect of data pre-processing on brain tumor classification using capsulenet. In: Gunjan V., Garcia Diaz V., Cardona M., Solanki V., Sunitha K. (eds) ICICCT 2019 – System Reliability, Quality Control, Safety, Maintenance and Management. ICICCT 2019, pp. 110-119. https://doi.org/10.1007/978-981-13-8461-5_13

[24] Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z. (2016). Rethinking the inception architecture for computer vision. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, pp. 2818-2826. https://doi.org/10.1109/CVPR.2016.308

[25] Pashaei, A., Sajedi, H., Jazayeri, N. (2018). Brain tumor classification via convolutional neural network and extreme learning machines. 2018 8th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, pp. 314-319. https://doi.org/10.1109/ICCKE.2018.8566571