A Novel Fusion Approach to Detect Brain Tumor Using Machine Learning for MRI Images


Srisabarimani Kaliannan, Arthi Rengaraj, Alex Prabhu Daniel

ECE, SRM Institute of Science and Technology, Ramapuram, Chennai 600089, Tamil Nadu, India

Chettinad Hospital and Research Institute, Kelambakkam, Chennai 603103, Tamil Nadu, India

Corresponding Author Email: arthir2@srmist.edu.in

Pages: 1363-1370 | DOI: https://doi.org/10.18280/ts.390430

Received: 25 May 2022 | Revised: 15 July 2022 | Accepted: 23 July 2022 | Available online: 31 August 2022

OPEN ACCESS

Abstract: 

Medical imaging challenges researchers with the variability in structure, position, and appearance of tumors across patients. The proposed work presents an effective watershed-based brain tumor segmentation technique applied to 2D images, followed by statistical feature extraction. Machine learning models such as SVM, KNN, and XGBoost were used to classify tumor presence. The proposed segmentation algorithm has been tested and evaluated on original images comprising an aggregate of 52 MRI volumes from distinct patients, with and without tumors of differing structure, and it obtains results close to manual segmentation. The novelty of the proposed work lies in the fusion approach, which classifies whether a tumor is present with an accuracy of approximately 98%.

Keywords: 

brain tumor, XGBoost algorithm, segmentation algorithm, machine learning

1. Introduction

The human brain acts as the body's command centre. It is the highest control centre in the body, responsible for executing movements of every kind through an enormous number of connections and countless neurons. The hard skull bone protects the brain, so problems arise when any abnormality develops within this restricted space. A brain tumor is perhaps the most serious such illness; it is caused by abnormal growth of cells in the cerebrum and affects the functions of the nervous system.

Both malignant (cancerous) and benign (non-cancerous) tumors can occur within the brain. Malignant brain tumors are cancers that typically grow faster than benign tumors and quickly infiltrate the surrounding structures. As either type of tumor grows, the pressure inside the skull may rise; the increasing pressure within the brain may push the cerebellar tonsils downwards, which is life-threatening. Brain tumors are classified as primary, which begin in the brain itself, or secondary (metastatic), which spread from another organ such as the lung or breast. Headaches, vision difficulties, seizures, memory loss, personality changes, trouble concentrating, loss of coordination, speech problems, loss of balance, and mood swings are among the physical symptoms people with a brain tumor may suffer.

MRI is a non-invasive technique for diagnosing problems that occur in the brain. Its popularity stems from the fact that it uses no ionising radiation during the scan, combined with its unrivalled soft-tissue resolution and its ability to acquire multiple images using different imaging parameters or contrast-enhancing agents. MRI provides extensive information about the brain and can also pinpoint problems in the brainstem and cerebellum. Brain MRI is represented in three orientations: axial, sagittal, and coronal.

MRI is performed with several acquisitions of different weightings: T1-weighted, T2-weighted, FLAIR (Fluid Attenuated Inversion Recovery), T2 gradient-echo, diffusion, and T1 after gadolinium injection. T1-weighted scans are used to view anatomy in three planes: axial, coronal, and sagittal. Axial images of the head run from the neck to the top of the head, coronal images run from the tip of the nose to the back of the head, and sagittal images run from one ear to the other. T1-weighted images are acquired with short TE and TR values, and the contrast and brightness of the image are determined mainly by the T1 properties of tissue. T2-weighted images, on the other hand, are acquired with longer TE and TR times. In T1 sequences, grey matter appears darker than white matter. MRI is the most common neuro-imaging protocol used in treatment to reveal details of disease, as it can provide detailed pictures of the brain. Precise segmentation of brain tumors from MR images would be of great potential value for improved diagnosis, growth-rate prediction, and treatment planning.

Machine Learning (ML) algorithms have been broadly adopted in the medical imaging field as a part of Artificial Intelligence. They can be separated into two principal categories, supervised and unsupervised. In supervised methods, an algorithm learns a mapping from input variables to their associated output labels in order to predict the labels of new subjects. The essential objective is to learn intrinsic patterns in the training data using algorithms such as Artificial Neural Networks (ANN) [1], Support Vector Machines (SVM), and K-Nearest Neighbors (KNN).

Image fusion can be divided into single-mode (mono-modality) fusion, which refers to fusing images of the same imaging modality, for example CT-CT or MR-MR fusion, and multimode (multi-modality) fusion, which combines images of different modalities, for example CT with MRI or CT with PET.

2. Literature Review

Gumaei et al. developed a feature extraction strategy using a Regularized Extreme Learning Machine (RELM) [1] for classifying various kinds of brain tumors. The images are processed with principal component analysis based on the covariance matrix obtained from the images using a hybrid feature extraction method. Accuracies of 91 to 94% were obtained with this approach.

Sun et al. evaluated feature selection approaches and classifiers that learn from data for brain tumor examination [2]. Features are selected using ten-fold cross validation and percentage-split modes. The area under the curve was measured, along with other metrics, for the different features used in radiomics-based tumor prediction.

Khan et al. present a thorough examination of machine learning approaches and deep learning models for classifying images of various sorts, such as CT, ultrasound, MRI, and X-ray [3]. Florimbi et al. classify brain images on a multi-GPU platform using the Principal Component Analysis (PCA) algorithm, Support Vector Machine (SVM), and K-Nearest Neighbors (KNN) in real time, improving the classification accuracy [4].

Bahadure et al. used the Berkeley wavelet transformation (BWT) for brain tumor segmentation and SVM-based classification [5], achieving 96% accuracy, 94% specificity, and 98% sensitivity on brain images. Kabir et al. introduced a five-step system for detecting and extracting features from brain MRI images [6]. To remove undesired artifacts, the input MRI image is preprocessed using a principal-component-based grayscale conversion and an anisotropic diffusion filter. Contrast limited adaptive histogram equalization (CLAHE) is then used to enhance the image contrast. The tumor is subsequently segmented using the Chan-Vese algorithm and multivariate thresholding. Statistical, texture, and wavelet features are computed to characterise the segmented objects. Finally, a genetic algorithm is used to select suitable features, and an artificial neural network is employed to classify the segmented object.

Saman and Jamjala Narayanan surveyed segmentation and feature extraction for brain tumor MR images and discussed various preprocessing, segmentation, and feature extraction methods [7].

Ma et al. developed automated segmentation with concatenated and connected random forests (ccRFs) for inferring glioma structure; this approach combines random forests and active contour models [8]. Finally, using sparse representation techniques, a novel multiscale patch-driven active contour (mpAC) model is used to refine the inferred structure.

Ghaffari et al. presented a survey based on the BraTS 2012-2018 challenges [9]. The goal was to examine automated brain tumor segmentation models that use multimodal MR images. The BraTS tasks from 2012 to 2018 are reviewed, along with the state-of-the-art automated models submitted each year.

Tang et al. developed a multi-atlas segmentation (MAS) approach that segments a new brain image by collecting and combining label data from a set of typical brain atlases [10]. The MAS framework first uses a new low-rank method to recover a normal-appearing brain image from an MR brain tumor image, exploiting information from normal brains. Then, disregarding the tumor, the normal brain information is used to build a reconstructed image in the second stage. These two procedures are repeated iteratively until convergence is achieved, yielding the final segmentation of the brain tumor.

Zhou et al. proposed a new approach for segmenting brain tumors with missing modalities [11]. Because the multiple modalities are highly correlated, a correlation model is built to accurately represent the latent multi-source correlation.

Wu et al. developed a patch-based sparse representation approach to extract features using a training dictionary [12]. Iterative sparse-representation-based feature selection is used to choose several feature representations. Leave-one-out cross-validation (LOOCV) is used to evaluate candidate weight subsets, and multi-feature collaborative classification is wrapped around the weight-training structure.

Venkatachalam et al. used the Gabor Walsh-Hadamard Transform in a Content-Based Medical Image Retrieval (CBMIR) method to retrieve brain-tumor-affected images from a large database [13]. First, multiple filtering approaches are applied to eliminate noise from the MRI pictures. Next, a feature extraction scheme that combines Gabor filtering with the Walsh-Hadamard transform (WHT) is used to determine typical characteristics from the MRI images. Finally, Fuzzy C-Means clustering with the Minkowski distance metric is used to retrieve accurate and dependable images; this metric measures the similarity between the query image and the catalogue images.

Arthi et al. explained various segmentation, detection, and classification models [14]. Alhassan and Zainon used a modern learning-based technique, the Bat Algorithm with Fuzzy C-Ordered Means (BAFCOM) clustering algorithm, to segment the tumour automatically, and developed a five-step algorithm for detection and feature extraction from brain MRI images [15]. In BAFCOM's clustering process, the Bat algorithm determines the initial centroids and the distances between pixels, and the tumour is extracted by computing the distance between tumour and non-tumour Regions of Interest (RoI). An Enhanced Capsule Network (ECN) is then used to classify the MRI image as normal or containing a brain tumor.

Arthi et al. discussed various methods used for segmentation of images [16].

Chaddad et al. proposed a model using multimodal MRI characteristics to identify the gene status and predict the survival of patients with low-grade glioma (LGG). For LGG tumors, radiomic analysis is used with a new class of fine-grained texture features derived from a joint intensity matrix for extracting features from multimodal images [17].

Chaddad et al. introduced a new set of image texture features, joint intensity matrices (JIMs), which generalize traditional grey-level co-occurrence matrices (GLCM) to multimodal image data [18]. These are used to forecast the survival of glioblastoma multiforme (GBM) patients from multimodal MRI data. Sri Sabarimani and Arthi discussed different segmentation and classification methods for MRI brain tumor images [19].

Lather et al. presented the different techniques available for brain tumor segmentation and detection [20].

Huang et al. treat brain tumor segmentation as a classification problem [21]. The local independent projection-based classification (LIPC) method is used to classify each voxel into one of several classes, and a new classification framework is created by merging local independent projection into the classical classification scheme. The neighbourhood is crucial for computing the local independent projections in LIPC, and locality is also considered when deciding whether local anchor embedding is more applicable than other coding methods for solving the linear projection weights. Furthermore, LIPC learns a SoftMax regression model that takes the data distribution of the distinct classes into account, which can boost classification performance even further.

Srisabarimani et al. discussed different segmentation and classification methods for MRI brain tumor images at various stages using different machine learning concepts [22]. Wasule and Sonar classified brain tumors using 251 images from a clinical database [23]; the accuracy obtained was 96% for SVM and 86% for KNN. Vankayalapati and Muddana discussed the classification of tumor and non-tumor cells [24] using a Double-Weighted Feature Extraction Labelling Model with Priority Weighted Feature Selection (DWLM-PWFS).

3. Proposed Work

Existing machine learning approaches using SVM and KNN without fusion motivated the proposed novel fusion method for detecting brain tumors with higher accuracy. The proposed work uses machine learning algorithms, namely SVM, KNN, and XGBoost, to detect the presence of a brain tumor. The proposed work shown in Figure 1 demonstrates the process of detecting the presence of a brain tumor.

Figure 1. Proposed fusion block diagram to detect the presence of brain tumor

The input MRI image is sliced to build up a larger number of smaller-level images. The sliced images undergo filtering, producing median, anisotropic, Gaussian, and wavelet-smoothed outputs that increase the quality of the image. The improved images are fed as input to the segmentation process. The proposed work uses the watershed algorithm, compared against the ROI, for segmentation. Once feature extraction is complete, the features are used to train the machine learning algorithms to obtain the tumor classification accuracy. SVM, KNN, and XGBoost are trained to attain this accuracy. Among these, XGBoost also provides the log loss of the image, quantifying the misclassification, so that the best classification accuracy can be obtained; a sketch of this classification stage is given below.
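
To illustrate only the classification stage described above, the following minimal sketch trains the three classifiers on a placeholder feature matrix (synthetic data standing in for the extracted statistical features); it is an assumed illustration, not the authors' implementation or data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, f1_score
from xgboost import XGBClassifier

# Placeholder data: one row per image, one column per extracted feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = rng.integers(0, 2, size=500)                 # 1 = tumor, 0 = no tumor

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "XGBoost": XGBClassifier(n_estimators=200, eval_metric="logloss"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, F1={f1_score(y_te, pred):.3f}")
```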

Original MRI scans from numerous patients, comprising T1, T1 FLAIR, T2, T2 FLAIR, and DWI (Diffusion Weighted Imaging) sequences, are taken for the medical imaging process shown in Figure 2 for the proposed DWT-based fusion method. Initially, the annotations are removed to maintain privacy.

Figure 2. Proposed fusion method

Once the model is trained, it classifies whether a tumor is present or not.

Step 1: The images to be fused are first registered so that corresponding pixels are aligned.

Step 2: Each registered image is independently decomposed by the wavelet transform. Each transformed image consists of one low-frequency sub-band and three high-frequency sub-bands.

Step 3: The coefficients of the two images are combined sub-band by sub-band, with the low- and high-frequency sub-bands merged according to the fusion rules.

Step 4: The fused image is formed by applying the inverse wavelet transform to the combined coefficients obtained in Step 3.
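
A minimal sketch of such a rule-based fusion with PyWavelets follows; the single decomposition level, the averaging rule for the low-frequency sub-band, and the maximum-absolute rule for the high-frequency sub-bands are assumptions made for illustration, since the paper does not spell out its exact fusion rules.

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db1"):
    """Fuse two co-registered slices of equal shape via a single-level 2D DWT."""
    # Step 2: decompose each image into one approximation (low-frequency)
    # sub-band and three detail (high-frequency) sub-bands.
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a.astype(float), wavelet)
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b.astype(float), wavelet)

    # Step 3: assumed fusion rules -- average the approximations and keep
    # the detail coefficient with the larger magnitude.
    keep_max = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    ca = (ca_a + ca_b) / 2.0
    details = (keep_max(ch_a, ch_b), keep_max(cv_a, cv_b), keep_max(cd_a, cd_b))

    # Step 4: the inverse DWT reconstructs the fused image.
    return pywt.idwt2((ca, details), wavelet)
```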

Figure 3. Sliced images

3.1 Pre-processing stage

3.1.1 Slicing process

The slicing procedure was used to expand the image into a larger number of slices, pixel by pixel. The gradient slicing principle was used to map the image's intensity from black to white levels, as illustrated in Figure 3, and the procedure was narrowed down to determine the existence of a tumor using 80 dynamic levels.
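
The paper does not give the exact gradient-slicing formula; one plausible reading, quantising the intensity range into 80 dynamic levels, is sketched below purely for illustration.

```python
import numpy as np

def gray_level_slice(img, levels=80):
    """Quantise intensities into a fixed number of dynamic levels (assumed interpretation)."""
    img = img.astype(float)
    norm = (img - img.min()) / (img.max() - img.min() + 1e-12)   # scale to [0, 1]
    return np.floor(norm * (levels - 1)).astype(np.uint8)        # values 0 .. levels-1
```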

3.1.2 Contrast enhancement

Flat histogram equalization has been used for contrast enhancement to stretch the pixel levels, as shown in Figure 4, improving the image clarity both with and without fusion.
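
A minimal sketch of flat (global) histogram equalization with scikit-image, assuming a single grayscale slice as input:

```python
from skimage import exposure

def enhance_contrast(img):
    """Flat histogram equalization: spreads pixel intensities over the full range."""
    img = img.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)    # normalise to [0, 1]
    return exposure.equalize_hist(img)                           # equalized image in [0, 1]
```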

Figure 4. Contrast enhanced images

3.1.3 Filtering

A 3x3 median filter is applied to the image to remove spiky (impulse) noise while maintaining sharp edges, as shown in Figure 5, for both fused and non-fused images.
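
A one-call sketch of the 3x3 median filtering step with SciPy:

```python
from scipy.ndimage import median_filter

def remove_impulse_noise(img):
    """3x3 median filter: suppresses spiky (salt-and-pepper) noise while keeping edges sharp."""
    return median_filter(img, size=3)
```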

Figure 5. Median filtering

The anisotropic filter has been used to smooth the image and enhance the texture quality along different directions. It removes noise without removing major components of the image such as lines and edges, for both fused and non-fused images, as shown in Figure 6.
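
SciPy and scikit-image do not ship a Perona-Malik filter directly, so the sketch below implements classical anisotropic diffusion by hand; the iteration count, conduction constant kappa, and step size gamma are assumed values, not the paper's settings.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.15):
    """Perona-Malik anisotropic diffusion with the exponential conduction function."""
    out = img.astype(float).copy()
    for _ in range(n_iter):
        # Nearest-neighbour intensity differences (borders handled by wrap-around).
        dn = np.roll(out, -1, axis=0) - out
        ds = np.roll(out, 1, axis=0) - out
        de = np.roll(out, -1, axis=1) - out
        dw = np.roll(out, 1, axis=1) - out
        # Conduction coefficients: close to 1 in flat regions (strong smoothing),
        # close to 0 across large gradients, so edges are preserved.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        out += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return out
```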

Figure 6. Anisotropic filter

The BayesShrink method, based on wavelet filtering, has been used for denoising and restoration. As shown in Figure 7, this approach transforms the image into wavelet coefficients and removes noise by thresholding the detail coefficients, for both fused and non-fused images.
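
A sketch of BayesShrink wavelet denoising with scikit-image's denoise_wavelet, which soft-thresholds the detail coefficients; the image is assumed to be grayscale and scaled to [0, 1].

```python
from skimage.restoration import denoise_wavelet

def wavelet_denoise(img):
    """BayesShrink: estimates a per-sub-band threshold and soft-thresholds the
    detail (high-frequency) wavelet coefficients to suppress noise."""
    return denoise_wavelet(img, method="BayesShrink", mode="soft",
                           wavelet="db1", rescale_sigma=True)
```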

Figure 7. Wavelet filtering

As illustrated in Figure 8, Gaussian filtering is used to increase the signal-to-noise ratio by lowering the image noise and improving the signal quality; it is not combined with any other filtering method.
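
A sketch of the Gaussian smoothing step together with a crude signal-to-noise proxy; sigma = 1 is an assumed value.

```python
from scipy.ndimage import gaussian_filter

def gaussian_smooth(img, sigma=1.0):
    """Low-pass Gaussian filter: attenuates high-frequency noise."""
    return gaussian_filter(img.astype(float), sigma=sigma)

def rough_snr(img):
    """Very rough SNR proxy: mean intensity divided by intensity standard deviation."""
    return img.mean() / (img.std() + 1e-12)
```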

Figure 8. Gaussian filtering

Figure 9. Watershed segmentation process

3.2 Segmentation stage

Figure 9 shows the watershed segmentation process and its stages, namely the background, the foreground image, the segmented region, and the final segmented output, for both fused and non-fused images. These steps narrow down the presence of the tumor via watershed segmentation and give clarity in the scan for the image fused from T1, T1 FLAIR, T2, and T2 FLAIR, so that the tumor can be delineated accurately.
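
A minimal marker-controlled watershed sketch with scikit-image, following the library's standard recipe (Otsu foreground mask, distance transform, local maxima as markers); the marker-selection details are assumptions for illustration rather than the exact procedure used here.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def watershed_segment(img):
    """Label bright regions of a grayscale slice with marker-controlled watershed."""
    mask = img > threshold_otsu(img)                       # foreground / background split
    distance = ndi.distance_transform_edt(mask)            # distance to the background
    coords = peak_local_max(distance, footprint=np.ones((15, 15)), labels=mask)
    peaks = np.zeros(distance.shape, dtype=bool)
    peaks[tuple(coords.T)] = True
    markers, _ = ndi.label(peaks)                          # one marker per local maximum
    return watershed(-distance, markers, mask=mask)        # flood the inverted distance map
```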

Figure 10. Distributions of first order features for both classes

Figure 11. Distributions of second order features for both classes

3.3 Feature extraction stage

First- and second-order statistical feature extraction has been done, the second-order features using the GLCM method. With the mean set to zero and the variance to one, an appropriately normalized image is obtained. The distributions of the first-order features (Mean, Variance, Standard Deviation, Skewness, Kurtosis) and the second-order features (Entropy, Contrast, Energy, Angular Second Moment (ASM), Homogeneity, Dissimilarity, Correlation) for both the Tumor and No Tumor cases are shown in Figure 10 and Figure 11. The shaded portions of the graphs represent the two classes.
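
A sketch of the first-order statistics and GLCM-based second-order features with scikit-image; the quantisation level, pixel distance, and angle are assumed parameters.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops

def first_order_features(img):
    """Mean, variance, standard deviation, skewness and kurtosis of the intensities."""
    x = img.astype(float).ravel()
    return {"mean": x.mean(), "variance": x.var(), "std": x.std(),
            "skewness": skew(x), "kurtosis": kurtosis(x)}

def second_order_features(img, levels=64):
    """GLCM features for one assumed offset (distance 1, angle 0)."""
    img = img.astype(float)
    q = np.floor((img - img.min()) / (img.max() - img.min() + 1e-12)
                 * (levels - 1)).astype(np.uint8)          # quantise to `levels` grey levels
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    feats = {prop: graycoprops(glcm, prop)[0, 0]
             for prop in ("contrast", "dissimilarity", "homogeneity",
                          "energy", "correlation", "ASM")}
    # Entropy is not provided by graycoprops, so compute it from the normalised GLCM.
    p = glcm[:, :, 0, 0]
    feats["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return feats
```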

It is clear from the figures above that the distributions of most of the second-order features for the 'Tumor' class differ from those of the 'No Tumor' class, which suggests that these features are very useful in determining the class of a given image, whereas the distributions of the first-order features differ less between the two classes.

3.4 Heat map

To support this observation, the correlation heat map of all the important features in the dataset is plotted. The heat map conveys the relationships between the extracted feature parameters, with lighter shades representing high values and darker shades representing low values.
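
A sketch of the correlation heat map, assuming the extracted features have been collected in a pandas DataFrame named features with one row per image and one column per feature:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

def plot_feature_heatmap(features: pd.DataFrame):
    """Plot pairwise feature correlations; lighter cells correspond to higher values."""
    corr = features.corr()
    plt.figure(figsize=(10, 8))
    sns.heatmap(corr, cmap="viridis", square=True)
    plt.title("Feature correlation heat map")
    plt.tight_layout()
    plt.show()
```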

The following inputs were used to train, test, and validate the model and to compute the accuracy and F1 score: the training set shape was (2708, 14400), the test set shape (753, 14400), and the validation set shape (301, 14400). Parameters such as training accuracy, testing accuracy, and F1 score are shown in Table 1. The cross-entropy loss is used as the loss function, and the heat map representation is depicted in Figure 12.
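
The hyper-parameters in Table 1 (learning rate, number of epochs and, in Table 2, a regularization coefficient) map naturally onto scikit-learn's MLPClassifier, which minimises the cross-entropy loss for classification; the sketch below is an assumed reconstruction for illustration, not the authors' exact network.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score

def train_mlp(X_train, y_train, X_test, y_test,
              lr=8.019e-4, reg=1.538, epochs=8064):
    """Train a small MLP and report test accuracy and F1 score."""
    clf = MLPClassifier(hidden_layer_sizes=(128,),
                        learning_rate_init=lr,   # learning rate, as in Table 1
                        alpha=reg,               # L2 regularization coefficient
                        max_iter=epochs,         # upper bound on training epochs
                        random_state=0)
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    return accuracy_score(y_test, pred), f1_score(y_test, pred)
```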

Figure 12. Heat map

The best training model, its hyper-parameters, and the best F1 score are shown in Table 2; the graphical representation and confusion matrix are shown in Figure 13. The confusion matrix highlights that the best prediction accuracy reaches 88%. The best model, with an accuracy of 92%, is shown in Figure 14; the evolution of the training and validation losses is tracked as loss versus epochs.

The graphs show the XGBoost log loss, RMSE, and classification error, computed on the training and validation data. The XGBoost log-loss model shown in Figure 15 gives the best accuracy of 98%. The confusion matrix for XGBoost is shown in Figure 16; the model predicts the outcomes with 98% accuracy for true positives (1,1) and 100% for true negatives (0,0). Based on the observations made with the various machine learning algorithms (SVM, KNN, and XGBoost), the performance metrics shown in Table 3 were calculated to compare the different models with the proposed model. A hedged sketch of the XGBoost training stage is given below.
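
The sketch below tracks log loss, classification error, and RMSE on the training and validation splits and reports a confusion matrix; the hyper-parameters are illustrative defaults, not the tuned values (xgboost >= 1.6 accepts eval_metric in the constructor).

```python
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

def train_xgboost(X_train, y_train, X_val, y_val):
    """Fit XGBoost and return validation accuracy, confusion matrix and per-round metrics."""
    clf = XGBClassifier(n_estimators=300, learning_rate=0.1, max_depth=4,
                        eval_metric=["logloss", "error", "rmse"])
    # eval_set makes XGBoost record each metric per boosting round for both splits,
    # which is what the log-loss / classification-error curves are drawn from.
    clf.fit(X_train, y_train,
            eval_set=[(X_train, y_train), (X_val, y_val)], verbose=False)
    pred = clf.predict(X_val)
    return accuracy_score(y_val, pred), confusion_matrix(y_val, pred), clf.evals_result()
```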

Table 1. Parameters of MLP model

Training Model | Learning Rate | Training Duration (min) | Number of Epochs | Train Accuracy | Test Accuracy | F1 Score
1 | 1.687e-4 | 28.70 | 10000 | 0.8242 | 0.8048 | 0.7803
2 | 1.687e-4 | 28.44 | 10000 | 0.8032 | 0.7888 | 0.7679
3 | 1.687e-4 | 28.34 | 10000 | 0.7958 | 0.7928 | 0.7821
4 | 8.019e-4 | 15.57 | 5510 | 0.9103 | 0.8513 | 0.8293
5 | 8.019e-4 | 18.21 | 6379 | 0.9276 | 0.8552 | 0.8289
6 | 8.019e-4 | 23.31 | 8064 | 0.9830 | 0.9004 | 0.8869
7 | 4.946e-4 | 21.37 | 7147 | 0.9361 | 0.8552 | 0.8371
8 | 4.946e-4 | 4.55 | 1447 | 0.7958 | 0.7928 | 0.7926
9 | 4.946e-4 | 5.06 | 1657 | 0.8763 | 0.8327 | 0.8119

Figure 13. Graphical representation of training and validation

Table 2. The best training model

Best F1 score | 0.8869
Best hyper-parameters | learning rate: 8.019e-4; regularization coefficient: 1.5385
Grid search training duration | 173.56 min

Table 3. Comparison of XGBoost, SVM and KNN

Classes | Accuracy (%) | Precision (%) | NPV (%) | Sensitivity (%) | Specificity (%) | F1-score (%) | MCC (%)
XGBoost | 98 | 98.14 | 99.69 | 99.70 | 98.10 | 98.96 | 97.64
SVM | 95 | 93.46 | 99.79 | 99.80 | 93.20 | 96.53 | 93.14
KNN | 97 | 98.97 | 94.17 | 94.12 | 98.98 | 96.48 | 93.12

Figure 14. Best model

Figure 15. XGBoost log loss

Figure 16. XGBoost confusion matrix

4. Conclusion

The proposed work uses SVM, KNN, and XGBoost for MRI brain tumor classification on both fused and non-fused MRI images. The system was run on 16 GB RAM, a 64-bit operating system with a 2.10 GHz CPU, and a GPU with 16 GB capacity (GeForce RTX 2080 Super Max-Q design). A clinical dataset was used to train and test the models; the data were collected from 53 patients with different weightings, namely T1 FLAIR, T2 FLAIR, and DWI (Diffusion Weighted Imaging). Slicing, contrast enhancement, and several filtering methods (median, anisotropic, wavelet, and Gaussian filtering) were used to prepare the MRI images, after which watershed segmentation and GLCM feature extraction were carried out to classify them. After repeated training and testing, accuracies of 98% for XGBoost, 95% for SVM, and 97% for KNN were obtained with fusion. The innovation of the proposed work lies in the fusion method applied to real-time MRI images. In future, deep learning algorithms may be used to identify different forms of brain tumor, such as benign or malignant.

Declarations

The proposed work uses a real-time database collected from 52 patients, processed to detect brain tumors using machine learning techniques.

  References

[1] Gumaei, A., Hassan, M.M., Hassan, M.R., Alelaiwi, A., Fortino, G. (2019). A hybrid feature extraction method with regularized extreme learning machine for brain tumor classification. IEEE Access, 7: 36266-36273. https://doi.org/10.1109/ACCESS.2019.2904145

[2] Sun, P., Wang, D., Mok, V.C., Shi, L. (2019). Comparison of feature selection methods and machine learning classifiers for radiomics analysis in glioma grading. IEEE Access, 7: 102010-102020. https://doi.org/10.1109/ACCESS.2019.2928975

[3] Khan, S., Sajjad, M., Hussain, T., Ullah, A., Imran, A.S. (2020). A review on traditional machine learning and deep learning models for WBCs classification in blood smear images. IEEE Access, 9: 10657-10673. https://doi.org/10.1109/ACCESS.2020.3048172

[4] Florimbi, G., Fabelo, H., Torti, E., Ortega, S., Marrero-Martin, M., Callico, G.M., Danese, G., Leporati, F. (2020). Towards real-time computing of intraoperative hyperspectral imaging for brain cancer detection using multi-GPU platforms. IEEE Access, 8: 8485-8501. https://doi.org/10.1109/ACCESS.2020.2963939

[5] Bahadure, N.B., Ray, A.K., Thethi, H.P. (2017). Image analysis for MRI based brain tumor detection and feature extraction using biologically inspired BWT and SVM. International Journal of Biomedical Imaging, 2017: 1-12. https://doi.org/10.1155/2017/9749108

[6] Kabir, M.A. (2020). Automatic brain tumor detection and feature extraction from MRI image. GSJ, 8(4):695-711. 

[7] Saman, S., Jamjala Narayanan, S. (2019). Survey on brain tumor segmentation and feature extraction of MR images. International Journal of Multimedia Information Retrieval, 8(2): 79-99. https://doi.org/10.1007/s13735-018-0162-2

[8] Ma, C., Luo, G., Wang, K. (2018). Concatenated and connected random forests with multiscale patch driven active contour model for automated brain tumor segmentation of MR images. IEEE Transactions on Medical Imaging, 37(8): 1943-1954. https://doi.org/10.1109/TMI.2018.2805821

[9] Ghaffari, M., Sowmya, A., Oliver, R. (2019). Automated brain tumor segmentation using multimodal brain scans: A survey based on models submitted to the BraTS 2012-2018 challenges. IEEE Reviews in Biomedical Engineering, 13: 156-168. https://doi.org/10.1109/RBME.2019.2946868

[10] Tang, Z., Ahmad, S., Yap, P.T., Shen, D. (2018). Multi-atlas segmentation of MR tumor brain images using low-rank based image recovery. IEEE Transactions on Medical Imaging, 37(10): 2224-2235. https://doi.org/10.1109/TMI.2018.2824243

[11] Zhou, T., Canu, S., Vera, P., Ruan, S. (2021). Latent correlation representation learning for brain tumor segmentation with missing MRI modalities. IEEE Transactions on Image Processing, 30: 4263-4274. https://doi.org/10.1109/TIP.2021.3070752

[12] Wu, G., Chen, Y., Wang, Y., Yu, J., Lv, X., Ju, X., Shi, Z., Chen, L., Chen, Z. (2017). Sparse representation-based radiomics for the diagnosis of brain tumors. IEEE Transactions on Medical Imaging, 37(4): 893-905. https://doi.org/10.1109/TMI.2017.2776967

[13] Venkatachalam, K., Siuly, S., Bacanin, N., Hubálovský, S., Trojovský, P. (2021). An efficient Gabor Walsh-Hadamard transform based approach for retrieving brain tumor images from MRI. IEEE Access, 9: 119078-119089. https://doi.org/10.1109/ACCESS.2021.3107371

[14] Arthi, R., Ahuja, J., Kumar, S., Thakur, P., Sharma, T. (2021). Small object detection from video and classification using deep learning. In: Bhoi, A.K., Mallick, P.K., Balas, V.E., Mishra, B.S.P. (eds) Advances in Systems, Control and Automations. ETAEERE 2020. Lecture Notes in Electrical Engineering, vol 708. Springer, Singapore. https://doi.org/10.1007/978-981-15-8685-9_10

[15] Alhassan, A.M., Zainon, W.M.N.W. (2020). BAT algorithm with fuzzy C-ordered means (BAFCOM) clustering segmentation and enhanced capsule networks (ECN) for brain cancer MRI images classification. IEEE Access, 8: 201741-201751. https://doi.org/10.1109/ACCESS.2020.3035803

[16] Arthi, R., Kishan, A.R., Abraham, A., Sattenapalli, A. (2020). Centralized intelligent authentication system using deep learning with deep dream image algorithm. In: Priyadarshi, N., Padmanaban, S., Ghadai, R.K., Panda, A.R., Patel, R. (eds) Advances in Power Systems and Energy Management. ETAEERE 2020. Lecture Notes in Electrical Engineering, vol 690. Springer, Singapore. https://doi.org/10.1007/978-981-15-7504-4_18

[17] Chaddad, A., Desrosiers, C., Abdulkarim, B., Niazi, T. (2019). Predicting the gene status and survival outcome of lower grade glioma patients with multimodal MRI features. IEEE Access, 7: 75976-75984. https://doi.org/10.1109/ACCESS.2019.2920396

[18] Chaddad, A., Daniel, P., Desrosiers, C., Toews, M., Abdulkarim, B. (2018). Novel radiomic features based on joint intensity matrices for predicting glioblastoma patient survival time. IEEE Journal of Biomedical and Health Informatics, 23(2): 795-804. https://doi.org/10.1109/JBHI.2018.2825027

[19] Sri Sabarimani, K., Arthi, R. (2021). A brief review on brain tumour detection and classifications. In: Bhoi, A., Mallick, P., Liu, CM., Balas, V. (eds) Bio-inspired Neurocomputing. Studies in Computational Intelligence, vol 903. Springer, Singapore. https://doi.org/10.1007/978-981-15-5495-7_4

[20] Lather, M., Singh, P. (2020). Investigating brain tumor segmentation and detection techniques. Procedia Computer Science, 167: 121-130. https://doi.org/10.1016/j.procs.2020.03.189

[21] Huang, M., Yang, W., Wu, Y., Jiang, J., Chen, W., Feng, Q. (2014). Brain tumor segmentation based on local independent projection- based classification. IEEE Transactions on Biomedical Engineering, 61(10): 2633-2645. https://doi.org/10.1109/TBME.2014.2325410

[22] Srisabarimani, K., Arthi, R., Parihar, N., Priya, K., Nair, S. (2020). Detection and classification of different stages of benign and malignant tumor of MRI brain using machine learning. In: Priyadarshi, N., Padmanaban, S., Ghadai, R.K., Panda, A.R., Patel, R. (eds) Advances in Power Systems and Energy Management. ETAEERE 2020. Lecture Notes in Electrical Engineering, vol 690. Springer, Singapore. https://doi.org/10.1007/978-981-15-7504-4_31

[23] Wasule, V., Sonar, P. (2017). Classification of brain MRI using SVM and KNN classifier. Third International Conference on Sensing, Signal Processing and Security, pp. 218-223. https://doi.org/10.1109/SSPS.2017.8071594

[24] Vankayalapati, R., Muddana, A.L. (2021). Accurate brain tumor recognition using double-weighted feature extraction labelling model with priority weighted feature selection. Traitement du Signal, 38(5): 1377-1383. https://doi.org/10.18280/ts.380513