BioSwarmNet: A Revolutionary Approach to Brain Tumour Detection Using Fractional Order Differential Particle Swarm Optimisation and Recurrent Neural Networks

Indu Gorrepati, Pavan Kumar Pagadala*

Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Hyderabad 500075, Telangana, India

Computer Science and Engineering, Institute of Aeronautical Engineering, Dundigal, Hyderabad 500043, Telangana, India

Corresponding Author Email: ppagadala125@gmail.com

Page: 1263-1273 | DOI: https://doi.org/10.18280/ria.380420

Received: 12 January 2024 | Revised: 1 March 2024 | Accepted: 14 May 2024 | Available online: 23 August 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Brain tumours are a major public health concern, and early, accurate detection is critical to treatment; yet current technologies often struggle to reach the necessary accuracy because of inherent limitations in image processing and classification methodologies. While approaches such as convolutional neural networks and optimization techniques have shown promise, they often fall short in capturing intricate patterns and textures or in achieving sufficient sensitivity, underscoring the need for more advanced and integrated solutions such as the proposed BioSwarmNet model. The system includes a meticulously designed image processing pipeline that ensures data consistency and quality. BioSwarmNet, a novel combination of Fractional Order Differential Particle Swarm Optimisation (FODPSO) and Recurrent Neural Networks (RNNs), uses swarm intelligence and deep learning to improve medical image classification. Evaluated on the well-known BRATS dataset, this study provides a promising avenue for improving diagnostic accuracy and efficiency in brain tumour detection, with the potential to benefit both healthcare professionals and patients. Notably, the proposed system achieved 99.12% accuracy, 98.62% sensitivity, and 99.86% specificity.

Keywords: 

brain tumors, medical image analysis, BioSwarmNet model, accuracy, sensitivity, specificity, image processing

1. Introduction

As of 2023, cancer statistics reveal a significant toll on public health, with an estimated 11,020 deaths attributed to brain and other nervous system cancers in males and 7,970 in females. Among children and adolescents, brain tumours, including benign and borderline malignant types, account for 26% and 21% of cancer diagnoses respectively, followed by lymphoma at 12% and 19%. Excluding benign and borderline malignant brain tumours, which have a 5-year relative survival rate of over 97% in children and adolescents, mortality rates remain concerning. Brain tumours are a significant and urgent healthcare concern that necessitates accurate and timely diagnosis [1-3]. These abnormal cell growths within the brain can manifest in a variety of ways, posing a serious risk to an individual's health [4]. The importance of detecting and treating brain tumours stems from their proclivity to evolve rapidly, frequently resulting in severe neurological complications and even life-threatening conditions [5-7]. The prevalence of brain tumours is not only a medical issue but also a societal one, as the consequences can be devastating for patients and their families [8].

Brain tumours can develop as a result of a variety of factors, including genetic predisposition and environmental influences [9]. While the exact causes are unknown, certain risk factors have been identified, including radiation exposure, a family history of brain tumours, and certain genetic syndromes [10]. Despite advances in medical science, the exact aetiology of brain tumours is frequently unknown, making early detection and diagnosis critical [11].

The ability of brain tumours to grow and infiltrate brain tissues, resulting in a variety of neurological symptoms, characterizes their evolution [12]. Headaches, seizures, changes in cognitive function, and motor deficits are all possible symptoms. Brain tumours can progress quickly, exacerbating these symptoms and necessitating immediate medical attention. In some cases, a tumour can grow to such a size or location that it causes a sudden neurological crisis, necessitating emergency treatment [13, 14].

The analysis identifies several shortcomings in current brain tumor detection methods, including low sensitivity, interpretability issues, and a reliance on meticulous parameter tuning. Existing approaches, such as optimization techniques and deep learning models, frequently struggle to capture intricate patterns and textures in brain tumor images, resulting in suboptimal performance and limiting high-level abstraction. Furthermore, while some models achieve commendable accuracy, they may be insufficiently robust and fail to meet current standards. BioSwarmNet addresses these challenges by integrating bio-inspired optimization techniques with Recurrent Neural Networks (RNNs), leveraging their collective capabilities to enhance accuracy, sensitivity, and interpretability in brain tumor detection.

Our research addresses the critical need for accurate and timely detection and diagnosis of brain tumours. Given the urgency of brain tumour cases, our contribution is focused on the development of an advanced medical image analysis system based on the novel "BioSwarmNet" model. This system is intended to improve the accuracy and efficiency of brain tumour identification significantly. Our research aims to provide a reliable tool for healthcare professionals by leveraging the renowned BRATS dataset and harnessing swarm intelligence and deep learning. This tool can help with the early detection and diagnosis of brain tumours, which can improve patient outcomes and reduce the burden on both patients and the healthcare system. In the following sections, we will go over our methodology in detail and present compelling evidence of our approach's efficacy in addressing this pressing healthcare challenge.

The paper is divided into five major sections: The introduction establishes the context by emphasizing the critical need for early and precise brain tumour detection, introducing our advanced medical image analysis system based on the "BioSwarmNet" model, and outlining the paper's structure. We provide context in the second section with a comprehensive literature review that summarizes recent research in brain tumour detection and its limitations. The third section delves into the architecture and operation of our proposed system, emphasizing the importance of each step and introducing the unique "BioSwarmNet" model, which combines Fractional Order Differential Particle Swarm Optimisation (FODPSO) and Recurrent Neural Networks (RNNs). The fourth section presents our research's findings and analysis, highlighting the impressive results obtained when applying our approach to the BRATS dataset, including superior accuracy, sensitivity, and specificity when compared to previous works. Finally, the paper summarizes our contributions, highlighting the transformative potential of "BioSwarmNet" in brain tumour detection, and providing comments on broader implications in medical image analysis and healthcare. The paper is supplemented by a references section that includes citations for further research on the topic.

2. Literature Review

Researchers have made significant advances in the quest for more accurate and efficient brain tumour detection and segmentation in the ever-changing landscape of medical image analysis. This review of the literature delves into several pivotal contributions in this field, shedding light on the advantages and disadvantages of various approaches. These studies collectively shape the ongoing pursuit of improved diagnostic tools, ranging from optimization techniques to deep learning models and GAN-based innovations.

Biratu et al. [15] investigated optimizing brain tumour detection, recognizing the potential of optimization techniques to improve performance. Their study, however, did not investigate the capabilities of machine learning models in capturing intricate patterns and textures within brain tumour images.

In a similar effort, Malathi and Sinthia [16] proposed "Brain Tumour Segmentation Using Convolutional Neural Network with Tensor Flow", though it revealed a significant limitation with an 82% sensitivity. This method was also criticised for relying too heavily on low-level decisions, which hampered high-level abstraction. Ibtehaz and Rahman [17] presented "MultiResUNet: Rethinking the U-Net Architecture for Multimodal Biomedical Image Segmentation" with an accuracy of 91.65%, which is considered low in modern contexts.

The paper by Deng et al. [18] represents a significant milestone in medical image analysis, particularly in brain tumour segmentation. Their HCNN and CRF-RRNN models, which combine deep learning with advanced post-processing, demonstrate how the technology is constantly evolving, providing healthcare professionals with precision tools for diagnosing and treating brain tumours. While this work is important, its potential limitations must also be addressed. These can be mitigated by acquiring larger and more diverse datasets, refining annotation processes, improving interpretability, conducting rigorous clinical validation, and maintaining a consistent focus on ethical and regulatory considerations, all of which are critical for the widespread adoption and efficacy of such models in clinical practice.

Amin et al. [20] proposed "Deep Convolutional Neural Networks for Brain Tumour Detection" with a sensitivity of 95%, but their discussion of validation accuracy leaves room for improvement. In their subsequent work, "Brain Tumour Detection Using Statistical and Machine Learning Methods" [19], they achieved commendable accuracy, specificity (90%), and sensitivity (91%), though these metrics fall short when compared to modern algorithms.

Nema et al. [21] make significant contributions to brain tumour segmentation by introducing the RescueNet model, an unpaired GAN-based approach. While this novel approach has potential, it is critical to recognize its limitations [22]. These constraints include data scarcity, clinical validation, interpretability, and model robustness, all of which are critical in fostering broader adoption and impact in the critical domain of medical image analysis.

Sharif et al. [22, 23] presented two research projects: "An Integrated Design of Particle Swarm Optimisation (PSO) with Fusion of Features for Detection of Brain Tumour" and "Active Deep Neural Network Features Selection for Segmentation and Recognition of Brain Tumours Using MRI Images". These efforts produced commendable outcomes and improved metrics. The efficacy of PSO, however, depends on meticulous parameter tuning, and the deep learning article reported a relatively lower average accuracy of 92%.

Finally, the reviewed studies cover a wide range of methodologies, each of which provides valuable insights into brain tumour detection and segmentation. While some emphasize optimisation procedures and others investigate the potential of deep learning, they all underline the importance of ongoing refinement in this critical domain of medical imaging. Addressing limitations, expanding datasets, improving interpretability, and ensuring clinical validation will be critical in moving these innovative approaches towards wider adoption and greater impact in clinical practice. The unwavering pursuit of accuracy and efficiency in brain tumour diagnosis and treatment remains a driving force behind these research efforts.

3. Proposed System

The proposed system diagram is depicted in Figure 1 based on the methodology described in the text. The workflow starts with user interaction, which allows the user to select an image from the BRATS dataset [24]. A series of pre-processing steps are then applied to the selected image. First, it is resized to a standard 256×256 pixel dimension to ensure consistency. The luminance method is used to convert colour images to grayscale [25]. A Wiener filter in the frequency domain is used to reduce noise; the power spectral density (PSD) of the image and the noise PSD are used to compute this filter, which yields a filtered image [26]. Following that, an Adaptive Histogram Equalization (AHE) step improves image quality, which is followed by the Discrete Wavelet Transform (DWT) with the Haar wavelet [27]. To reconstruct the image, the DWT coefficients are combined. Finally, features from the Grey Level Co-occurrence Matrix (GLCM) are extracted, and the image is classified using a pretrained neural network, leading to tailored actions based on the classification outcome, such as tumour localization and performance evaluation metrics.

In summary, Figure 1 depicts a comprehensive image processing pipeline for medical image analysis. The process starts with data selection and pre-processing, which includes noise reduction and contrast enhancement. The image is then transformed with the DWT and Haar wavelet, and key features are extracted for classification. This proposed system aims to improve diagnostic accuracy while also providing valuable information to medical professionals, making it an important tool in healthcare.

Figure 1. Block diagram of proposed system

The "BioSwarmNet" model addresses medical image classification challenges. This novel method integrates Fractional Order Differential Particle Swarm Optimisation (FODPSO) and Recurrent Neural Networks (RNNs). Unlike traditional methods that rely on manual hyperparameter tuning, FODPSO optimizes the architecture and parameters of RNNs using swarm intelligence and Darwinian principles. The incorporation of swarm intelligence and deep learning in BioSwarmNet provides a one-of-a-kind and integrated solution for medical image classification, which has the potential to improve diagnostic accuracy while reducing computational resources. The healthcare-focused application of this model suggests that it has the potential to aid medical professionals in disease detection and monitoring, making it a promising and innovative contribution to the field of medical image analysis.

The workflow shown in Figure 1 begins with a user-friendly dialogue that allows the user to select an image file from the BRATS dataset [24] in formats such as jpg, bmp, gif, or png. Let f(x, y) represent the selected image file, where x and y are pixel coordinates. Once chosen, the image is pre-processed, resizing to a standard 256×256 pixel dimension.

If f(x, y) is of size M×N, it is resized to a standard 256×256 pixel dimension as follows:

$f^{\prime}(x, y)=Resize(f(x, y),\ 256,256)$                  (1)

where, Resize (f(x, y), 256, 256) denotes the resizing operation.

Resizing images to a uniform size is a standard practice in image processing, especially in machine learning applications. This ensures that all images fed into the model have consistent dimensions, which is essential for many algorithms to function correctly. Uniformity in image size facilitates batch processing during neural network training and reduces computational load, which speeds up processing and improves efficiency.

In the case of colour images with three channels (R, G, B), grayscale conversion is meticulously performed using the luminance method:

$f^{\prime \prime}(x, y)=0.2989 \cdot R+0.5870 \cdot G+0.1140 \cdot B$                  (2)

where, f′′(x, y) represents the grayscale image, and R, G, and B are the color channels at pixel (x, y).

Grayscale conversion simplifies the data by reducing it from three color channels to a single channel. This reduction lowers computational complexity, and shifts focus to intensity variations, which are often sufficient for feature detection in medical images.
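As a concrete illustration of Eqs. (1) and (2), here is a minimal sketch using OpenCV and NumPy; the paper does not name its tooling, so the library choice and the helper name `preprocess` are assumptions:

```python
import numpy as np
import cv2  # OpenCV; PIL or scikit-image would work equally well


def preprocess(path):
    """Resize to 256x256 (Eq. 1) and convert to grayscale with the
    luminance weights of Eq. (2)."""
    img = cv2.imread(path)                       # BGR uint8 array
    img = cv2.resize(img, (256, 256))            # f'(x, y), Eq. (1)
    b, g, r = cv2.split(img)
    gray = 0.2989 * r + 0.5870 * g + 0.1140 * b  # f''(x, y), Eq. (2)
    return gray.astype(np.float64)
```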

Following that, the image undergoes noise reduction using a Wiener filter. The Wiener filter operates in the frequency domain, and it can be represented mathematically as follows:

Let F(u, v) represent the two-dimensional Fourier transform of the pre-processed image f′′(x, y), where u and v are the frequency domain coordinates.

The power spectral density (PSD) of the noise in the image can be estimated as N(u, v).

The Wiener filter H(u, v) is computed as:

$H(u, v)=\frac{|F(u, v)|^2}{|F(u, v)|^2+|N(u, v)|^2}$                  (3)

where, H(u, v) is the Wiener filter in the frequency domain, $|F(u, v)|^2$ is the squared magnitude of the Fourier transform of the pre-processed image, and $|N(u, v)|^2$ is the squared magnitude of the noise PSD.

Noise reduction is crucial in medical imaging where high-quality images are necessary for accurate diagnosis. The Wiener filter, a statistical approach, adjusts its effect based on the local image variance—performing minimal smoothing where variance is high, and more where it is low. This capability allows it to preserve essential edge details while reducing noise, a critical factor in medical image analysis for accurately delineating feature boundaries such as tumors.

The filtered image f′′′(x, y) is obtained by taking the inverse Fourier transform of the product of H(u, v) and F(u, v):

$f^{\prime \prime \prime}(x, y)=F^{-1}\{H(u, v) \cdot F(u, v)\}$                  (4)

where, $F^{-1}$ represents the inverse Fourier transform operation. This equation describes the application of a Wiener filter to reduce noise in the pre-processed image.
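A minimal NumPy sketch of Eqs. (3) and (4), assuming white noise with a flat PSD; the paper does not say how $|N(u, v)|^2$ is estimated, so the scalar estimate below is purely illustrative:

```python
import numpy as np


def wiener_denoise(img, noise_power=None):
    """Frequency-domain Wiener filter per Eqs. (3)-(4). The noise PSD
    is assumed flat (white noise); estimating it from a background
    patch of the scan would be more principled."""
    F = np.fft.fft2(img)
    psd = np.abs(F) ** 2                  # |F(u, v)|^2
    if noise_power is None:
        noise_power = 0.01 * psd.mean()   # crude white-noise guess
    H = psd / (psd + noise_power)         # Eq. (3)
    return np.real(np.fft.ifft2(H * F))   # Eq. (4): f'''(x, y)
```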

Next, an Adaptive Histogram Equalization step improves the image's quality, optimizing it for the subsequent processes that culminate in the display of the filtered image [25].

AHE is applied to the pre-processed image f′′′(x, y) to improve its quality. It involves the following steps:

Compute the local histogram $h_i(k)$ for each pixel (x, y) in a small neighborhood window of size w×w:

$h_i(k)=\sum_{(p, q) \in W_i} \delta\left(f^{\prime \prime \prime}(p, q)-k\right)$                  (5)

where, $W_i$ represents the set of pixels in the neighborhood window centered at pixel (x, y), and $\delta(\cdot)$ is the Dirac delta function.

Compute the cumulative distribution function (CDF) $H_i(k)$ for each local histogram:

$H_i(k)=\sum_{j=0}^k h_i(j)$                  (6)

Calculate the transformation function $T_i(k)$ for each local window by scaling the normalized local CDF to the output gray-level range $[0, L-1]$:

$T_i(k)=(L-1) \frac{H_i(k)}{w^2}$                  (7)

Apply the transformation function $T_i(k)$ to the pixel (x, y) in the neighborhood window:

$f^{\prime \prime \prime \prime}(x, y)=T_i\left(f^{\prime \prime \prime}(x, y)\right)$                  (8)

Repeat these steps for all pixels in the image to obtain the final enhanced image f′′′′(x, y).

AHE enhances image contrast, improving the visibility of features by better utilizing the dynamic range of intensities. This technique is especially valuable in medical imaging, where subtle contrast differences between tissues can be crucial for accurate diagnoses. AHE enhances these differences, aiding in the detection of features such as tumors or other anomalies.
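The remapping of Eqs. (5)-(8) can be written directly, if slowly, as a sliding-window histogram equalization. The sketch below is a naive reference implementation under that reading; in practice a tiled variant such as CLAHE (e.g., skimage.exposure.equalize_adapthist) would be used for speed:

```python
import numpy as np


def adaptive_hist_eq(img, w=32, L=256):
    """Naive sliding-window AHE per Eqs. (5)-(8): each pixel is remapped
    by the CDF of its w x w neighbourhood. img is a uint8 grayscale array."""
    pad = w // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img)
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            win = padded[y:y + w, x:x + w]
            hist, _ = np.histogram(win, bins=L, range=(0, L))  # Eq. (5)
            cdf = hist.cumsum()                                # Eq. (6)
            out[y, x] = (L - 1) * cdf[img[y, x]] // win.size   # Eqs. (7)-(8)
    return out
```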

Following these preliminary steps, the pre-processed image is transformed using the Discrete Wavelet Transform (DWT) [25] and the Haar wavelet. The DWT output consists of four coefficient sets denoted as ll, lh, hl, and hh, which are combined and displayed as a complete image [26].

Because of its simplicity and effectiveness, the Haar wavelet is commonly used for the DWT. The DWT splits the pre-processed image f′′′′(x, y) into four coefficient sets: ll (low-low), lh (low-high), hl (high-low), and hh (high-high). The DWT is commonly implemented as a series of convolution and downsampling operations.

First, define the Haar wavelet functions ψ(x) and ϕ(x) as:

$\psi(x)=\left\{\begin{array}{cc}\frac{1}{\sqrt{2}} & \text { if } 0 \leq x<1 / 2 \\ -\frac{1}{\sqrt{2}} & \text { if } 1 / 2 \leq x<1 \\ 0 &  Otherwise \end{array}\right.$                  (9)

$\phi(x)=\left\{\begin{array}{cc}\frac{1}{\sqrt{2}} & \text { if } 0 \leq x<1 / 2 \\ \frac{1}{\sqrt{2}} & \text { if } 1 / 2 \leq x<1 \\ 0 & \text { otherwise }\end{array}\right.$                  (10)

Apply the DWT to f′′′′(x, y) using Haar wavelet to provide ll(x, y), lh(x, y), hl(x, y), and hh(x, y) which represent the low-low, low-high, high-low, and high-high coefficients, respectively [27, 28].

Combine these coefficients to reconstruct the image:

$f^{\prime \prime \prime \prime \prime}(x, y)=Combine(ll(x, y), lh(x, y), hl(x, y), hh(x, y))$                  (11)

The function Combine (⋅) represents the inverse DWT operation, which combines the coefficients to reconstruct the image f′′′′′(x, y).

The result f′′′′′(x, y) is a composite image formed by combining the four sets of DWT coefficients, ready for further processing or display. In terms of specifics, the DWT's ll (low-low) coefficients hold the key to feature extraction [29, 30].
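With PyWavelets, the one-level Haar decomposition and the inverse recombination of Eq. (11) are one-liners. Here `enhanced` stands for the AHE output f′′′′(x, y) (the variable names are illustrative); PyWavelets returns the sub-bands as (cA, (cH, cV, cD)), which correspond to the paper's ll, lh, hl, and hh:

```python
import pywt

# One-level 2-D Haar DWT of the enhanced image f''''(x, y)
ll, (lh, hl, hh) = pywt.dwt2(enhanced, 'haar')

# Inverse DWT recombines the four sub-bands into f'''''(x, y), per Eq. (11)
reconstructed = pywt.idwt2((ll, (lh, hl, hh)), 'haar')
```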

The Grey Level Co-occurrence Matrix (GLCM) is computed using the coefficients obtained from the Discrete Wavelet Transform (DWT) as follows:

Let ll(x, y) represent the low-low coefficients obtained from the DWT. Define the GLCM for a specific distance d and direction θ as $P_{d, \theta}(i, j)$, where i and j are gray levels.

Compute the GLCM for the given distance d and direction θ by counting the occurrences of pairs of gray values in ll(x, y) that meet the specified conditions:

$P_{d, \theta}(i, j)=\sum_x \sum_y \begin{cases}1, & \text { if } ll(x, y)=i \text { and } ll(x+\Delta x, y+\Delta y)=j \text { for the specified } \Delta x, \Delta y, \text { and } \theta \\ 0, & \text { otherwise }\end{cases}$                  (12)

Normalize the GLCM by dividing each element by the total number of valid pairs:

$P_{d, \theta}(i, j)=\frac{P_{d, \theta}(i, j)}{\sum_i \sum_j P_{d, \theta}(i, j)}$                  (13)

Once the GLCMs are computed, a set of statistical properties, often referred to as "Query Features," can be extracted. These features include:

$Energy\ (E): E=\sum_i \sum_j P_{d, \theta}(i, j)^2$                  (14)

$Contrast\ (C): C=\sum_i \sum_j(i-j)^2 P_{d, \theta}(i, j)$                  (15)

$Correlation\ (CO): CO=\frac{\sum_i \sum_j(i-\mu)(j-\nu) P_{d, \theta}(i, j)}{\sigma_i \sigma_j}$                  (16)

where, $\mu$ and $\nu$ are the means of i and j respectively, and $\sigma_i$ and $\sigma_j$ are their standard deviations.

$Homogeneity\ (H): H=\sum_i \sum_j \frac{P_{d, \theta}(i, j)}{1+|i-j|}$                  (17)

$Entropy\ (ENT): ENT=-\sum_i \sum_j P_{d, \theta}(i, j) \log _2\left(P_{d, \theta}(i, j)+\epsilon\right)$                  (18)

where, ϵ is a small positive constant to avoid the logarithm of zero.
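These statistics map almost one-to-one onto scikit-image's graycomatrix/graycoprops, with entropy (Eq. (18)) computed by hand; note that scikit-image's built-in 'energy' property is the square root of Eq. (14), so the ASM form is computed directly below. The `ll` array is the low-low sub-band from the previous step:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older releases

# Quantize the ll sub-band to integer gray levels before building the GLCM
ll8 = np.uint8(255 * (ll - ll.min()) / (ll.max() - ll.min() + 1e-12))

# P_{d,theta} for d=1 and theta=0, normalized as in Eq. (13)
glcm = graycomatrix(ll8, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)

energy      = (glcm ** 2).sum()                       # Eq. (14), ASM form
contrast    = graycoprops(glcm, 'contrast')[0, 0]     # Eq. (15)
correlation = graycoprops(glcm, 'correlation')[0, 0]  # Eq. (16)
homogeneity = graycoprops(glcm, 'homogeneity')[0, 0]  # Eq. (17)
p = glcm[:, :, 0, 0]
entropy = -(p * np.log2(p + 1e-12)).sum()             # Eq. (18)
```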

This group of features is then fed to the proposed BioSwarmNet model. A pretrained neural network takes centre stage in the classification process, categorising the image as normal, abnormal tumour, benign tumour, or malignant tumour.

Tailored actions take place based on the outcome of this classification (C). If C is 1 or 3, a user-friendly message dialogue appears, explaining whether the image represents a normal case for the relevant organ. If C is 2 or 4, a warning message appears on the screen, while tumour localization and area calculations are meticulously carried out behind the scenes. If a tumour is detected, the original image begins a segmentation process.

The boundaries of the segmented tumour are accurately delineated during this segmentation process, and the original image is adorned with these distinct contours, making the tumor's presence visually apparent. The tumour region is effectively isolated and displayed independently, and its size is precisely calculated.

Finally, the workflow delves into performance evaluation, calculating a number of key performance metrics such as True Positive (TP), True Negative (TN), False Positive (FP), False Negative (FN), Sensitivity, Specificity, and Accuracy. These metrics are extremely useful in determining the system's efficacy and accuracy.

3.1 Proposed model

To address the challenges of medical image classification, the BioSwarmNet model as provided in Figure 2 combines two distinct paradigms, Fractional Order Differential Particle Swarm Optimisation (FODPSO) and Recurrent Neural Networks (RNN).

The term "Bio" in BioSwarmNet refers to the FODPSO bio-inspired optimization technique. FODPSO is inspired by Darwinian principles and swarm intelligence, making it an efficient optimization method for complex problems. PSO and other swarm intelligence algorithms are inspired by the collective behavior of social organisms. FODPSO, a PSO variant, applies a swarm-based optimization strategy to medical image classification. The word "Net" in the title emphasizes the incorporation of neural networks, specifically RNNs. The capabilities of neural networks in feature learning and classification tasks are well known.

Figure 2. Proposed BioSwarmNet model architecture

By combining Fractional Order Differential Particle Swarm Optimisation (FODPSO) and Recurrent Neural Networks (RNNs), BioSwarmNet presents a novel and integrated approach to medical image classification. Its uniqueness stems from FODPSO, a bio-inspired optimisation strategy that optimises RNN architectures and parameters, as opposed to traditional methods that frequently rely on manual hyperparameter tuning. This combination of swarm intelligence and deep learning addresses an important need in healthcare, with the potential to improve diagnostic accuracy while optimising computational resources. Its healthcare-focused application highlights its potential to assist medical professionals in disease detection and monitoring, making it a promising and innovative contribution to the field.

3.2 Algorithm of the proposed BioSwarmNet model

BioSwarmNet provides a holistic approach by seamlessly combining optimisation and feature learning, potentially improving efficiency and accuracy in medical image analysis; its stepwise algorithm is provided below.

Algorithm: BioSwarmNet model

# Step 1: FODPSO Optimization
# Initialize FODPSO parameters (population size, iterations, objectives, constraints, etc.)
# Initialize RNN hyperparameter search space (learning rate, architecture, etc.)

For each FODPSO iteration:
    Initialize FODPSO population with random solutions
    Evaluate fitness of each solution using the optimization objective
    While stopping criterion is not met:
        For each particle in the population:
            Calculate fractional order velocity using Darwinian PSO equations
            Update particle position based on velocity
            Evaluate fitness of the new position
            If position improves fitness, update particle's best-known position
        Update global best position among all particles
    End While

# Step 2: RNN Hyperparameter Tuning
For each RNN hyperparameter (e.g., learning rate, architecture):
    Set RNN hyperparameter to a value selected using FODPSO global best position
    Train an RNN model with the selected hyperparameter
    Evaluate RNN model's performance on validation data
Select the RNN hyperparameter configuration with the best validation performance

# Step 3: RNN Training
Initialize an RNN model with the selected hyperparameters
Preprocess the training data (e.g., scaling, normalization)
Train the RNN model on the preprocessed training data
Monitor training progress (e.g., loss and accuracy) and save checkpoints

# Step 4: Model Evaluation
Pre-process the testing data using the same pre-processing steps
Evaluate the trained RNN model on the pre-processed testing data
Calculate performance metrics (e.g., accuracy, RMSE, etc.)

# Step 5: Post-processing and Analysis
Visualize and interpret the results
Save the trained RNN model for future use

# End of BioSwarmNet Algorithm
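The distinctive step above is the fractional-order velocity calculation. As an illustrative sketch, the function below implements one common FODPSO velocity update from the literature (a truncated Grünwald-Letnikov expansion of order α over the last four velocities); the coefficients, the value of α, and the surrounding swarm bookkeeping are assumptions rather than details taken from this paper:

```python
import numpy as np


def fractional_velocity(v_hist, x, pbest, gbest, alpha=0.6, c1=1.5, c2=1.5):
    """Hypothetical FODPSO velocity update: the usual inertia term is replaced
    by a truncated fractional derivative over the last four velocities
    v_t .. v_{t-3}, as commonly formulated in the FODPSO literature."""
    v1, v2, v3, v4 = v_hist
    r1, r2 = np.random.rand(2)
    frac = (alpha * v1
            + (1 / 2) * alpha * (1 - alpha) * v2
            + (1 / 6) * alpha * (1 - alpha) * (2 - alpha) * v3
            + (1 / 24) * alpha * (1 - alpha) * (2 - alpha) * (3 - alpha) * v4)
    return frac + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

In this setting, each particle position x would encode an RNN hyperparameter vector (learning rate, hidden size, and so on), with validation performance serving as the fitness.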

4. Results and Analysis

The input MRI image of a brain tumour in Figure 3 comes from the BRATS Medical Image database via Kaggle, a well-known and widely used repository for medical imaging research. It is the starting point for the entire workflow.

Following that, the image goes through critical preprocessing stages:

To ensure uniformity, the input image is resized to a consistent 256×256 pixel dimension. Grayscale conversion of colour images with three channels (R, G, B) is performed using the luminance method, resulting in the representation shown in Figure 4. Grayscale conversion streamlines subsequent processing while preserving critical image information.

Figure 3. Input MRI image

Figure 4. Resized and gray scale converted image

Noise reduction is the next critical step. In the frequency domain, the Wiener filter effectively reduces noise. As shown in Figure 5, this process significantly improves image quality, which is critical for accurate tumour detection.

Figure 5. Wiener filtered image

Figure 6. Adaptive histogram equalized image

Adaptive Histogram Equalisation (AHE) provides additional enhancement. The result of this process is depicted in Figure 6, with an emphasis on improved contrast and visual detail within the image. This enhancement prepares the image for further analysis.

The image is preprocessed and enhanced before being subjected to the Discrete Wavelet Transform (DWT) with the Haar wavelet. This transformation yields the segmented image shown in Figure 7. The DWT divides the image into four sets of coefficients (ll, lh, hl, and hh), each of which captures a different aspect of the image.

Figure 7. Segmented image via DWT using Haar

Figure 8 depicts the DWT coefficient recombination used to reconstruct the segmented image. This step improves specific aspects of the image, allowing for a more in-depth analysis. The image is classified using the BioSwarmNet model, which was trained using the extracted features. Figure 9 shows the message dialogue box that displays the classification result; in this case, it denotes the presence of an abnormal brain tumour.

Figure 8. Segmented output image

Figure 9. Dialog box image

Figure 10 depicts clusters or regions within the image that were identified during the segmentation process. These clusters correspond to areas of particular interest, such as tumour-related regions with distinct image properties. Tumour localization occurs after classification. The result of this step is depicted visually in Figure 11, which shows the precise location of the tumour within the brain image. This vital data aids in accurate diagnosis and treatment planning.

Finally, Figure 12 shows the tumour segmented region, which is clearly separated from the rest of the image. This visual representation highlights the tumor's boundaries and allows for precise size measurement. In summary, this workflow, along with the accompanying figures, outlines a comprehensive and systematic process for detecting and analysing brain tumours. Each stage, beginning with the input MRI image, contributes to image enhancement, feature extraction, and precise tumour detection and localization.

Figure 10. Segmented clusters

Figure 11. Tumor localization image

Figure 12. Tumor segmented region

4.1 Features extracted

We began our experiment with a small set of 200 samples extracted from the BRATS dataset [24]. We used data augmentation techniques to enlarge the dataset for training, resulting in a significant increase to a total of 2000 samples.

Following that, we divided the dataset into two distinct subsets to facilitate training and evaluation of our proposed BioSwarmNet Model: a training set, which contained 80% of the total 2000 samples (1600 images), and a testing set, which contained the remaining 20% (400 images). This partitioning was accomplished by shuffling the dataset at random and then allocating images to their respective subsets.
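A minimal sketch of this shuffle-and-split step, assuming the 2000 augmented samples are held in parallel NumPy arrays `images` and `labels` (illustrative names; the paper does not specify the tooling):

```python
import numpy as np

rng = np.random.default_rng(42)     # fixed seed for reproducibility (arbitrary choice)
idx = rng.permutation(len(images))  # random shuffle of all 2000 samples
split = int(0.8 * len(images))      # 80/20 split -> 1600 train / 400 test
train_x, test_x = images[idx[:split]], images[idx[split:]]
train_y, test_y = labels[idx[:split]], labels[idx[split:]]
```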

Let us use an example to demonstrate this division: In Table 1, we present a sample selection of five images. This table is an illustrative representation of our dataset and gives an idea of the variety of samples included.

We concentrated on extracting important features from the images, such as energy, entropy, contrast, correlation, and homogeneity, during our analysis. These characteristics are important in characterising the images and in evaluating the performance of our proposed model.

Figure 13 depicts key image characteristics such as energy, contrast, correlation, and homogeneity. These characteristics are critical for describing the content and properties of the images in our dataset.

Table 1. Features extracted for samples of brain tumor images

Features Extracted | Sample 1 | Sample 2 | Sample 3 | Sample 4 | Sample 5
Energy | 0.01404 | 0.01113 | 0.01075 | 0.01606 | 0.01057
Contrast | 0.41 | 0.483 | 0.592 | 0.6312 | 0.69
Correlation | 0.9762 | 0.9341 | 0.9843 | 0.954 | 0.99
Homogeneity | 0.4016 | 0.5643 | 0.434 | 0.53128 | 0.4541
Entropy | 9.8523 | 9.7573 | 9.7594 | 9.7783 | 9.6932

The overall intensity or magnitude of the pixel values in an image is measured by energy. Each point in the plot represents an image, and the position of the point on the vertical axis represents the energy value for that image. Images with higher energy values have more intense pixel variations or patterns. Lower values indicate images with more uniform pixel distributions.

The difference in pixel intensities within an image is measured by contrast. Each image is represented by a point in the plot, with the vertical position indicating the contrast value. Higher contrast images have distinct variations in pixel intensity, resulting in higher contrast values. Images with more uniform pixel intensities, on the other hand, have lower contrast values.

The linear relationship between pixel intensities in an image is quantified by correlation. Correlation plot points represent individual images, with their vertical position indicating correlation values. Images with high correlation values have pixel intensities that change consistently in relation to one another, often indicating structured patterns. Lower correlation values indicate that pixel intensities are less linearly related. Homogeneity measures the uniformity of pixel intensities within an image. Each image in Figure 13 corresponds to a point on the plot, with the vertical position indicating homogeneity. Images with high homogeneity have relatively uniform pixel intensities, implying a consistent texture or pattern. Images with lower homogeneity values have more varied pixel intensities.

It's worth noting that Figure 13 provides a visual summary of the feature values for the images in our dataset. The distribution of points on the plot reveals information about the image's diversity and the range of feature values present. This information is critical for comprehending the dataset's properties and their potential impact on the performance of our proposed BioSwarmNet Model. In summary, Figure 13 provides an insightful look at how these four important image features (Energy, Contrast, Correlation, and Homogeneity) vary across our dataset, allowing us to better understand the underlying patterns and characteristics of the images we're working with.

Figure 13. Plot of features (Energy, contrast, correlation and homogeneity)

Figure 14. Plot of entropy

Entropy, as a measure of randomness or information content, has a significantly different scale than the other features mentioned. While the values of energy, contrast, correlation, and homogeneity are generally within a certain range, entropy values can vary greatly, often spanning a much larger scale. Including entropy in the same plot would result in a distorted visualisation, making it difficult to effectively interpret variations in other features. By removing entropy from the plot in Figure 13, we can create a separate visualisation or analysis for entropy in Figure 14. This method provides a clearer understanding of the distribution and variation of entropy values within the dataset, without the interference of other features' scale differences.

4.2 Accuracy

The accuracy of the brain tumor detection can be calculated using the following formula:

$Accuracy=\frac{(T P+T N)}{(T P+T N+F P+F N)}$                    (19)

where, TP – True Positive (tumours correctly identified), TN – True Negative, FP – False Positive, FN – False Negative (tumours not identified).
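Together with the sensitivity and specificity defined in Sections 4.3 and 4.4 (Eqs. (20) and (21)), all three metrics follow directly from the confusion-matrix counts; a small helper makes this explicit:

```python
def evaluate(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity per Eqs. (19)-(21)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # Eq. (19)
    sensitivity = tp / (tp + fn)                # Eq. (20)
    specificity = tn / (tn + fp)                # Eq. (21)
    return accuracy, sensitivity, specificity
```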

Table 2 provides a detailed overview of the accuracy scores obtained by various methods in the field of brain tumour detection. The proposed BioSwarmNet model, in particular, stands out with an exceptional accuracy rate of 99.12%. This outstanding performance puts it ahead of other noteworthy approaches such as deep convolutional neural networks, MultiRes U-net, statistical and machine learning methods, Particle Swarm Optimisation (PSO), Active Deep Learning, and Enhanced Region Growing. The BioSwarmNet model's ability to consistently achieve superior accuracy demonstrates its efficacy in identifying brain tumours from medical images.

Table 2. Accuracy parametric comparison for brain tumor detection

S.No | Techniques Used | Accuracy (%)
1 | Deep convolutional neural networks [20] | 95.1
2 | MultiRes U-net [17] | 91.65
3 | Statistical and machine learning method [19] | 90
4 | Particle swarm optimization (PSO) [22] | 97
5 | Active Deep Learning [23] | 92
6 | Enhanced Region Growing [15] | 98
7 | Proposed Method | 99.12

Figure 15. Comparison plot for accuracy

By visually depicting accuracy comparisons, Figure 15 reinforces the BioSwarmNet model's superiority. The various methods are listed on the X-axis, and the corresponding accuracy percentages are shown on the Y-axis. The plot clearly shows that the BioSwarmNet model outperforms its competitors, with an impressive accuracy score of 99.12%. This visual representation cements the model's position as a leading solution for brain tumour detection, promising more accurate and reliable results than existing methods.

4.3 Sensitivity

Sensitivity is the proportion of true positives that were identified by the model. It indicates the model's ability to correctly classify tumour or cancer cases.

$Sensitivity=\frac{T P}{T P+F N}$                   (20)

Table 3 provides a detailed comparison of the sensitivity values obtained by various methods in the context of brain tumour detection. Notably, the proposed BioSwarmNet model achieves an excellent sensitivity score of 98.62%. This high sensitivity indicates that the model can detect true positive cases of brain tumours while minimising false negatives. Its superior sensitivity demonstrates its robustness and effectiveness in accurately identifying brain tumours.

Table 3. Sensitivity parametric comparison for brain tumor detection

S.No | Techniques Used | Sensitivity (%)
1 | Deep convolutional neural networks [20] | 95
2 | Convolutional Neural Network with Tensor Flow [16] | 82
3 | Statistical and machine learning method [19] | 91
4 | HCNN and CRF-RRNN Model [18] | 97.8
5 | Unpaired GAN [21] | 94.89
6 | Active deep neural network features selection [23] | 98.39
7 | Enhanced Region Growing [15] | 86.7
8 | Proposed Method | 98.62

Figure 16 depicts sensitivity comparisons between the proposed BioSwarmNet model and other methods, which are listed on the X-axis. The corresponding sensitivity percentages are shown on the Y-axis.

Figure 16. Comparison plot for sensitivity

With a sensitivity score of 98.62%, this plot visually emphasises the BioSwarmNet model's superior sensitivity. The model outperforms other methods in terms of sensitivity, demonstrating its ability to correctly identify true positive cases in brain tumour detection tasks.

In conclusion, both Table 3 and Figure 16 highlight the proposed BioSwarmNet model's remarkable sensitivity in the detection of brain tumours. Its performance outperforms conventional methods.

4.4 Specificity

Specificity is the proportion of true negatives identified correctly by the model. It indicates the model's ability to correctly classify non-tumor or non-cancer cases.

$Specificity=\frac{T N}{(T N+F P)}$                    (21)

Table 4. Specificity parametric comparison for brain tumor detection

S.No | Techniques Used | Specificity (%)
1 | Deep convolutional neural networks [20] | 97.2
2 | Statistical and machine learning method [19] | 90
3 | Particle swarm optimization (PSO) [22] | 98.1
4 | Active Deep Learning [23] | 96.06
5 | Enhanced Region Growing [15] | 99.7
6 | Proposed Method | 99.86

Table 4 compares the specificity values obtained by various methods in the domain of brain tumour detection. Notably, the proposed BioSwarmNet model has a high specificity score of 99.86%. This high specificity reflects the model's ability to correctly identify true negative cases, effectively reducing false positives in the detection of brain tumours. In terms of specificity, the BioSwarmNet model consistently outperforms the methods listed in the table, which include deep convolutional neural networks, statistical and machine learning techniques, Particle Swarm Optimisation (PSO), Active Deep Learning, and Enhanced Region Growing. This outstanding performance demonstrates the model's robustness and effectiveness in distinguishing healthy brain scans from those with tumours.

Figure 17 depicts specificity comparisons between the proposed BioSwarmNet model and other methods, which are listed on the X-axis. The corresponding specificity percentages are shown on the Y-axis.

With a specificity score of 99.86%, this plot visually emphasises the BioSwarmNet model's superior specificity. The model's specificity outperforms other methods, demonstrating its ability to correctly identify true negatives in brain tumour detection tasks.

Figure 17. Comparison plot for specificity

In conclusion, both Table 4 and Figure 17 highlight the exceptional specificity attained by the proposed BioSwarmNet model in the context of brain tumour detection. Its performance outperforms established methods such as deep learning, statistical and machine learning approaches, optimisation techniques, and region-based methods, making it a dependable and accurate tool for identifying healthy brain scans.

The "BioSwarmNet" model, while innovative in tackling medical image classification, acknowledges its limitations such as data sensitivity, potential overfitting, and high computational demands. These challenges are well within reach of being addressed through strategic future adjustments and studies. Enhancements in data diversity, algorithm optimization, and model interpretability are not only feasible but are actively planned, ensuring the model's continuous improvement and robustness in clinical applications.

5. Conclusions

This research paper has presented a thorough examination of an advanced medical image analysis system, supported by a novel and innovative model known as "BioSwarmNet." The proposed methodology includes an image processing pipeline that has been meticulously designed to ensure data uniformity and quality. The system starts with user interaction to select a dataset and then moves on to pre-processing steps such as resizing, grayscale conversion, noise reduction, and feature extraction using the Discrete Wavelet Transform (DWT) and Grey Level Co-occurrence Matrix (GLCM). This comprehensive workflow concludes with the classification of medical images, allowing for tailored actions based on the results, such as tumour localization and performance evaluation metrics. Notably, the proposed system has achieved remarkable results in key metrics, including 99.12% accuracy, 98.62% sensitivity, and 99.86% specificity. Achieving high accuracy, sensitivity, and specificity with the "BioSwarmNet" model translates into substantial real-world clinical benefits. Enhanced accuracy ensures reliable diagnoses, reducing unnecessary treatments and improving patient outcomes. High sensitivity aids in early disease detection, increasing treatment effectiveness, while high specificity minimizes false positives, preventing undue patient stress and reducing wasteful healthcare spending. Collectively, these improvements streamline healthcare workflows and elevate the standard of patient care.

The revolutionary "BioSwarmNet" model is a hybrid of Fractional Order Differential Particle Swarm Optimisation (FODPSO) and Recurrent Neural Networks (RNNs). By combining the power of swarm intelligence and deep learning for automated medical image classification, this model has the potential to revolutionise the field of medical image analysis. The integration of FODPSO for automated hyperparameter tuning with RNNs in BioSwarmNet demonstrates its ability to improve diagnostic accuracy in a healthcare-focused application. Furthermore, the study made use of the well-known and preferred BRATS dataset, which added credibility and relevance to the research. Adopting a multi-disciplinary approach is crucial for the further development of the "BioSwarmNet" model. Collaboration between computer scientists, radiologists, and other healthcare professionals will enhance the model's accuracy and clinical applicability. Such teamwork ensures algorithmic improvements are clinically relevant and aligned with healthcare standards, leading to a robust, universally effective diagnostic tool.

Acknowledgment

I would like to express my sincere gratitude to my supervisor, Dr. Pavan Kumar Pagadala, for his invaluable guidance, unwavering support, and insightful feedback throughout the completion of this research work, especially on the proposed algorithm and the results section. His expertise and encouragement have been instrumental in shaping the direction of this work and ensuring its success. I am deeply grateful for his mentorship and dedication.

Nomenclature

x, y | Pixel coordinates
u, v | Frequency domain coordinates
w | Size of window
d | Distance
TP | True Positive
TN | True Negative
FP | False Positive
FN | False Negative
$T_i(k)$ | Transformation function

Greek symbols

$\psi(x),\ \phi(x)$ | Haar wavelet functions
$\delta(\cdot)$ | Dirac delta function
$\sigma_i,\ \sigma_j$ | Standard deviations

References

[1] Behin, A., Hoang-Xuan, K., Carpentier, A.F., Delattre, J.Y. (2003). Primary brain tumours in adults. The Lancet, 361(9354): 323-331. https://doi.org/10.1016/S0140-6736(03)12328-8

[2] Louis, D.N., Perry, A., Reifenberger, G., Von Deimling, A., Figarella-Branger, D., Cavenee, W.K., Ohgaki, H., Wiestler, O.D., Kleihues, P., Ellison, D.W. (2016). The 2016 World Health Organization classification of tumors of the central nervous system: A summary. Acta Neuropathologica, 131: 803-820. https://doi.org/10.1007/s00401-016-1545-1

[3] El-Dahshan, E.S.A., Mohsen, H.M., Revett, K., Salem, A.B.M. (2014). Computer-aided diagnosis of human brain tumor through MRI: A survey and a new algorithm. Expert Systems with Applications, 41(11): 5526-5545. https://doi.org/10.1016/j.eswa.2014.01.021

[4] Meng, Y., Tang, C., Yu, J., Meng, S., Zhang, W. (2020). Exposure to lead increases the risk of meningioma and brain cancer: A meta-analysis. Journal of Trace Elements in Medicine and Biology, 60: 126474. https://doi.org/10.1016/j.jtemb.2020.126474

[5] Abeloff's Clinical Oncology (Sixth Edition), 2020, pp. 1975-2037. https://doi.org/10.1016/B978-0-323-47674-4.00130-4

[6] Adult central nervous system tumors treatment (PDQ) — Patient version. National Cancer Institute. https://www.cancer.gov/types/brain/patient/adult-brain-treatment-pdq/, accessed on Sept. 27, 2022.

[7] Brain tumor. Cancer.Net. https://www.cancer.net/cancer-types/brain-tumor/view-all/, accessed on Nov. 1, 2022.

[8] Nalbalwar, R., Majhi, U., Patil, R., Gonge, S. (2014). Detection of brain tumor by using ANN. International Journal of Research in Advent Technology, 2(3): 7.

[9] Watson, C., Kirkcaldie, M., Paxinos, G. (2010). The brain: An introduction to functional neuroanatomy. Academic Press.

[10] Nuñez, M.A., Miranda, J.C.F., de Oliveira, E., Rubino, P.A., Voscoboinik, S., Recalde, R., Akiyama, O., Jawar, S.S., Neto, M.R., Fernandes, D., Salas, E. (2019). Brain stem anatomy and surgical approaches. In Comprehensive overview of modern surgical approaches to intrinsic brain tumors, pp. 53-105. https://doi.org/10.1016/B978-0-12-811783-5.00004-5

[11] Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C. (2018). Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510-4520.

[12] Bobbillapati, S., Rani, A.J. (2014). Automatic detection of brain tumor through magnetic resonance image. International Journal of Scientific and Research Publications, 3(11): 1-5.

[13] Firke, O.K., Phalak, H. (2016). Brain tumor detection using CT scan images. International Journal of Engineering Science and Computing, 6(8): 2568-2570.

[14] Javed, A.R., Sarwar, M.U., Beg, M.O., Asim, M., Baker, T., Tawfik, H. (2020). A collaborative healthcare framework for shared healthcare plan with ambient intelligence. Human-centric Computing and Information Sciences, 10(1): 40. https://doi.org/10.1186/s13673-020-00245-7

[15] Biratu, E.S., Schwenker, F., Debelee, T.G., Kebede, S.R., Negera, W.G., Molla, H.T. (2021). Enhanced region growing for brain tumor MR image segmentation. Journal of Imaging, 7(2): 22. https://doi.org/10.3390/jimaging7020022

[16] Malathi, M., Sinthia, P. (2019). Brain tumour segmentation using convolutional neural network with tensor flow. Asian Pacific Journal of Cancer Prevention, 20(7): 2095-2101. https://doi.org/10.31557/APJCP.2019.20.7.2095

[17] Ibtehaz, N., Rahman, M.S. (2020). MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Networks, 121: 74-87. https://doi.org/10.1016/j.neunet.2019.08.025

[18] Deng, W., Shi, Q., Wang, M., Zheng, B., Ning, N. (2020). Deep learning-based HCNN and CRF-RRNN model for brain tumor segmentation. IEEE Access, 8: 26665-26675. https://doi.org/10.1109/ACCESS.2020.2966879

[19] Amin, J., Sharif, M., Raza, M., Saba, T., Anjum, M.A. (2019). Brain tumor detection using statistical and machine learning method. Computer Methods and Programs in Biomedicine, 177: 69-79. https://doi.org/10.1016/j.cmpb.2019.05.015

[20] Amin, J., Sharif, M., Yasmin, M., Fernandes, S.L. (2018). Big data analysis for brain tumor detection: Deep convolutional neural networks. Future Generation Computer Systems, 87: 290-297. https://doi.org/10.1016/j.future.2018.04.065

[21] Nema, S., Dudhane, A., Murala, S., Naidu, S. (2020). RescueNet: An unpaired GAN for brain tumor segmentation. Biomedical Signal Processing and Control, 55: 101641. https://doi.org/10.1016/j.bspc.2019.101641

[22] Sharif, M., Amin, J., Raza, M., Yasmin, M., Satapathy, S.C. (2020). An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor. Pattern Recognition Letters, 129: 150-157. https://doi.org/10.1016/j.patrec.2019.11.017

[23] Sharif, M.I., Li, J.P., Khan, M.A., Saleem, M.A. (2020). Active deep neural network features selection for segmentation and recognition of brain tumors using MRI images. Pattern Recognition Letters, 129: 181-189. https://doi.org/10.1016/j.patrec.2019.11.019

[24] Menze, B.H., Jakab, A., Bauer, S., Kalpathy-Cramer, J., Farahani, K., Kirby, J., Burren, Y., Porz, N., Slotboomy, J., Wiest, R., Lancziy, L., Gerstnery, E., Webery, M., Arbel, T., Avants, B., Ayache, N., Buendia, P., Collins, D.L., Cordier, N., Van Leemput, K. (2014). The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging, 34(10): 1993-2024. https://doi.org/10.1109/TMI.2014.2377694

[25] Talukder, K.H., Harada, K. (2011). Enhancement of discrete wavelet transform (DWT) for image transmission over internet. In 2011 Eighth International Conference on Information Technology: New Generations, pp. 1054-1055. https://doi.org/10.1109/ITNG.2011.184

[26] Susrutha, G., Mallikarjun, K., Kumar, M.A., Ashok, M. (2019). Analysis on FFT and DWT transformations in image processing. In 2019 International Conference on Emerging Trends in Science and Engineering (ICESE), pp. 1-4. https://doi.org/10.1109/ICESE46178.2019.9194662

[27] Tan, Y. (2007). A wavelet thresholding image enhancement method based on edge detection. Information Development and Economy, 17(18): 206-208.

[28] Dorothy, R., Joany, R.M., Rathish, R.J., Prabha, S.S., Rajendran, S., Joseph, S.T. (2015). Image enhancement by histogram equalization. International Journal of Nano Corrosion Science and Engineering, 2(4): 21-30.

[29] Patel, O., Maravi, Y.P., Sharma, S. (2013). A comparative study of histogram equalization based image enhancement techniques for brightness preservation and contrast enhancement. arXiv preprint arXiv:1311.4033. https://doi.org/10.5121/sipij.2013.4502

[30] Nithyananda, C.R., Ramachandra, A.C. (2016). Review on histogram equalization based image enhancement techniques. In 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT), pp. 2512-2517. https://doi.org/10.1109/ICEEOT.2016.7755145