A Cloud Edge Based Heart Disease Detection Using DenseNet Convoluted Radial Basis Neural Network for Diabetic Patients

G. Nanda Kishor Kumar*, Bhuvan Unhelkar, K. Suvarna Vani, Prasun Chakrabarti, Kayam Saikumar

Department of Computer Science and Engineering, Malla Reddy University, Hyderabad 500043, India

Muma College of Business, University of South Florida, Tampa 33620, United States

CSE Department, Siddhartha Academy of Higher Education, Vijayawada 520007, India

Department of Computer Science and Engineering, Sir Padampat Singhania University, Udaipur 313601, India

Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation, Hyderabad 500075, India

Corresponding Author Email: drgnkishor@mallareddyuniversity.ac.in

Page: 1481-1492 | DOI: https://doi.org/10.18280/ts.420321

Received: 15 October 2024 | Revised: 27 March 2025 | Accepted: 13 June 2025 | Available online: 30 June 2025

© 2025 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Heart disease (HD) is a complex medical condition that affects a vast number of people globally. Quick and precise identification of HD is crucial in healthcare, particularly in cardiology. In the pre-processing stage of the data mining process, a high-dimensional database is employed to classify HD. This unprocessed dataset contains redundant and inconsistent data, which expands both the search space and the data storage requirements. Using deep learning methods, the suggested research identifies significant predictive properties of cardiac complaints. This research proposes novel heart disease detection techniques based on feature extraction and classification through deep learning (DL) architectures. A dataset of 100,000 samples was collected from the Cleveland database in the UCI open-source repository, with 74 features and a balanced instance rate. The input heart disease data were pre-processed and segmented for filtering and edge normalization. The input image was processed using contrast-based histogram equalization (CHE) and segmented by thresholding the image. The segmented image was then processed to extract in-depth features, which were classified using DenseNet with a convoluted radial basis neural network. Several clinical measures are used to gauge patients' risk contours, which aids early diagnosis. In the proposed model, various regularization methods are applied to avoid overfitting. On the dataset, the suggested model attains 72% sensitivity, 90% recall, 88% F-measure, 94% throughput, 92% training accuracy, and 95% testing accuracy. It is compared to other DL methods using a variety of performance metrics, demonstrating the effectiveness of the proposed approach.

Keywords: 

heart disease, deep learning, IoT-cloud data, feature extraction, classification

1. Introduction

HD is a severe health problem that has affected millions of people worldwide. Breathlessness, physical weakness, and swollen feet are prominent signs of HD. Because current HD diagnostic techniques are ineffective, scientists are working to develop a more effective early-detection approach. According to the European Society of Cardiology, 3.6 million individuals are diagnosed with HD every year [1]. HD is also highly prevalent in the United States [2]. A non-invasive diagnostic scheme employing ML classifiers is being developed to address these difficulties. HD is significantly diagnosed using an expert decision method based on ML classifiers and artificial fuzzy logic [3]. Put simply, chronic diseases are a major killer throughout the Maghreb region, including Algeria; addressing chronic illness is therefore one of the most important steps toward improving human health. In recent years, cardiovascular disease (CVD) has emerged as the primary killer of both men and women worldwide. Although real-world specialists may be able to foresee the illness by performing a large number of tests that demand massive processing time, their predictions can still be inaccurate due to a lack of expert knowledge [4]. Meanwhile, the development of AI and ML has assisted in extracting useful information from the enormous datasets available in hospitals for the purpose of making informed choices. This includes using data mining strategies to analyse patient records. Data mining has become increasingly popular for this purpose because its technologies can identify data trends and turn them into information that can support study [3]. The main goal of CVD prevention research is to create accurate prediction tools. Given the high CVD mortality rate and the availability of abundant patient data, data mining is assumed to help healthcare professionals diagnose CVD. The purpose of this study is not to take the position of a specialised physician, but rather to assist physicians in seeking an alternative opinion and exploring what is feasible in emergency circumstances.

As its methods have gained popularity and become more readily accessible, ML has greatly progressed. Several ground-breaking applications, such as face identification, system security, disease diagnostics, and drug research, have a profound impact on people's lives. The foundation for creating ML applications is distinct from that of most conventional programming techniques: prediction techniques are frequently developed using past data to make predictions about potential future events [5]. The Cleveland HD dataset, used in this research and downloadable from the UCI ML Repository, originally had 76 attributes and 303 cases [6]. HD is among the most important public health problems today, affecting a significant number of people in a wide variety of countries. HD manifests itself with the classic signs of shortness of breath, physical weakness, and swollen feet. The present diagnostic procedures for early HD detection are difficult for many reasons, including accuracy and execution time, so experts are working on a more reliable method for early detection of HD. When current technology and trained medical professionals are not readily available, diagnosing and treating cardiac disease is highly challenging [7]. Numerous lives can be saved when patients receive an accurate diagnosis and appropriate therapy [8]. Almost 26 million people have HD, and 3.6 million are recognized annually, according to the European Society of Cardiology [6]. Researchers have previously described attempts to develop the most accurate model for anticipating cardiovascular illness, and the use of ML and fuzzy logic methods to forecast cardiac disease is still under investigation in numerous studies. The studies pertinent to the suggested method are examined in this section. In reference [9], an ML model that combines five separate methods has been suggested. In practice, integrating an ML model with medical data networks can detect heart failure and other conditions using real-time patient data. The authors suggested a brand-new hybrid method for predicting cardiac disease that incorporates all existing methodologies into a single algorithm, and the outcome demonstrates precise diagnosis. Experts then classified the data using a variety of methods, including K-nearest neighbour, SVM, random forests, and a multilayer perceptron tuned using particle swarm optimization (PSO) paired with ant colony optimization (ACO) techniques. Reference [10] discusses developing a predictive model for HD diagnosis by combining a fuzzy decision-tree-based technique; the study identified patients suffering from HD with 88% statistical significance, and this approach outperforms several other approaches already in use. The study by Li et al. [11] proposes a brand-new approach termed the Fuzzy Neural Network with Hybrid Differential Evolution. By using the genetic method to improve the neural network's initial weight updating, this method can effectively diagnose heart disease.

HD affects a significant percentage of the population in the United States. The diagnostic process for HD has three primary components: investigation of the patient's medical history, the results of a physical examination, and an examination of the patient's symptoms. However, the data acquired from this form of diagnosis are not reliable in determining whether a patient has HD, and analysing them is both financially and computationally challenging. To tackle such problems, it is necessary to create a non-invasive diagnostic system that depends on machine learning (ML) classifiers: an expert decision system that is effective in diagnosing HD and is based on ML classifiers and applications of artificial fuzzy logic. This reduces the number of deaths per unit of time [12]. The identification problem of HD was tackled by a number of studies [13] that made use of the Cleveland HD dataset. Prediction models created by machine learning require accurate data in order to be trained and tested, and a balanced dataset can increase ML model performance when used for training and testing. In addition, the predictive capability of a model can be increased by utilising characteristics that are appropriate and connected to those in the data. Consequently, the balance of the data and the selection of the characteristics are extremely critical for enhancing the model's performance. In the medical literature, numerous researchers have suggested numerous diagnostic procedures; unfortunately, these methods do not properly diagnose HD [13]. The preparation and standardisation of data are both essential if one wishes to enhance the prediction capacity of an ML model and thereby improve performance [14, 15]. The genetic algorithm is able to find the hidden-layer weights of a neural network that yield the greatest results [16]. Reference [17] suggests utilising neuro-fuzzy genetics to identify cardiovascular disease risk. With the assistance of a genetic algorithm, the suggested solution contributes to making the system both more precise and efficient [18]. The objectives of this work are: to propose a novel technique for heart disease detection by feature extraction and classification through deep learning (DL) architectures [19]; to process the input data based on contrast-based histogram equalization (CHE) and segment it by thresholding the image; and to extract deep features and classify them using DenseNet with a convoluted radial basis neural network [20].

1.1 Literature survey

Researchers have suggested a number of different ML-based diagnostic strategies throughout the published research on HD. This analysis explains the significance of the proposed work by presenting several existing machine-learning-based diagnosis procedures. The HD classification method built by Saikumar et al. [21] made use of ML classification approaches and achieved 77% accuracy; global evolutionary and feature selection methods were utilized with the Cleveland database. Saikumar and Rajesh [22] used multilayer perceptron and SVM algorithms to create an HD classification diagnosis system with an 80.41% accuracy rate. The HD classification system developed by Saikumar and Rajesh [23] makes use of a neural network and incorporates fuzzy logic into its operation; this classification system was determined to have an accuracy of 87.4%.

Using the statistical measurement system Enterprise Miner, Garigipati et al. [24] created an ANN-ensemble-based HD diagnosis approach with an accuracy of 89.01%, sensitivity of 80.09%, and specificity of 95.91%. Akil et al. created an ML-based HD diagnostic system [25] in which the ANN-DBP and FS algorithms work well together. Palaniappan et al. proposed an expert diagnosis procedure for HD detection [26]; the systems were developed using NB, DT, and ANN, with NB reaching 86.12% accuracy, ANN 88.12%, and DT 80.4%. For HD prediction, Gudadhe et al. [27] created a three-stage method using an artificial neural network that reached 88.89% accuracy [28].

2. Related Works

Various algorithms have been utilized to develop effective cardiovascular disease (CVD) prediction, including logistic regression (LR), K-nearest neighbours (KNN), and random forest (RF) classifiers, among others. In terms of hitting the set targets, the outcomes reveal that each technique has unique strengths. Reference [29] employed Gaussian discriminant analysis (GDA) to extract non-linear features and a binary classification technique such as an extreme learning machine to reduce overfitting and shorten the period required for training, with Fisher scoring as a universal ranking system; a 100% success rate was reported in spotting signs of coronary HD. Jabbar et al. [30] used heart rate variability (HRV) to detect arrhythmia. A multilayer perceptron NN was used for classification, and full accuracy was achieved either by drastically reducing the number of characteristics used or by employing a generalised boosting algorithm [31]. Support vector machines and Gaussian discriminant analysis were used to decrease the number of HRV signal features to 15. Dimensionality reduction methods such as principal component analysis (PCA) [32] can help store important information in new components when dealing with data that is highly variable or has a large number of dimensions, and numerous scientists resort to PCA when dealing with high-dimensional data [33]. An NN was one of five unsupervised dimensionality reduction methods utilized. Every study uses classification to verify whether a person has HD or not, and the most consistently and generally utilized dataset is Cleveland [34]. Results were quite accurate, with RF achieving 89.2% accuracy and DT achieving 89.1% accuracy [35].

3. System Model

This section proposes novel heart disease detection techniques based on feature extraction and classification through deep learning (DL) architectures.

First, data collection is carried out via the IoT-Cloud module of the proposed design. The input heart disease data are pre-processed and segmented for filtering and edge normalization. The input image is processed using CHE and segmented by thresholding the image. CHE is a pixel-balancing technique in which the pixel-density distribution of the image is normalized using object segmentation statistics. The segmented image is then processed to extract the in-depth features, which are classified using DenseNet with a convoluted radial basis neural network. Figure 1 depicts the overall design concept.

(1) Feature selection

The technique of selecting significant characteristics from among the original features based on certain conditions is known as feature selection. Feature selection algorithms often fall into one of three types [10]: filter, wrapper, or hybrid models, designed to be used with a variety of evaluation criteria. The wrapper method in Keel was mainly used here. The purpose is to identify the patient's private data: of the dataset's 14 attributes, the two referring to the patient's age and sex serve this role, while the remaining 12 attributes are regarded as highly valuable because they contain crucial clinical records.

(2) Feature extraction

The technique of deriving new characteristics from an existing set of characteristics, typically by using some kind of functional mapping, is referred to as feature extraction. PCA, a popular dimensionality reduction method and one of Weka's most widely used tools for medical data, represents the collected data as components or features. Using PCA, the number of attributes was narrowed down to six, each of which makes a greater contribution to the diagnosis of CVD.

(3) Pre-processing of the database

The database must be processed beforehand, which is necessary for accurate representation. Pre-processing the dataset included removing records with missing attribute values and applying the Standard Scaler (SS) and Min-Max Scaler, as sketched below.
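The following is a minimal sketch of this pre-processing and PCA pipeline using scikit-learn; the file name cleveland_heart.csv and the label column name "target" are illustrative assumptions, not part of the original work.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA

# Hypothetical file name; the Cleveland data are available from the UCI repository.
df = pd.read_csv("cleveland_heart.csv").dropna()  # drop records with missing values

X = df.drop(columns=["target"]).values  # "target" is an assumed label column name
y = df["target"].values

# Standard scaling followed by min-max scaling, per the pre-processing step above.
X = StandardScaler().fit_transform(X)
X = MinMaxScaler().fit_transform(X)

# PCA narrows the attributes down to the six components described in Section 3(2).
pca = PCA(n_components=6)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape, pca.explained_variance_ratio_.sum())
```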

Figure 1. Proposed methodology framework

3.1 Pre-processing using CHE

CHE is a simple and widely used method for improving image contrast. It can boost contrast and flatten the density distribution of the produced image. A key aspect of CHE is extending the dynamic range of the picture, which can improve the contrast of the supplied image and yield the best possible results. However, the input image's brightness, visual quality, and ability to bring out unique objects can be altered; as a result, this method is not suitable for images in which extraordinary intensity and aspect must be preserved. The collected dataset is open source and carries a privacy licence from the repository. When the original image's pixel value a lies in the range 0≤a≤1, its probability density is denoted by $p_a(a)$; when the enhanced image's pixel value b lies in the range 0≤b≤1, its probability density is $p_b(b)$, with mapping function b=T(a). The histogram relation is given in Eq. (1):

$p_a(a) d a=p_b(b) d b$    (1)

For the inverse function $a=T^{-1}(b)$, Eqs. (2) and (3) follow:

$p_b(b)=\left[p_a(a) \frac{1}{d b / d a}\right]_{a=T^{-1}(b)}$    (2)

$p_b(b)=p_a(a) \frac{1}{p_a(a)}=1$    (3)

Eqs. (4) and (5) give the relationship between the original image, with l intensity levels, and the improved image $f_i$:

$f_i=(l-1) T(b)$    (4)

$f_i=(l-1) \sum_{k=0}^i \frac{t_k}{T n}$    (5)

$T_n$ is the total number of pixels in the image, and $t_k$ is the number of pixels with the k-th intensity. If an image contains n different intensity levels and the probability of the i-th level is $p_i$, the entropy is given by Eq. (6):

$e n(i)=-p_i log p_i$    (6)

The entropy of the entire image is determined using Eqs. (7) and (8):

$E n=\sum_{i=0}^{n-1} e n(i)$    (7)

$E n=-\sum_{i=0}^{n-1} p_i \log p_i$     (8)

where, $p_0=p_1=\cdots=p_{n-1}=\frac{1}{n}$.
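As a concrete illustration, the following is a minimal NumPy sketch of histogram equalization as defined by Eqs. (4)-(5); it assumes an 8-bit grayscale input and is not the authors' released implementation.

```python
import numpy as np

def contrast_histogram_equalization(img, levels=256):
    """Histogram equalization per Eqs. (4)-(5): f_i = (l - 1) * sum_k (t_k / Tn)."""
    hist = np.bincount(img.ravel(), minlength=levels)  # t_k: pixels per intensity
    cdf = np.cumsum(hist) / img.size                   # cumulative t_k / Tn
    mapping = np.round((levels - 1) * cdf).astype(np.uint8)
    return mapping[img]

# Usage on a synthetic low-contrast grayscale image:
img = np.clip(np.random.normal(120, 10, (64, 64)), 0, 255).astype(np.uint8)
enhanced = contrast_histogram_equalization(img)
```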

3.2 Segmentation using Otsu thresholding method

Let $X_i$ denote the pixel values of the image, $\mu$ their mean, and N the total number of pixels; the variance $\sigma^2$ is computed as in Eq. (9):

$\sigma^2=\frac{\sum_{i=0}^N\left(X_i-\mu\right)^2}{N}$    (9)

where, $X_i$ is the image's pixel value, $\mu$ is the mean, and N is the total number of image pixels. Image processing software uses the Otsu method, which performs histogram-based image thresholding, i.e., grayscale-to-binary conversion. To obtain the optimal threshold, the image is segmented into two intra-classes, and the threshold value that minimises the intra-class variance, defined as the weighted sum of the variances of the two classes in Eq. (10), is sought:

$\sigma_\omega^2(t)=\omega_0(t) \sigma_0^2(t)+\omega_1(t) \sigma_1^2(t)$     (10)

where, $\omega_0$ and $\omega_1$ are the weighted probabilities of the two classes separated by a threshold t, and $\sigma_0^2$ and $\sigma_1^2$ are the variances of the two classes, respectively. As stated in Eq. (11), Otsu showed that minimising the intra-class variance is equivalent to maximising the inter-class variance:

$\sigma_b^2(t)=\sigma^2-\sigma_\omega^2(t)$    (11)

$\sigma_b^2(t)=\omega_1(t) \omega_2(t)\left[\mu_1(t)-\mu_2(t)\right]^2$    (12)

where, $\omega_1$ and $\omega_2$ are the class probabilities and $\mu_1$, $\mu_2$ are the class means.

As indicated in Eq. (13): the class probabilities $\omega_1(t)$ are computed from the t histogram:

$\omega_1(t)=\sum_0^t P(i)$    (13)

Eq. (14) gives the class mean $\mu_1(t)$:

$\mu_1(t)=\frac{\sum_0^t P(i) x(i)}{\omega_1(t)}$    (14)

where, x(i) is the central value of the i-th histogram bin.
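The following is a minimal NumPy sketch of the Otsu search over thresholds, maximising the between-class variance of Eq. (12); it is an illustrative implementation under the assumption of an 8-bit grayscale image, not the authors' code.

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Exhaustive search for the t that maximises between-class variance (Eq. (12))."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                        # P(i) as in Eq. (13)
    bins = np.arange(levels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w1, w2 = p[:t].sum(), p[t:].sum()        # class probabilities omega_1, omega_2
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (p[:t] * bins[:t]).sum() / w1      # class means mu_1, mu_2 (Eq. (14))
        mu2 = (p[t:] * bins[t:]).sum() / w2
        var_b = w1 * w2 * (mu1 - mu2) ** 2       # Eq. (12)
        if var_b > best_var:
            best_var, best_t = var_b, t
    return best_t

# Binary segmentation of a synthetic grayscale image:
img = np.clip(np.random.normal(120, 30, (64, 64)), 0, 255).astype(np.uint8)
mask = img >= otsu_threshold(img)
```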

3.3 DenseNet with convoluted radial basis neural network-based feature extraction and classification

The DenseNet CNN architecture is a strong feature-map-generating model that can operate standalone on dynamic datasets. Each layer in DenseNet receives gradients directly from the input and the loss function. Each layer in the architecture is connected to the other layers, so DenseNet generates k feature maps per layer, which are then concatenated with the features acquired by the preceding layers. A model with L layers therefore has L(L+1)/2 connections. The effect of this concatenation operation is that each layer's contribution is delivered to the next tier. In DenseNets, the concatenation of feature maps is represented by the following equation:

$x_l=H_l\left(\left[x_0, x_1, \ldots, x_{l-1}\right]\right)$    (15)

The up-sampling procedure is named Transition-Up (TU) and is used to construct the final label map of the feature extraction. Several changes were made to the connectivity topology of the proposed network architecture to improve parameter efficiency, convergence rate, and memory-path requirements. Figure 2 depicts the proposed DenseNet architecture. The proposed design for semantic segmentation with a densely connected convoluted RBNN was composed of the following modular components: (a) A three-layer Dense Block (DB). The input is supplied to the first layer in a DB, which creates k feature maps; these are combined with the input and given to the second layer, resulting in a new set of k feature maps. This process is performed three times, and the DB's final output is a concatenation of all three layers' outputs, giving 3k feature maps. (b) Each layer in a DB is constructed by combining BN, ELU, a 3×3 convolution, and a dropout rate of p = 0.2. (c) As the network's depth grows, the TD block reduces the spatial resolution of the feature maps; it is made up of BN, ELU, a 2×2 convolution layer, a dropout layer, and 3×3 max-pooling. (d) The TU block performs a 3×3 transposed convolution with a stride of 2 to boost the spatial resolution of the feature maps.

Figure 2. Proposed DenseNet architecture

Figure 3. DenseNets to mitigate the explosion of feature map

The diagrams in Figure 3 depict the changes made to DenseNets to mitigate feature-map explosion while extending to an FCN. The most significant changes are: (i) a projection layer and element-wise addition of feature maps are used instead of the typical copy-and-concatenation, reducing the parameter count and GPU memory footprint; (ii) short-cut (residual) connections are introduced in the up-sampling path; (iii) parallel paths are used in the first layer. Model training supplies samples and feature information to the architecture to form the feature maps. The proposed architecture was trained for 50 epochs with a batch size of 15, with other post-processing metrics adjusted accordingly; the pipeline monitors model metrics during training, which was performed on an Nvidia GPU. Owing to the normalization applied in all pre- and post-processing steps, overfitting and underfitting problems were avoided. A minimal sketch of the dense block and transition-down block follows.
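The following is a minimal Keras sketch of the dense block (item (b) above) and the TD block (item (c)); layer sizes and the 128×128 single-channel input shape are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def dense_layer(x, k):
    """One DB layer: BN -> ELU -> 3x3 Conv(k feature maps) -> Dropout(p = 0.2)."""
    y = layers.BatchNormalization()(x)
    y = layers.Activation("elu")(y)
    y = layers.Conv2D(k, 3, padding="same")(y)
    return layers.Dropout(0.2)(y)

def dense_block(x, k, n_layers=3):
    """Eq. (15): each layer receives the concatenation of all preceding feature maps."""
    inputs, outputs = x, []
    for _ in range(n_layers):
        y = dense_layer(inputs, k)
        outputs.append(y)
        inputs = layers.Concatenate()([inputs, y])
    return layers.Concatenate()(outputs)      # DB output: 3*k feature maps

def transition_down(x, filters):
    """TD block: BN, ELU, 2x2 convolution, dropout, 3x3 max-pooling."""
    y = layers.BatchNormalization()(x)
    y = layers.Activation("elu")(y)
    y = layers.Conv2D(filters, 2, padding="same")(y)
    y = layers.Dropout(0.2)(y)
    return layers.MaxPooling2D(pool_size=3, strides=2, padding="same")(y)

# Assemble one DB + TD stage on an assumed 128x128 single-channel input:
inp = layers.Input((128, 128, 1))
features = transition_down(dense_block(inp, k=16), filters=48)
model = tf.keras.Model(inp, features)
```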

The weighted cross-entropy loss over the predicted probabilities $p\left(t_i \mid x_i ; W\right)$ is given by Eq. (16):

$L_{C E}(X ; W)=-\sum_{x_i \in X} w_{\text {map }}\left(x_i\right) \log \left(p\left(t_i \mid x_i ; W\right)\right)$    (16)

where, X denotes the training data, $t_i$ is the target class label for each voxel $x_i \in X$, and $w_{map}\left(x_i\right)$ represents the weight assigned to each voxel $x_i$ based on its importance or frequency. $p\left(t_i \mid x_i ; W\right)$ is the predicted probability of class $t_i$ given voxel $x_i$ and model parameters W. N is the set of all voxels for each ground-truth image, as indicated in Eq. (17):

$\begin{aligned} w_{ {map }}\left(x_i\right)=\sum_{l \in L} & \frac{|N| * \mathbb{1}_{T_l}\left(x_i\right)}{\left|T_l\right|} \\ & +\sum_{l \in L} \frac{|N| * \mathbb{1}_{C_l}\left(x_i\right)}{\left|C_l\right|}\end{aligned}$     (17)

where, $|\cdot|$ gives the cardinality of a set, and the indicator functions take the value 1 on the labelled subclasses of $N$, i.e., $C_l \subset T_l \subset N, \forall l \in L$, as in Eq. (18):

$\begin{aligned} \mathbb{1}_{T_l}\left(x_i\right) & := \begin{cases}1 & x_i \in T_l \\ 0 & x_i \notin T_l\end{cases} \\ \mathbb{1}_{C_l}\left(x_i\right) & := \begin{cases}1 & x_i \in C_l \\ 0 & x_i \notin C_l\end{cases} \end{aligned}$     (18)

Mini-batch weights were estimated for each training-set instance. Eq. (19) gives the dice loss for multi-class segmentation:

$\begin{gathered}l_{D I C E}(X ; W)=\frac{\sum_{x_i \in X} p\left(t_i \mid x_i ; W\right) g\left(x_i\right)+\epsilon}{\sum_{x_i \in X}\left(p\left(t_i \mid x_i ; W\right)^2+g\left(x_i\right)^2\right)+\epsilon} \\ L_{D I C E}=\frac{\sum_{l \in L} w_l l_{D I C E}}{\sum_{l \in L} w_l}\end{gathered}$    (19)
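The following is a minimal TensorFlow sketch of the two losses; it assumes one-hot targets and softmax outputs, and includes the conventional factor of 2 in the soft-dice numerator, which Eq. (19) as printed omits.

```python
import tensorflow as tf

def weighted_cross_entropy(y_true, y_pred, w_map):
    """Eq. (16): voxel-weighted cross-entropy.
    y_true: one-hot targets; y_pred: softmax probabilities; w_map: per-voxel weights."""
    ce = -tf.reduce_sum(y_true * tf.math.log(y_pred + 1e-7), axis=-1)
    return tf.reduce_sum(w_map * ce)

def soft_dice_loss(y_true, y_pred, eps=1e-5):
    """Per-class soft dice in the spirit of Eq. (19), averaged over classes."""
    num = 2.0 * tf.reduce_sum(y_pred * y_true, axis=[0, 1, 2]) + eps
    den = tf.reduce_sum(y_pred**2 + y_true**2, axis=[0, 1, 2]) + eps
    return 1.0 - tf.reduce_mean(num / den)

# Toy usage on a random 2-class "segmentation":
y_true = tf.one_hot(tf.random.uniform((1, 8, 8), 0, 2, dtype=tf.int32), depth=2)
y_pred = tf.nn.softmax(tf.random.normal((1, 8, 8, 2)))
w_map = tf.ones((1, 8, 8))
loss = weighted_cross_entropy(y_true, y_pred, w_map) + soft_dice_loss(y_true, y_pred)
```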

An RBF_NN can be considered a mapping $R^r \rightarrow R^s$, defined as follows. Let $P \in R^r$ be the input vector, and let $C_i \in R^r$ (1≤i≤u) be the prototypes of the input vectors. The output of the i-th RBF unit is:

$R_i(P)=R_i\left(\left\|P-C_i\right\|\right), \quad i=1, \ldots, u$    (20)

The Gaussian function is usually favoured over all other RBFs because it is factorizable. As a result, Eq. (21):

$R_i(P)=\exp \left[-\frac{\left\|P-C_i\right\|^2}{\sigma_i^2}\right]$     (21)

where, $\sigma_i^2$ is the width of the i-th RBF unit. The j-th output of an RBF_NN is given by Eq. (22):

$y_j(P)=\sum_{i=1}^u R_i(P) \times w(j, i)$    (22)

At first, we set the number of RBF units equal to the number of outputs, u=s, assuming that each class has just one cluster:

$C^k=\frac{1}{n^k} \sum_{i=1}^{n^k} P_i^k, \quad k=1,2, \ldots, u$    (23)

$P_i^k$ is the i-th sample in class k, and $n^k$ denotes the total number of training samples in class k. For any class k, calculate the Euclidean distance $d_k$ between the mean $C^k$ and the furthest point $P^k(f)$ belonging to class k, i.e., Eq. (24):

$d_k=\left\|P^k(f)-C^k\right\|$    (24)

For any class, as shown in Eq. (25), calculate the distance $dc(k, j)$ between class k's mean and the others':

$d c(k, j)=\left\|C^k-C^j\right\|, \quad j=1,2, \ldots, s, j \neq k$    (25)

Find the minimum, as shown in Eq. (26):

$d_{\min }(k, l)=\min _j(d c(k, j)), \quad j=1,2, \ldots, s, j \neq k$    (26)

Examine the relationship between $d_{\min }(k, l)$ and $d_k, d_l$. A minimal sketch of the RBF computations in Eqs. (20)-(24) follows.
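The following NumPy sketch illustrates Eqs. (20)-(24): a Gaussian-RBF forward pass, class-mean prototypes, and widths set from the furthest class member. The toy data, dimensions, and random output weights are illustrative assumptions.

```python
import numpy as np

def rbf_forward(P, centers, sigmas, W):
    """Eqs. (20)-(22): Gaussian RBF units followed by a linear output layer."""
    d2 = np.sum((P[None, :] - centers) ** 2, axis=1)   # ||P - C_i||^2
    R = np.exp(-d2 / sigmas**2)                        # Eq. (21)
    return R @ W.T                                     # y_j = sum_i R_i(P) w(j, i)

def class_centers(X, y, n_classes):
    """Eq. (23): one prototype per class, the mean of that class's samples."""
    return np.stack([X[y == k].mean(axis=0) for k in range(n_classes)])

# Toy usage with u = s = 2 units/classes on random 4-D features:
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
y = np.tile([0, 1], 10)
C = class_centers(X, y, 2)
# d_k of Eq. (24): distance from each mean to the furthest point of its class.
sig = np.array([np.linalg.norm(X[y == k] - C[k], axis=1).max() for k in range(2)])
W = rng.normal(size=(2, 2))                            # output weights w(j, i)
print(rbf_forward(X[0], C, sig, W))
```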

The inputs are not uniformly distributed but are independent; as a result, each input feature has its own expectation $\mu_{x_i}$ and variance $\delta_{x_i}^2$, as in Eq. (27):

$\phi_j(t)=w_j^2(t) e^{\delta_{d_j}(t) / 2 \sigma_j^4(t)-\mu_{d_j}(t) / \sigma_j^2(t)}$    (27)

with, as shown in Eq. (28):

$\left\{\begin{array}{l}d_j(t)=\left\|\mathbf{x}(t)-\mathbf{c}_j(t)\right\|^2 \\ \delta_{d_j}(t)=\sum_{i=1}^n \mu\left[\left(x_i(t)-\mu_{x_i}(t)\right)^4\right]-\left(\delta_{x_i}^2(t)\right)^2+4 \delta_{x_i}^2(t)\left(\mu_{x_i}(t)-c_{j i}(t)\right)^2+4 \mu\left[\left(x_i(t)-\mu_{x_i}(t)\right)^3\right]\left(\mu_{x_i}(t)-c_{j i}(t)\right)\end{array}\right.$    (28)

$\left\{\begin{array}{l}\mu_{d_j}(t)=\sum_{i=1}^n\left[\delta_{x_i}^2(t)+\left(\mu_{x_i}(t)-c_{j i}(t)\right)^2\right] \\ \sigma_j(t)=\phi_j(t) \sum_{i=1}^n\left(\delta_{x_i}^2(t)+\left(\mu_{x_i}(t)-c_{j i}(t)\right)^2 / \sigma_j^4(t)\right)\end{array}\right.$    (29)

In theory, the data distribution need not be precisely constrained as long as the input variation is finite. The sensitivity of the RBFNN is given in Eq. (30) by the law of large numbers:

$E_{\mathbf{X}_s, \Delta y^2}(t)=\frac{1}{N}\left\{\sum_{a=1}^N \int_{\mathbf{X}_s}\left[f\left(\mathbf{x}_a(t)+\Delta \mathbf{x}(t)\right)-f\left(\mathbf{x}_a(t)\right)\right]^2 p(\Delta \mathbf{x}(t)) d \Delta \mathbf{x}(t)\right\} \approx \sum_{j=1}^m \phi_j(t)\left\{\sum_{i=1}^n\left[\delta_{\Delta x_i}^2(t)\left(\delta_{x_i}^2(t)+\left(\mu_{x_i}(t)-c_{j i}(t)\right)^2+0.2 \delta_{\Delta x_i}^2(t)\right) / \sigma_j^4(t)\right]\right\}$    (30)

with:

$\phi_j(t)=\sigma_j^4(t) \xi_j(t)$.    (31)

$E_{\mathbf{X}_{S, \Delta y^2}}(t) \approx \frac{1}{45} S^4 n \sum_{j=1}^m \xi_j(t)+\frac{1}{3} S^2 \sum_{j=1}^m \sigma_j(t)$.    (32)

When t → ∞, e(t) → 0, and since the number of hidden-layer neurons in the RBFNN is constant and the system parameters are updated based on Eqs. (14)-(16), the network is guaranteed to converge. Eqs. (33) and (34) define the Lyapunov function:

$\begin{array}{r}V(e, \mathbf{c}, \boldsymbol{\sigma}, \mathbf{w})=\frac{1}{2}\left(e^2+\Delta \mathbf{c}^T \Delta \mathbf{c}+\Delta \sigma^T \Delta \boldsymbol{\sigma}\right. \\ \left.+\Delta \mathbf{w}^T \Delta \mathbf{w}+\Delta \mathbf{v}^T \Delta \mathbf{v}\right)\end{array}$    (33)

with:

$\left\{\begin{array}{l}\Delta \mathbf{c}=\mathbf{c}^*-\mathbf{c} \\ \Delta \boldsymbol{\sigma}=\boldsymbol{\sigma}^*-\boldsymbol{\sigma} \\ \Delta \mathbf{w}=\mathbf{w}^*-\mathbf{w} \\ \dot{\mathbf{v}}=e \\ \Delta \mathbf{v}=\mathbf{v}^*-\mathbf{v}=\mathbf{w}^{* T}\left(\varphi_c^T \mathbf{c}^*+\varphi_\sigma^T \boldsymbol{\sigma}^*+\Omega\right)-\mathbf{w}^T\left(\varphi_c^T \mathbf{c}^*+\varphi_\sigma^T \boldsymbol{\sigma}^*\right)\end{array}\right.$    (34)

$\begin{aligned} V^{\prime}(e, \mathbf{c}, \sigma, \mathbf{w})= & e e^{\prime}+\Delta c^{\prime T} \Delta c+\Delta \sigma^{\prime T} \Delta \sigma \\ & +\Delta \mathbf{w}^{\prime T} \Delta \mathbf{w}+\Delta \mathbf{v}^{\prime T} \Delta \mathbf{v}\end{aligned}$.     (35)

According to Eqs. (36)-(37):

$\begin{aligned} V^{\prime}(e, \mathbf{c}, \boldsymbol{\sigma}, \mathbf{w}) & =e e^{\prime}+\left(\mathbf{c}^*-\mathbf{c}\right)^{\prime T} \Delta \mathbf{c}+\left(\boldsymbol{\sigma}^*-\boldsymbol{\sigma}\right)^{\prime T} \Delta \boldsymbol{\sigma}+\left(\mathbf{w}^*-\mathbf{w}\right)^{\prime T} \Delta \mathbf{w}+\left(\mathbf{v}^*-\mathbf{v}\right)^{\prime T} \Delta \mathbf{v} \\ & =-e^2+e\left\{\mathbf{w}^{* T}\left[\varphi_c^T\left(\mathbf{c}^*-\mathbf{c}\right)+\varphi_\sigma^T\left(\boldsymbol{\sigma}^*-\boldsymbol{\sigma}\right)\right]+\Delta \mathbf{w}^T \theta+\Omega-\mathbf{w}^T\left[\varphi_c^T\left(\mathbf{c}^*-\mathbf{c}\right)+\varphi_\sigma^T\left(\boldsymbol{\sigma}^*-\boldsymbol{\sigma}\right)\right]-\varphi_w^T \Delta \mathbf{w}\right\}-e\left(\mathbf{v}^*-\mathbf{v}\right)\end{aligned}$     (36)

$=-e^2+e\left[\mathbf{v}^*-\mathbf{v}+\Delta \mathbf{w}^T\left(-\varphi_c^T \mathbf{c}-\varphi_\sigma^T \boldsymbol{\sigma}+\theta\right)-\varphi_w^T \Delta \mathbf{w}\right]-e\left(\mathbf{v}^*-\mathbf{v}\right)=-e^2, \quad V^{\prime}(e, \mathbf{c}, \boldsymbol{\sigma}, \mathbf{w}) \leq 0$     (37)

In the above space, $V^{\prime}$ is negative semi-definite. In light of the Lyapunov theorem, we obtain Eq. (38):

$\lim _{t \rightarrow \infty} e(t)=0$.    (38)

The hidden layer is made up of m+1 cells, and the RBFNN's output error is given by Eq. (39):

$e_{m+1}(t)=\frac{1}{2}\left(\hat{y}(t)-\hat{y}_{m+1}(t)\right)^2$    (39)

Eqs. (40)-(41) give us the following:

$e_{m+1}(t)=\frac{1}{2}\left[\hat{y}(t)-\left(\sum_{j=1}^m w_j(t) \theta_j(t)+w_{m+1}(t) \theta_{m+1}(t)\right)\right]^2=0$    (40)

$e_{m+1}(t)=0$.     (41)

The network error can be expressed as Eq. (42):

$e_{m-1}(t)=\hat{y}_m(t)-\left(\sum_{l=1}^m w_l \theta_l(t)-w_j \theta_j(t)\right)$    (42)

If m neurons are in the hidden layer, the equation becomes Eq. (43), where $\hat{y}_m(t)$ represents the network output:

$\hat{y}_m(t)-\left(\sum_{l=1, l \neq i}^m w_l \theta_l(t)+w_i \theta_i(t)\right)=0$    (43)

4. Performance Analysis

The outcomes of a large number of trials were used to calculate the efficiency of the proposed hybrid approach. The proposed hybrid architecture was analysed and tested on a personal computer with the following specifications: Intel(R) Core i7-8923 processor, 64-bit Windows 10 operating system, 8 GB of RAM, the TensorFlow, Sci-Kit image, PyWin, and SwarmDL packages, and Python 3.7.0 as the programming language.

Table 1. Heart image processing and classification using the proposed model (stages shown: input heart ultrasound image, pre-processed image, segmented image, extracted features, and classified image)

Table 1 shows the processing of heart input images from various datasets, illustrating the successive stages of processing, feature extraction from the input, and classification of the input image for heart disease detection.


Figure 4. Comparative analysis of CCF heart disease dataset in terms of (a) sensitivity, (b) recall, (c) F-measure, (d) throughput, (e) training accuracy, (f) testing accuracy

Table 2 shows a comparative analysis of the CCF and UCI heart disease datasets between the newly suggested method and those already in use. The evaluation was carried out with regard to sensitivity, recall, throughput, F-measure, training accuracy, and testing accuracy.

Figures 4 and 5 show a comparative analysis between the suggested method and the existing methods with regard to sensitivity, recall, F-measure, throughput, training accuracy, and testing accuracy; the databases compared are the UCI and CCF heart disease datasets. The proposed method obtained a sensitivity of 72%, recall of 90%, F-measure of 88%, throughput of 94%, training accuracy of 92%, and testing accuracy of 95% on the CCF dataset. On the UCI dataset, the proposed technique obtained a sensitivity of 74%, recall of 92%, F-measure of 89%, throughput of 95%, training accuracy of 95%, and testing accuracy of 97%. The number of epochs required for the targets of the original datasets varies with the parameters, with classification performance increasing dramatically as the number of epochs grows. The method tends to stabilise after epoch 70 for Target 001, epoch 60 for Target 003, and epoch 50 for Targets 002 and 004. According to the findings of the preceding investigation, the suggested method achieved excellent outcomes in identifying HD using DL methods on various datasets. This means that, compared to standard methods, DL methods have a better chance of capturing disease-related changes. The model used the best 60 features and performed well across the board for the four output targets. This result came from the model-based selection strategy, which employed the proposed classifier to estimate the relevance of features relative to the output targets. Our best selection method, in comparison to the existing technique, filtered key features by training the method only once; it did not force the selection of the best features in the same way as the conventional technique, which iteratively trains the method and predicts the best characteristics one by one until it attains a particular level of accuracy.

Table 3 clearly lays out the computational complexity and real-time performance of the various models; the proposed model attained greater improvement than all other existing models.

Figures 6 and 7 clearly show the performance measures of the existing methods and the proposed method; the implemented design attained a higher MAP and better values on the other metrics.

Table 2. All performance measures compared with existing and proposed methods: (a) model metrics, (b) application metrics

(a)

Method | MAP | Sensitivity | Recall | p-Values | CI | TP
SVM | 78 | 65 | 85 | 81 | 90 | 90
PCA-KNN | 82 | 69 | 89 | 85 | 94 | 93
ResNet | 92 | 82 | 90 | 91 | 95 | 94
Vision Transformers | 97 | 76 | 91 | 86 | 95 | 93
RCNN | 96 | 91 | 88 | 96 | 90 | 97
DenseNet-RBNN | 98 | 97 | 95 | 96 | 98 | 98

(b)

Method | Throughput | Training Accuracy | Testing Accuracy
SVM | 90 | 88 | 89
PCA-KNN | 92 | 91 | 92
ResNet | 93 | 90 | 94
Vision Transformers | 92 | 93 | 97
RCNN | 96 | 94 | 97
DenseNet-RBNN | 97 | 98 | 98

Table 3. Comparison of computational complexity (CC) and real-time performance (RP)

Method | Computational Complexity (CC) | Real-Time Performance (RP)
SVM | 45.21 | 40.54
PCA-KNN | 47.82 | 56.23
ResNet | 49.48 | 49.48
Vision Transformers | 52.96 | 54.38
RCNN | 56.74 | 57.23
DenseNet-RBNN | 61.54 | 59.93


Figure 5. Comparative analysis of UCI heart disease dataset in terms of (a) sensitivity, (b) recall, (c) F-measure, (d) throughput, (e) training accuracy, (f) testing accuracy

Figure 6. Comparison of metrics with existing models and proposed DenseNet model

Figure 7. Computational complexity vs Real-time performance analysis

5. Conclusion

This research proposes novel heart disease detection techniques based on feature extraction and classification through deep learning (DL) architectures. Data collection was carried out through the IoT-Cloud module of the proposed design. The input heart disease data were pre-processed and segmented for filtering and edge normalization. The input image was processed using CHE and segmented by thresholding the image. The segmented image was then processed to extract in-depth features, which were classified using DenseNet with a convoluted radial basis neural network. Several clinical measures are used to gauge patients' risk contours, which aids early diagnosis. In the proposed model, various regularisation methods are applied to avoid overfitting. On the dataset, the suggested model attains 72% sensitivity, 90% recall, 88% F-measure, 94% throughput, 92% training accuracy, and 95% testing accuracy. It is compared to other DL methods using a variety of performance metrics, demonstrating the efficacy of the proposed approach.

References

[1] Alnajjar, M.K., Abu-Naser, S.S. (2022). Heart sounds analysis and classification for cardiovascular diseases diagnosis using deep learning. http://ijeais.org/wp-content/uploads/2022/1/abs/IJAER220102.html.

[2] Diwakar, M., Tripathi, A., Joshi, K., Memoria, M., Singh, P. (2021). Latest trends on heart disease prediction using machine learning and image fusion. Materials Today: Proceedings, 37: 3213-3218. https://doi.org/10.1016/j.matpr.2020.09.078

[3] Abdeldjouad, F.Z., Brahami, M., Matta, N. (2020). A hybrid approach for heart disease diagnosis and prediction using machine learning techniques. In The Impact of Digital Technologies on Public Health in Developed and Developing Countries: 18th International Conference, ICOST 2020, Hammamet, Tunisia, Proceedings 18. Springer, Cham, pp. 299-306. https://doi.org/10.1007/978-3-030-51517-1_26

[4] Ahsan, M.M., Siddique, Z. (2022). Machine learning-based heart disease diagnosis: A systematic literature review. Artificial Intelligence in Medicine, 128: 102289. https://doi.org/10.1016/j.artmed.2022.102289

[5] Rath, A., Mishra, D., Panda, G., Satapathy, S.C. (2021). Heart disease detection using deep learning methods from imbalanced ECG samples. Biomedical Signal Processing and Control, 68: 102820. https://doi.org/10.1016/j.bspc.2021.102820

[6] Li, H., Wang, X., Liu, C., Zeng, Q., Zheng, Y., Chu, X., Yao, L., Wang, J., Jiao, Y., Karmakar, C. (2020). A fusion framework based on multi-domain features and deep learning features of phonocardiogram for coronary artery disease detection. Computers in Biology and Medicine, 120: 103733. https://doi.org/10.1016/j.compbiomed.2020.103733

[7] Atallah, R., Al-Mousa, A. (2019). Heart disease detection using machine learning majority voting ensemble method. In 2019 2nd International Conference on New Trends in Computing Sciences (ICTCS), Amman, Jordan, pp. 1-6. https://doi.org/10.1109/ICTCS.2019.8923053

[8] Yang, Y., Wang, P., Gao, X. (2022). A novel radial basis function neural network with high generalization performance for nonlinear process modelling. Processes, 10(1): 140. https://doi.org/10.3390/pr10010140

[9] Khened, M., Kollerathu, V.A., Krishnamurthi, G. (2019). Fully convolutional multi-scale residual DenseNets for cardiac segmentation and automated cardiac diagnosis using ensemble of classifiers. Medical Image Analysis, 51: 21-45. https://doi.org/10.1016/j.media.2018.10.004

[10] Li, J.P., Haq, A.U., Din, S.U., Khan, J., Khan, A., Saboor, A. (2020). Heart disease identification method using machine learning classification in e-healthcare. IEEE Access, 8: 107562-107582. https://doi.org/10.1109/ACCESS.2020.3001149

[11] Apostolopoulos, I.D., Apostolopoulos, D.I., Spyridonidis, T.I., Papathanasiou, N.D., Panayiotakis, G.S. (2021). Multi-input deep learning approach for cardiovascular disease diagnosis using myocardial perfusion imaging and clinical data. Physica Medica, 84: 168-177. https://doi.org/10.1016/j.ejmp.2021.04.011

[12] Shah, D., Patel, S., Bharti, S.K. (2020). Heart disease prediction using machine learning techniques. SN Computer Science, 1(6): 345. https://doi.org/10.1007/s42979-020-00365-y

[13] Ketu, S., Mishra, P.K. (2022). Empirical analysis of machine learning algorithms on imbalance electrocardiogram-based arrhythmia dataset for heart disease detection. Arabian Journal for Science and Engineering, 47(2): 1447-1469. https://doi.org/10.1007/s13369-021-05972-2

[14] Olsen, C.R., Mentz, R.J., Anstrom, K.J., Page, D., Patel, P.A. (2020). The clinical applications of machine learning in diagnosing, classifying, and predicting heart failure. American Heart Journal, 229: 1-17. https://doi.org/10.1016/j.ahj.2020.07.009

[15] Abdar, M., Książek, W., Acharya, U.R., Tan, R.S., Makarenkov, V., Pławiak, P. (2019). A new machine learning technique for an accurate diagnosis of coronary artery disease. Computer Methods and Programs in Biomedicine, 179: 104992. https://doi.org/10.1016/j.cmpb.2019.104992

[16] Lih, O.S., Jahmunah, V., San, T.R., Ciaccio, E.J., Yamakawa, T., Tanabe, M., Kobayashi, M., Faust, O., Acharya, U.R. (2020). Comprehensive electrocardiographic diagnosis based on deep learning. Artificial Intelligence in Medicine, 103: 101789. https://doi.org/10.1016/j.artmed.2019.101789

[17] Nilashi, M., Ahmadi, N., Samad, S., Shahmoradi, L., Ahmadi, H., Ibrahim, O., Asadi, S., Abdullah, R., Abumalloh, R.A., Yadegaridehkordi, E. (2020). Disease diagnosis using machine learning techniques: A review and classification. Journal of Soft Computing and Decision Support Systems, 7(1): 19-30.

[18] Ali, M.M., Paul, B.K., Ahmed, K., Bui, F.M., Quinn, J.M., Moni, M.A. (2021). Heart disease prediction using supervised machine learning algorithms: Performance analysis and comparison. Computers in Biology and Medicine, 136: 104672. https://doi.org/10.1016/j.compbiomed.2021.104672

[19] Khan, A.H., Hussain, M., Malik, M.K. (2021). Cardiac disorder classification by electrocardiogram sensing using deep neural network. Complexity, 2021(1): 5512243. https://doi.org/10.1155/2021/5512243

[20] Plati, D.K., Tripoliti, E.E., Bechlioulis, A., Rammos, A., Dimou, I., Lakkas, L., Watson, C., McDonald, K., Ledwidge, M., Pharithi, R., Gallagher, J., Michalis, L.K., Goletsis, Y., Naka, K.K., Fotiadis, D.I. (2021). A machine learning approach for chronic heart failure diagnosis. Diagnostics, 11(10): 1863. https://doi.org/10.3390/diagnostics11101863

[21] Saikumar, K., Rajesh, V., Babu, B.S. (2022). Heart disease detection based on feature fusion technique with augmented classification using deep learning technology. Traitement Du Signal, 39(1): 31-42. https://doi.org/10.18280/ts.390104

[22] Saikumar, K., Rajesh, V. (2020). Coronary blockage of artery for heart diagnosis with dt artificial intelligence algorithm. International Journal of Research in Pharmaceutical Sciences, 11(1): 471-479. https://doi.org/10.26452/ijrps.v11i1.1844

[23] Saikumar, K., Rajesh, V. (2020). A novel implementation heart diagnosis system based on random forest machine learning technique. International Journal of Pharmaceutical Research (09752366), 12: 3904-3916. https://doi.org/10.31838/ijpr/2020.SP2.482

[24] Garigipati, R.K., Raghu, K., Saikumar, K. (2022). Detection and identification of employee attrition using a machine learning algorithm. In Handbook of Research on Technologies and Systems for E-Collaboration During Global Crises, pp. 120-131. https://doi.org/10.4018/978-1-7998-9640-1.ch009

[25] Mythreya, S., Murthy, A.S.D., Saikumar, K., Rajesh, V. (2022). Prediction and prevention of malicious URL using ML and LR techniques for network security: Machine learning. In Handbook of Research on Technologies and Systems for E-Collaboration During Global Crises, pp. 302-315. https://doi.org/10.4018/978-1-7998-9640-1.ch019

[26] Detrano, R., Janosi, A., Steinbrunn, W., Pfisterer, M., Schmid, J.J., Sandhu, S., Guppy, K.H., Lee, S., Froelicher, V. (1989). International application of a new probability algorithm for the diagnosis of coronary artery disease. The American Journal of Cardiology, 64(5): 304-310. https://doi.org/10.1016/0002-9149(89)90524-9

[27] Gudadhe, M., Wankhade, K., Dongre, S. (2010). Decision support system for heart disease based on support vector machine and artificial neural network. In 2010 International Conference on Computer and Communication Technology (ICCCT), Allahabad, India, pp. 741-745. https://doi.org/10.1109/ICCCT.2010.5640377

[28] Kahramanli, H., Allahverdi, N. (2008). Design of a hybrid system for the diabetes and heart diseases. Expert Systems with Applications, 35(1-2): 82-89. https://doi.org/10.1016/j.eswa.2007.06.004

[29] Das, R., Turkoglu, I., Sengur, A. (2009). Effective diagnosis of heart disease through neural networks ensembles. Expert Systems with Applications, 36(4): 7675-7680. https://doi.org/10.1016/j.eswa.2008.09.013

[30] Jabbar, M.A., Deekshatulu, B.L., Chandra, P. (2013). Classification of heart disease using artificial neural network and feature subset selection. Global Journal of Computer Science and Technology Neural & Artificial Intelligence, 13(3): 4-8.

[31] Palaniappan, S., Awang, R. (2008). Intelligent heart disease prediction system using data mining techniques. In 2008 IEEE/ACS International Conference on Computer Systems and Applications, Doha, Qatar, pp. 108-115. https://doi.org/10.1109/AICCSA.2008.4493524

[32] Olaniyi, E.O., Oyedotun, O.K., Adnan, K. (2015). Heart diseases diagnosis using neural networks arbitration. International Journal of Intelligent Systems and Applications, 7(12): 72. https://doi.org/10.5815/ijisa.2015.12.08

[33] Vasanthkumar, P., Senthilkumar, N., Rao, K.S., Metwally, A.S.M., Fattah, I.M., Shaafi, T., Murugan, V.S. (2022). Improving energy consumption prediction for residential buildings using modified wild horse optimization with deep learning model. Chemosphere, 308: 136277. https://doi.org/10.1016/j.chemosphere.2022.136277

[34] Eunice, J., Popescu, D.E., Chowdary, M.K., Hemanth, J. (2022). Deep learning-based leaf disease detection in crops using images for agricultural applications. Agronomy, 12(10): 2395. https://doi.org/10.3390/agronomy12102395

[35] Baskar, M., Anbarasu, V., Balaji, A., Kalyanasundaram, P., Thiagarajan, R., Arulananth, T.S. (2022). Retracted article: Energy efficient congestion free and adaptive mechanism for data delivery in underwater wireless sensor networks using 2H-ACK. Optical and Quantum Electronics, 54(10): 633. https://doi.org/10.1007/s11082-022-04030-x