Feature Convergence for Diabetic Retinopathy Detection Based on Activated Convolution Networks Using Fundus Images


Ragumadhavan Ramachandran* Aravind Britto Karupanan Raju Karthic Ramasamy Shunmugaraj Vimala Rayappan

Department of Electronics and Communication Engineering, PSNA College of Engineering and Technology, Dindigul 624622, India

Department of Electrical and Electronics Engineering, PSNA College of Engineering and Technology, Dindigul 624622, India

Corresponding Author Email: raguece85@gmail.com
Pages: 1671-1684 | DOI: https://doi.org/10.18280/ts.420337

Received: 7 November 2024 | Revised: 7 January 2025 | Accepted: 18 February 2025 | Available online: 30 June 2025

© 2025 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Diabetic retinopathy (DR) causes vision loss through retinal impairment that develops over prolonged periods of high blood glucose. Early diagnosis of glucose levels and detection of impairments reduce the risk of DR. Image-based diagnosis and detection, aided by computerized algorithms, are widely adopted in modern clinical assessments. A Converging Feature Classification Method (CFCM) is proposed to reduce the false rates in diagnosing DR from fundus images of the eye. This method utilizes an activated convolutional neural network (A-CNN) to reduce false rates. The activation process normalizes the extracted features by detaining the replicated ones, preventing such replications from increasing the false rate through the hidden computing layers of the CNN. The normalized CNN trains the hidden layer to identify false (replicated) features and extract unique features for DR detection. The extracted unique features are then aligned with the training images to find the exact match for DR. Training over both replicated and non-replicated features ensures that high precision in DR detection is achieved.

Keywords: 

CNN, diabetic retinopathy, false rate, feature extraction, normalization

1. Introduction

Diabetic retinopathy detection relies on the availability of a detailed dataset of retinal images, meticulously annotated by ophthalmologists. The original images are preprocessed and freed of noise to improve their clarity for the application [1]. Features representative of the images can be derived manually or automatically. Convolutional Neural Networks (CNNs), having been highly successful in image classification, are often applied [2]. During training, the models are optimized to eliminate overfitting and ensure generalization [3]. Accuracy and specificity are used to determine how well the model performs. Fine-tuning the model's parameters and architecture further refines its performance. Only after such satisfactory results does deployment in a real-world setting commence, underpinned by continuous monitoring [4]. Periodic updates and refinements sustain the model's relevance and effectiveness. Transparency and adherence to regulatory standards remain paramount for clinical integration [5].

Feature classification is a step of diabetic retinopathy detection that involves extracting characteristics from retinal images. The features may include morphological, texture, vascular, and intensity features [6]. After feature extraction, feature selection is carried out rigorously using either statistical methods or dimensionality reduction. A chosen algorithm, such as Support Vector Machines (SVM), Random Forest, or k-Nearest Neighbors (k-NN), is then applied to classify the images from the extracted features into diabetic retinopathy severity levels [7, 8]. Here, the classifier is trained on a labeled dataset in which each image is assigned to one of the diabetic retinopathy severity levels. The classifier's effectiveness is evaluated using metrics such as accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC). Further tuning of selected parameters improves the classifier's performance [9, 10].
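
To make this pipeline concrete, the sketch below runs the same extract-reduce-classify sequence on synthetic stand-ins; the random feature matrix, the five severity grades, and the PCA/SVM settings are illustrative assumptions, not the configurations evaluated in this article.

```python
# Minimal sketch of the classical feature-classification pipeline:
# extract features -> reduce dimensionality -> classify severity.
# X (n_samples x n_features) and y (severity grades 0-4) are
# hypothetical stand-ins for features extracted from fundus images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))      # e.g. morphology/texture/vascular features
y = rng.integers(0, 5, size=500)    # severity grades 0 (none) .. 4 (proliferative)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

pca = PCA(n_components=16).fit(X_tr)   # dimensionality reduction step
clf = SVC(kernel="rbf").fit(pca.transform(X_tr), y_tr)

y_pred = clf.predict(pca.transform(X_te))
print("accuracy:", accuracy_score(y_te, y_pred))
print(classification_report(y_te, y_pred, zero_division=0))
```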

Machine learning techniques, particularly CNNs, are increasingly used for diabetic retinopathy detection by analyzing retinal images [11]. These algorithms can identify key features such as hemorrhages, exudates, and microaneurysms, indicative of the condition [12]. Training datasets, meticulously labeled by experts, are fundamental for model development, ensuring accuracy and reliability. Evaluation metrics like sensitivity, specificity, and AUC-ROC curve assessment are employed to measure the model's performance [13, 14]. Fine-tuning parameters and architecture optimization further refine the model for improved efficacy. Deployed systems facilitate early diagnosis and personalized treatment strategies, ultimately enhancing patient outcomes [15]. Continuous monitoring and periodic updates are necessary to uphold model relevance and effectiveness in real-world scenarios. Compliance with regulatory standards and ethical considerations is imperative for responsible clinical deployment, ensuring patient safety and privacy [16]. The article's contributions are listed below:

  • The proposal, presentation, and description of the converging feature classification method, which improves DR detection precision through sensitivity improvements across heterogeneous features.
  • The modification of the conventional CNN with an activation function for forward and reverse training of features, preventing feature replications from affecting precision.
  • Dataset-based experimental analysis and metric-based comparative analysis to verify, validate, and conclude the proposed method’s performance.

The article is organized as follows: Section 2 discusses related works from different authors, with their pros and cons, and summarizes the consolidated problem identified. Section 3 presents and discusses the proposed classification method with the CNN description and classification; this section presents the explanations, derivations, and illustrations related to the proposed method. Section 4 presents the results and discussion under experimental and comparative analysis, followed by the conclusion and future scope in Section 5.

2. Related Works

Nasir et al. [17] developed a method for detecting diabetic retinopathy using a faster RCNN with fused features from retina images. The method employs an automatic and intelligent system to detect diabetic retinopathy (DR) early from retina fundus images. A machine learning-based faster RCNN classifier is then employed to classify DR or normal conditions and identify DR lesions. This approach surpasses existing methods, offering a promising solution for early diabetic retinopathy identification.

Wong et al. [18] introduced a method that fine-tunes feature weights and parameters together for better diabetic retinopathy detection and grading. Their approach utilizes pre-trained networks (ShuffleNet and ResNet-18) to extract features from retinal fundus images. The method uses an Error Correction Output Code (ECOC) ensemble for classification, which outperforms traditional deep learning models. The method offers promise for more effective diabetic retinopathy diagnosis and grading.

Khan et al. [19] introduced VGG-NIN, a deep-learning architecture for diabetic retinopathy detection. Their model aims to streamline computational complexity while accurately capturing complex features. By utilizing the SPP layer, the model can process DR images at different scales, while the NiN layer enhances nonlinear representation. The method shows improved accuracy and computational efficiency compared to existing approaches, promising better automatic diagnosis of DR.

Shamrat et al. [20] crafted an advanced deep neural network to scrutinize fundus images and improve diabetic retinopathy detection. Their goal is to automate the classification of DR stages, employing Convolutional Neural Network (CNN) models. The DRNet13 model, along with fifteen pre-trained models, underwent an evaluation to assess their efficiency and accuracy. The method displayed superior speed and efficiency in comparison to other CNN architectures.

Kommaraju et al. [21] suggested using convolutional neural networks with residual blocks for DR detection. The model automatically assesses the severity of diabetic retinopathy using CNNs and residual blocks, leveraging their effectiveness in image analysis tasks. The suggested approach achieves better efficiency for real-time diagnosis.

De Sousa and Camilo [22] developed a new method called HDeep for detecting diabetic retinopathy. The approach hierarchically combines four CNNs to accurately detect and classify DR. With the increasing prevalence of diabetes and its complications, such as DR, accurate detection methods are in high demand. The HDeep method shows promise in effectively detecting and classifying diabetic retinopathy, offering potential benefits for patient care.

Luo et al. [23] introduced MVDRNet, a method for detecting diabetic retinopathy. MVDRNet utilizes multiple deep neural networks and attention mechanisms to fully exploit lesion features from a wide field of view. The method assigns greater importance to crucial network channels to improve feature extraction. The method shows its effectiveness in accurately detecting diabetic retinopathy by utilizing multi-view fundus images.

Saranya et al. [24] developed a deep-learning model to detect non-proliferative diabetic retinopathy by finding exudates in retinal images. The model automatically detects bright areas, vital for early diabetic retinopathy detection, using advanced deep learning methods. The method employs algorithms to remove image backgrounds, eliminate the optic disc (OD), and segment potential lesions. The method effectively detects bright lesions, showing promise for diabetic retinopathy screening.

Oh et al. [25] built a system to detect DR early, using advanced technology on wide-view eye images. The aim is to improve screening efficiency and accuracy, especially in low-income countries where access to healthcare is limited. The model using early treatment DR study 7-standard field images performed better than those focusing on the optic disc and macula. The method helps improve early detection, especially in regions with limited healthcare resources.

Liu et al. [26] suggested a method for diagnosing DR using transfer learning. The method incorporated techniques like CLAHE and grayscale image transformation to enhance diagnostic efficiency despite limited data availability. Data augmentation methods such as random brightness, contrast transformations, and mix-up algorithms were employed for data enhancement. The method demonstrated superior performance in accurately detecting DR.

Zhang et al. [27] presented an automated system for detecting severe DR using deep learning. The system's objective is to enhance screening accessibility and efficiency through artificial intelligence-based technology. The effectiveness of the system heavily relies on a large and diverse dataset for training and validation to ensure robust performance. The method shows promising results in improving screening efficiency.

Das et al. [28] proposed a method for diabetic retinopathy detection using CNNs. Their goal is to improve upon manual DR diagnosis, which is often slow and unreliable due to resource constraints and expert dependence. The method employs deep learning CNNs to learn patterns from fundus images and classify the severity of the disease. The approach has the potential to enhance the accessibility and quality of diabetic retinopathy screening and treatment.

Krishnamoorthy et al. [29] introduced H1DBi-R Net, a hybrid 1D Bidirectional RNN for diabetic retinopathy detection and classification. The goal is to enhance accuracy in detecting and classifying diabetic retinopathy, facilitating early intervention. The proposed method combines various techniques to improve the accuracy of diabetic retinopathy detection from fundus images. The method provides a promising approach for accurately detecting diabetic retinopathy through fundus images.

Modi and Kumar [30] created a smart system for diabetic retinopathy detection and diagnosis using bat-based feature selection and deep forest technique. The aim is to develop an efficient model for early DR detection and diagnosis using fundus images. The Bat Algorithm (BA) is utilized to identify relevant and optimized features for DR detection. The model presents an efficient and effective approach for early diabetic retinopathy detection and diagnosis using fundus images.

Usman et al. [31] proposed utilizing principal component analysis (PCA) for feature extraction and classification in diabetic retinopathy detection. Their method enhanced the efficiency of early detection and classification through machine learning and deep learning algorithms. PCA aids in simplifying data dimensionality while retaining critical information for more effective analysis. The model's performance is assessed using metrics such as accuracy and Hamming loss.

2.1 Problem definition

The proposed method is designed to reduce the impact of replicated feature detection and its effect on the sensitivity factor. The methods discussed above focused on filtering [19, 30] or precise feature selection [22, 27, 31], from which the benchmark for different detections is pursued. The problem is the feature dimension and its presence within the detected region, for which training and validation require either a large number of iterations or adaptable feature training. In the second case, the training metrics must be changed randomly whenever an error is encountered. Such detections become unreliable as the training data grows, increasing the error rates. Therefore, to prevent such complications, the convergence factor is estimated across the different features irrespective of their replicas.
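
As a rough illustration of this idea, the sketch below collapses replicated feature vectors before estimating a convergence score, so replicas cannot bias the estimate; the rounding tolerance and the cosine-similarity measure are assumed choices, not the article's exact formulation.

```python
# Illustrative sketch: estimate a convergence score across features
# after collapsing replicas, so duplicates cannot inflate the estimate.
import numpy as np

def convergence_score(features: np.ndarray, tol: int = 6) -> float:
    """Mean pairwise cosine similarity over de-duplicated feature vectors."""
    # Collapse replicated rows (up to a rounding tolerance).
    unique = np.unique(np.round(features, tol), axis=0)
    if len(unique) < 2:
        return 1.0
    normed = unique / np.linalg.norm(unique, axis=1, keepdims=True)
    sims = normed @ normed.T
    # Average the off-diagonal similarities only.
    n = len(unique)
    return float((sims.sum() - n) / (n * (n - 1)))

feats = np.array([[1.0, 0.0], [1.0, 0.0], [0.9, 0.1]])  # two replicas + one variant
print(convergence_score(feats))  # replicas are counted once
```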

3. Converging Feature Classification Method

Detecting diabetic retinopathy in fundus images is a crucial task that enables the necessary prevention at an early stage. DR is a common complication among diabetic patients, and its impact depends on the severity. The severity is therefore detected at the initial stage, from which the computation is carried out appropriately, with the computation time taken into consideration. In this approach, DR detection provides the better analysis and the necessary follow-up steps. Here, different sets of images from the database are examined, and the output is extracted from them. In this manner, DR detection is carried out in the medical field to detect the problem and normalize it. DR detection is performed by extracting the data from a large database and fetching the features. The proposed method is illustrated in Figure 1.

Figure 1. Proposed method illustration

In this stage, feature extraction runs through the detection of the false rate, from which replications are avoided. The replicated data are avoided by exploiting the CNN in this work. Pre-diagnosis of glucose levels and detection of impairments reduce the risk of DR, and image-based diagnosis and detection, aided by computerized algorithms, are widely adopted in modern clinical assessments. Thus, the proposed work introduces the CNN for reliable computation of DR detection. Here, the CNN is developed to obtain better precision in this proposed work, exploring the fundus image features. The necessary features are extracted from the large database to find the severity level of DR. In this phase, the preliminary step of DR detection is to gather the data with labels that define the severity levels, as equated below.

$\beta =\left[ \left( i_{0}+\dots +i_{n} \right)*f_{u} \right]+\left( \frac{\sum_{i_{0}} f_{u}/i_{n}}{\mu /\left( g_{i}+m_{i}+l_{w} \right)} \right)*\left( g_{i}+m_{i}+l_{w} \right)+\left( \frac{i_{0}+f_{u}}{\mu } \right)*\left( \frac{\prod_{f_{u}} i_{0}}{\left( g_{i}+m_{i}+l_{w} \right)/\mu } \right)$                    (1)

The gathering of data is used to explore the features from the fundus image and is represented as $\beta$. Here, the image is $i_{0}$, the training images are described as $i_{n}$, and the features are labeled as $f_{u}$. In this equation, the severity levels are observed as high, medium, and low, symbolized as $g_{i}$, $m_{i}$, and $l_{w}$, and the detection is formulated as $\mu$. In this case, the data are gathered to identify the severity level, which supports better detection of DR. In this phase, the analysis is used for the different sets of false-rate processing in this work. The main concern of this process is to label the severity level among the input fundus images. Here, the input image is extracted from the database, from which the computation step is carried out in the appropriate time duration. The processing step involves the better detection of images from the gathering method.

The gathered image is examined to determine whether the DR level is high, medium, or low. If it is high, treatment is given to that section; if it is medium, second priority is given so that it does not develop further; and if it is low, there is only a minimal chance of the eye being affected. Thus, the evaluation is observed for the gathered data and provides better image processing from the gathered image. Here, the features are extracted from the fundus image, from which the detection is performed, represented as $\left( \frac{\prod_{f_{u}} i_{0}}{\left( g_{i}+m_{i}+l_{w} \right)/\mu } \right)$. Thus, the input fundus images are gathered in this derivation, providing a better understanding of the desired features and better computation in this work. From this approach, the extraction of features computed from the fundus image is equated below.

$f_{u}\left( X \right)=\left\{ \begin{matrix} \sum_{i_{n}}^{i_{0}}\left[ \left( \beta +\mu \right)*\left( t_{e}+a' \right) \right]+\left( \frac{1/\prod_{g_{i}} i_{n}}{\mu /\left( m_{i}+l_{w} \right)} \right)+\left[ \left( \sum_{i_{n}} \left( \beta +a' \right) \right)-t_{e} \right] \\ =\left[ \left( m_{i}+l_{w} \right)*\prod_{i_{n}} \left( h_{i}+\mu \right) \right]*\left[ \left( g_{i}+i_{n} \right)*\frac{\mu }{\beta /a'} \right]-t_{e} \\ \end{matrix} \right.$                     (2)

The feature extraction is performed to detect the necessary features that have higher severity. The extraction is $X$; here, the high, medium, and low levels are considered to find the DR, which is processed by mapping with the previous stage and produces the result within the mentioned time interval, described as $t_{e}$, and the calculation is labeled as $a'$. In this stage, a varied range of fundus images is taken into consideration, from which the data gathered from the database are examined and the result is provided with the severity levels. In this approach, the severity levels are processed with the mapping step, which defines the better detection of DR within the mentioned time interval.
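
As a toy illustration of this grouping, the snippet below buckets a scalar feature score into the high ($g_{i}$), medium ($m_{i}$), and low ($l_{w}$) levels used throughout the derivations; the thresholds are placeholder values, not the article's trained decision rule.

```python
# Toy illustration of the severity grouping used around Eqs. (1)-(2):
# each extracted feature score is bucketed into high (g_i),
# medium (m_i), or low (l_w) severity. Thresholds are placeholders.
def severity_level(score: float, hi: float = 0.7, lo: float = 0.3) -> str:
    if score >= hi:
        return "g_i (high: treat first)"
    if score >= lo:
        return "m_i (medium: second priority)"
    return "l_w (low: minimal impact)"

for s in (0.85, 0.5, 0.1):
    print(s, "->", severity_level(s))
```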

Here, the necessary features are extracted from the gathered database, from which DR detection is produced in a better manner. In such a way, the necessary features are examined within the mentioned time interval, deploying the high, medium, and low levels. Thus, the severity eases the extraction of features in this derivation, where the initial step is taken for the high severity level, followed by the medium and low levels of DR in patients. This examination is carried out for the better differentiation of level labels, from which the features are extracted at the fixed time interval, represented as $\left[ \left( g_{i}+i_{n} \right)*\frac{\mu }{\beta /a'} \right]-t_{e}$. From this fundus image feature extraction process, the analysis runs through the normalization of DR, as derived below:

$Z=\left( \alpha +X \right)*\sum_{1/i_{n}} \left[ \left( g_{i}-l_{w} \right)+\mu \right]*\left( \frac{\beta +\mu }{\sqrt{\left( 1/i_{n} \right)*X}} \right)+\prod_{a'} \left( t_{e}-X \right)-\left( l_{w}+\frac{X*\mu /\left( i_{0}+f_{u} \right)}{\sum_{\beta } \left( \alpha +i_{n} \right)} \right)$                    (3)

The analysis is observed for the normalization of features, from which the input fundus images are processed for detection. The normalization is $\alpha$ and the analysis is represented as $Z$; here, the time intervals are considered for the better detection of DR and for extracting the high severity levels. The low severity levels are also observed in this category, producing better detection based on similar feature extraction. If the features are similar, the observation is reduced, which improves the computation time. This states the recognition of DR among the patients, deliberated with the features from the fundus image. In this approach, the analysis is extracted for the normalization of the levels and reduces the computation cost based on the time duration. Here, the examination is carried out for different sets of fundus image processing, exploring the normalized value from the extracted image. The normalization process for $f_{u}\left( X \right)$ is represented in Figure 2.

Figure 2. Normalization for ${{f}_{u}}\left( X \right)$

In the above Figure 2, the $f_{u}$ from $i_{0}$ is extracted at a regular $t_{e}$ for analysis. The condition $\left( \beta +\mu \right)=1$ (true) achieves the $f_{u}\in g_{i}$ classification, and failing it results in $f_{u}\in l_{w}$ extraction. In the normalization process, the $l_{w}$ features are matched with $\alpha$ in different $t_{e}$ for the $\mu$ process. The normalization is performed to validate false rates between multiple training instances. Therefore, the activation process is first used to differentiate $g_{i}$ and $l_{w}\ \forall f_{u}\left( X \right)$. Based on this normalization, the output detection is pursued using replicated/non-replicated features. In this phase, the analysis is observed for the different sets of processing that define the normalization. If normalization is detected, the false rate is deployed, as discussed in the section below. In this normalization, the detection of DR is associated with the severity levels and provides a better analysis of this equation. The low level is discarded at this stage, deploying better processing for the normalization detection, formulated as $\left( l_{w}+\frac{X*\mu /\left( i_{0}+f_{u} \right)}{\sum_{\beta } \left( \alpha +i_{n} \right)} \right)$. The evaluation is processed for the normal-level extraction from the features. Thus, the analysis is equated, and from this, the categorization of normalization is formulated in the equation below.

$\delta =\left. \begin{matrix} \frac{1}{i_{n}}+\prod_{Z} \left( \alpha +g_{i} \right)+\left( f_{u}*\beta \right)*\left( \frac{\prod_{\mu } \left( i_{0}+g_{i} \right)}{\alpha /Z} \right)=R' \\ \prod_{i_{n}}^{i_{0}} \left( \mu +\alpha \right)*\beta +\left( \frac{\sum_{X} Z}{f_{u}} \right)*g_{i}=N_{0} \\ \end{matrix} \right\}$                    (4)

The categorization of normalization is processed in the above equation and described as $\delta$; the replication and non-replication outcomes are labeled as $R'$ and $N_{0}$. Here, the first condition is the replication process, which includes the extraction of similar features from the database and provides the detection of DR. At this stage, the examination is carried out for the analysis of the normalization process and finds the better analysis if a higher severity level is present, equated as $\left( \alpha +g_{i} \right)+\left( f_{u}*\beta \right)$. In this category, the normalization is defined for the replication process and examines the computation in a better manner for this replication. Replication is detected if similar data are observed among the extracted features. The proposed method is reliable in handling noisy inputs as well as imbalanced training sets. The $f_{u}\left( X \right)$ process differentiates the impact of noise over the input DR image. Noise suppression is performed in the pre-processing step using different filters; in this concept, a Gaussian filter is used to extract the features without distortion. Besides, the output image is indexed for the $Z$ process, wherein the severity level decides the rate of noise impact. Therefore, the filtering and $Z$ processes are relevant to ensure noise pixels have less impact on the output image processing.
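
A minimal sketch of this pre-processing step is given below, assuming a smooth synthetic stand-in for the fundus image and an arbitrary filter width; it only illustrates the Gaussian smoothing applied before feature extraction.

```python
# Minimal pre-processing sketch: Gaussian smoothing of an image
# before the f_u(X) feature-extraction step, as described above.
# sigma and the synthetic image are illustrative placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
image = np.outer(np.hanning(224), np.hanning(224))   # smooth 224x224 stand-in
noisy = image + rng.normal(0, 0.05, image.shape)     # add Gaussian noise

denoised = gaussian_filter(noisy, sigma=1.0)
print("noise std before:", np.std(noisy - image).round(4))
print("noise std after :", np.std(denoised - image).round(4))
```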

In contrast, non-replication states the dissimilar data extraction from the features, making detection at this stage easy. This process maps with the previous step and produces the result. Here, the examination is used to relate the necessary feature extraction with non-similar features, which is said to be non-replication. In this normalization method, the extraction of features is related to the input images and deliberated for reliable computation, formulated as $\beta +\left( \frac{\sum_{X} Z}{f_{u}} \right)$. Thus, the categorization of normalization is performed, and from this, the detection of the false rate in replication is addressed, as formulated in the equation below:

$\mu =\frac{i_{0}+\beta }{\sum_{X} \left( Z+\alpha \right)}*\prod_{\beta } \left( \delta *i_{n} \right)+\left( R'+\sigma \right)+t_{e}$                    (5)

The false rate is detected and represented as $\sigma$; in this step, the replicated data are avoided, reducing the time duration. In this category, the false rate is addressed for the replication process, which otherwise degrades the computation process. At this stage, the normalization is used to define the reduction of the replication process and provides reliable processing in this work. The replication is related to the false rate, which is addressed and reduced in this work. In this case, the false rate is addressed where the higher level of severity is recognized, and there the false rate is reduced. In this process, a false rate is detected, providing the efficient reduction of replication in this computation process. Following this, both the replicated and non-replicated data are processed in the CNN, where the false rate is addressed and reduced. The false rate features are identified as presented in Figure 3.

Figure 3. False rate feature identification

The false rate feature detection process requires the $\delta$ categorization of $f_{u}$ through the $f_{u}\left( X \right)$ steps. As the process is continuous, the extraction and feature classification $\left( g_{i}\ \text{and}\ l_{w} \right)$ are used for $\mu$. In the $\delta$ process, $\left( g_{i}*Z \right)$ and $\left( \frac{Z}{f_{u}} \right)$ are validated for maximum true positives over 1 to $l_{w}$ instances $\in t_{e}$. If the first condition is true, then $R'$ is identified; for $N_{0}$, the $\sigma >l_{w}$ condition is verified as well. If this is true, then a false rate is observed, and both $R'$ and $N_{0}$ are detected. The failing condition generates the $\delta =m_{i}$ case, which is used for the normalization check (Figure 3).
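
The replication/non-replication split of Eq. (4) and the false-rate check of Figure 3 can be illustrated roughly as follows; the exact-match byte hash and the resulting false-rate estimate are simplifying assumptions, not the article's derivation.

```python
# Illustrative split of features into replicated (R') and
# non-replicated (N_0) pools, echoing Eq. (4) / Figure 3.
# The byte-hash test and the derived rate are placeholder choices.
import numpy as np

def categorize(features: np.ndarray):
    seen, replicated, unique = {}, [], []
    for idx, f in enumerate(features):
        key = f.tobytes()              # exact-match replica test
        if key in seen:
            replicated.append(idx)     # R': replicated feature
        else:
            seen[key] = idx
            unique.append(idx)         # N_0: non-replicated feature
    return replicated, unique

feats = np.array([[0.2, 0.8], [0.2, 0.8], [0.5, 0.5], [0.9, 0.1]])
r_prime, n_zero = categorize(feats)
false_rate = len(r_prime) / len(feats)  # sigma: share of replicas
print("R':", r_prime, "N_0:", n_zero, "false rate:", false_rate)
```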

3.1 CNN for classification

A modified convolutional neural network is used for the fundus image, deploying the extraction process for the input images. The modification concerns the conditional split of the hidden-layer processes, which differs from the conventional CNN. In this conditional analysis, false-rate-based outputs are extracted using the $\alpha$ and $\alpha_{t}$ parameters. Besides, the activation is provided before the output extraction to increase the chance of feature classification. Here, the varying images are extracted by deploying the detection of normalization. In this method, the replicated and non-replicated data are examined and forwarded to the neural layer, and from this, the training is given between the layers. The output layer is responsible for reliable precision; in this methodology, the assessment layer is derived as equated in the equation below:

$\left. \begin{matrix} i_{0}\left( X \right)=\left( \frac{Z}{\prod_{\alpha } \left( \beta *\mu \right)} \right)+\left[ \left( \delta +R' \right)+f_{u}\left( 0 \right) \right] \\ i_{1}\left( X \right)=\left( \frac{Z}{\prod_{\alpha } \left( \beta *\mu \right)} \right)+\left[ \left( \delta +R' \right)+f_{u}\left( 1 \right) \right] \\ \vdots \\ i_{n}\left( X \right)=\left( \frac{Z}{\prod_{\alpha } \left( \beta *\mu \right)} \right)+\left[ \left( \delta +R' \right)+f_{u}\left( n-1 \right) \right] \\ \end{matrix} \right\}$                    (6)

The assessment layer is equated for the extraction of the necessary features, for which the replication is given as the input in this case. In this approach, the categorization is performed for the features, deployed from the initial stage to the $n-1$ layers. The different ranges of images are extracted from the database, and from these, the features in the replicated format are derived in this assessment layer. This assessment layer is responsible for the reliable computation that states the better processing in the CNN. From this, the training inputs are formulated in the equation below:

$g_{r}\left( i_{0} \right)=\prod_{\beta } \left( g_{i}*f_{u} \right)+\left( \frac{\sigma -R'}{\delta } \right)-t_{e}$                    (7)

The training input is fetched in this methodology and processed with the labeled images, from which the severity levels are found. The training is described as $g_{r}$; in this case, the replication output of the normalization categorization is given as the input to the CNN process. Here, the training is examined so that the false rate is addressed in this case. In this approach, the normalization is carried out from the replication and reduces the upcoming computation. Here, the false rate is reduced from the fundus image and the replication method. The time is associated with the replication detection, avoiding further layer processing, represented as $\left( \frac{\sigma -R'}{\delta } \right)-t_{e}$. From this methodology, the addressed false rates are trained as in the equation below:

$\sigma \left( g_{r} \right)=\prod_{i_{n}} \left( \beta +R' \right)-v_{p}+g_{i}-t_{e}$                    (8)

The false rate is trained so that it is avoided in the upcoming layers; this overall computation is processed within the mentioned time. In this case, the previous state of the process is mapped with the current case to produce the result. Thus, the examination is provided by a mapping process; the previous state is represented as $v_{p}$, which is carried out to address and reduce the false rate. Thus, the training is given to the false rate on the image by mapping with the previous stage. Irrespective of the normalization, the false rates are found to be high in some places due to the difference between $\sigma \left( g_{r} \right)$ and the actual $g_{r}$ obtained. Using the $A$ and $\alpha$ estimation, the consecutive normalization process is kept free of false positives. Therefore, the places where this difference is high experience a slightly higher false rate. From this approach, the CNN is fed with the non-replicated features to perform allied matching with the external training inputs in its hidden layer for the $m$ non-replicated features, and the activation process is the normalization of extracted features by detaining the replicated ones. This activation process is required so that such replications are prevented from increasing the false rate through the hidden computing layers of the CNN, as formulated in the derivations below:

$a_{t}=\left( v_{p}-u_{t} \right)+g_{r}*\left( \frac{\sigma -v_{p}}{\beta +\mu } \right)$                    (9)

$A=\left( R'+\mu \right)*\sum_{\sigma } \left( v_{p}+f_{u}\left( n-1 \right) \right)$                    (10)

Eq. (9) is used for the matching process and is described as $a_{t}$, where the current state of the image is $u_{t}$. The activation process is symbolized as $A$. Here, the mapping is processed for the non-replicated images, providing efficient processing for them. Thus, the external training inputs are used for the non-replicated features in these neural layers, whereas Eq. (10) is used for the prevention of replications that increase the false rate through the hidden computing layers, by introducing the activation process in the CNN. The CNN for the previous-state (false rate) based activated training model is illustrated in Figure 4.

Figure 4. CNN for ${{v}_{p}}$ based activated training

The $v_{p}$-based assessment is performed to identify whether any false rate remains in $f_{u}$ (%) even after normalization. This learning aims at extracting the least feasible $m$ from $v_{p}\in R'$. If the activation generates the $\alpha \ge \alpha_{t}$ and $\alpha <\alpha_{t}$ classifications under $v_{p}\in R'$ or $m$, then the true positives are extracted. In the activation process, $\left( R'+\mu \right)$ and $f_{u}\left( n-1 \right)$ are the output-extracting conditions. Based on the available activations, $\alpha_{t}\left( \frac{\sigma -v_{p}}{\beta +\mu } \right)$ is the extracting condition for the actual feature classification (Figure 4). Thus, it is processed across the different layers of the CNN for unique feature detection. The features are allied with the training input as derived below:

$l_{A}=\left[ \left( \beta +i_{0} \right)*\left( \alpha +A \right) \right]+a_{t}-t_{e}$                    (11)

whereas,

$a'=v_{p}\left( i_{0} \right)+a_{t}-u_{t}\left( n-1 \right)-t_{e}$                    (12)

The allied process is examined in Eq. (11) and described as $l_{A}$; in this case, the normalization replication and non-replication outputs are given as the input for the CNN. The training input is fed to the computation process, where the false rate is addressed within the required time interval. This includes the activation process to reduce the replication in this proposed work, and from this, the time is observed by calculating against the previous state of the process. This runs through the different layers of the CNN and estimates the better computation in this work. The CNN process for precision-oriented training is illustrated in Figure 5.

Figure 5. CNN process for precision-oriented training

Unlike the process in Figure 4 for false-rate-based training, the above Figure 5 presents the learning process based on precision. Here, the activation function validates the $u_{t}=0$ or $u_{t}=1$ condition, such that for $n$ trials the allied process of the conjoint $R'$ and $N_{0}$ is used. This validation is performed to verify whether $\alpha '=\text{true}/\text{false}$; the passing criterion is used to train the $n$ layers of the CNN. The case of $\alpha '=\text{false}$ generates $R'$ for the different $\alpha \ge \alpha_{t}$ (or) $\alpha <\alpha_{t}$ conditions. Thus, both allied processes are observed within the mentioned time interval; from this approach, the precision is trained in the CNN to obtain the better output as DR detection, formulated below.

$r_{c}=\left[ \left( v_{p}-u_{t} \right)+\left( l_{A}*f_{u} \right) \right]+\mu -t_{e}\left( n-1 \right)$                    (13)

The precision improved in this equation is represented as $r_{c}$; in this case, the detection of DR is related to better computation across the different layers. It is associated with the feature extraction from the fundus image, where the time is calculated for reliable improvement and replication is reduced in the normalization. In this methodology, the CNN is proposed for efficient feature extraction, where the training inputs are deliberated across the different layers to improve the precision in this work. Thus, the training is improved through replicated and non-replicated features to ensure that high precision in DR detection is achieved. The number of training iterations deployed reduces the false rate and achieves better precision. This analysis for testing, training, and validation is illustrated in Figure 6.

Figure 6. False rate and ${{r}_{c}}$ analysis

The above Figure 6 presents the analyses of the false rates and $r_{c}$ for different training iterations. The training, testing, and validation are the considerations throughout the iterations. In the activation-based conditions, the maximum possible conditions for $\alpha$ and $\alpha '$ are analyzed to validate both $N_{0}$ and $R'$ equivalently. Therefore, the $v_{p}$ and $u_{t}$ differentiations are used throughout the training to increase $r_{c}$. The AUC and confusion matrix analyses based on false rates and true positive rates are discussed in this section. In Figure 7, the AUC for the different processes $l_{A}$ and $\alpha '$ is presented.

Figure 7. AUC for $l_{A}$ and $\alpha '$

In Figure 7, the AUC analysis for $l_{A}$ and $\alpha '$ is presented. This proposed method performs $v_{p}$ and $u_{t}$ differentiations between successive $N_{0}$. In this differentiation, precision-focused $\alpha '$ verification is performed. If the activation function generates $m$, then the $\alpha =\alpha_{t}$ (or) $\alpha >\alpha_{t}$ or $\alpha <\alpha_{t}$ assessment is performed using the $R'$ inputs. This suppresses the false rates between the $f_{u}\left( n-1 \right)$ for the $A$ estimation. Hence, the true positives are improved. Following this process, the precision-focused confusion matrix is presented in Figure 8 below.

Figure 8. Confusion matrix for $l_{A}$ and $\alpha '$

The confusion matrix is validated for $N_{0}$ and $R'$ across the various $l_{A}$ and $\alpha '$. The differentiation-focused improvements are validated across the $\alpha$ conditions after the $A$ function implication. Based on the available $r_{c}$ and $\left( r-1 \right)$ recurrences, $\alpha '=\text{True}$ is achieved to reduce the differentiations. Considerably, the recurrent iterations are useful for the $\sigma \left( g_{r} \right)$ extraction to increase the precision (Figure 8).
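
As a rough sketch of the activated training idea, the snippet below removes exact replicas from the training set (a stand-in for the activation gate that detains replicated features) before fitting a small CNN on 224$\times$224 inputs; the architecture and the gating rule are simplifications, not the exact A-CNN described above.

```python
# Rough sketch of the activated-training idea: exact replicas are
# removed from the training set (a stand-in for the activation gate)
# before a small CNN is fitted on 224x224 fundus-like inputs.
import numpy as np
import tensorflow as tf

def drop_replicas(x: np.ndarray, y: np.ndarray):
    """Keep the first occurrence of each image; drop exact replicas."""
    _, keep = np.unique(x.reshape(len(x), -1), axis=0, return_index=True)
    keep = np.sort(keep)
    return x[keep], y[keep]

# Hypothetical stand-ins for 224x224 RGB fundus images and 5 DR grades.
rng = np.random.default_rng(0)
x = rng.random((64, 224, 224, 3)).astype("float32")
y = rng.integers(0, 5, size=64)
x, y = drop_replicas(x, y)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=16, verbose=0)
```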

4. Results and Discussion

The experimental analysis is performed using MATLAB code; the software is deployed on a system with a 2.0 GHz processor and 4 GB of random access memory. This setup is used to execute the code that processes the DR images (data source: [32]). This source contains images resized to 224$\times$224 pixels under different categories. The activated CNN is trained over 1200 iterations with 3-8 epochs per iteration. A total of 3662 DR images are used for training and 180 images for testing. From this, the results of 4 sample inputs are presented in Tables 1 and 2 as per the processes explained in the proposed method.
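
A minimal data-preparation sketch matching this setup is shown below; the directory layout, file extension, and class-folder naming are assumptions about the Kaggle source [32], whose images are already provided at 224$\times$224.

```python
# Minimal data-preparation sketch for the 224x224 DR images [32].
# Paths, the *.png extension, and the per-class folder layout are
# placeholder assumptions about the Kaggle dataset's structure.
from pathlib import Path
import numpy as np
from PIL import Image

def load_images(root: str, size=(224, 224)):
    images, labels = [], []
    # One sub-folder per severity class, label assigned by sort order.
    for label, class_dir in enumerate(sorted(Path(root).iterdir())):
        for img_path in class_dir.glob("*.png"):
            img = Image.open(img_path).convert("RGB").resize(size)
            images.append(np.asarray(img, dtype="float32") / 255.0)
            labels.append(label)
    return np.stack(images), np.array(labels)

# e.g. x_train, y_train = load_images("colored_images/train")  # 3662 images
#      x_test,  y_test  = load_images("colored_images/test")   # 180 images
```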

As far as scalability is concerned, the proposed method is designed to support various input types irrespective of the infection type. The normalization and false rate detection are monotonous for any image size and feature extraction. Using the monotonous assessments of $i_{n}\left( X \right)$ and $\sigma \left( g_{r} \right)$, images with varying sizes and features are addressed. Changes in feature distribution or feature presence are identified with multiple $\alpha_{t}$ and $A$ assessments. Using the CNN training, only the number of iterations varies based on the image size and the number of training inputs. This is followed unanimously for large and small datasets to retain similar precision.

Table 1. Feature detection and normalization (sample inputs with their $g_{i}$, $l_{w}$, and $\alpha$ outputs; the image panels are omitted here)

4.2 Comparative analysis

In the results and discussion, metrics such as accuracy, precision, sensitivity, false rate, and mean error are comparatively analyzed against the existing MVDRNet [23], DRFEC [28], and HOG+RCNN [17] methods. These methods are discussed briefly in the related works section. Besides, the replicated features (4 to 22) and the training iterations (100 to 1200) are varied to perform the comparative analysis. The proposed method differs from the existing methods by converging pre-normalization and activation together. The activation thus operates after the failing outputs of $f_{u}\in l_{w}$. In the normalization process, the chance of verifying $\delta\ \forall R'$ and $N_{0}$ is monotonous. Therefore, the activation process instigates the input neurons under $R'$ detection over $N_{0}$. Besides, $\left( \alpha +g_{i} \right)$ and $\left( f_{u}*\beta \right)$ are independent processes across the different false rate identifications. Thus, the existing neurons that are less categorized are reformed to contain $i_{n}\left( X \right)$ until $g_{r}\left( i_{0} \right)$ requires further training instances. The existing methods revive the neurons with/without activation functions, irrespective of the need. Differently from the existing methods, only the instances achieving false positives revive the neurons for training, reducing the complexity.
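
For reference, the sketch below derives the compared metrics from a binary confusion matrix and computes the AUC from per-image scores; the counts and scores are arbitrary illustrative values, not results from the article's experiments.

```python
# Metric definitions used in the comparison, derived from a binary
# confusion matrix. The counts below are arbitrary placeholder values.
import numpy as np
from sklearn.metrics import roc_auc_score

tp, fp, tn, fn = 90, 6, 80, 4              # placeholder counts

accuracy    = (tp + tn) / (tp + fp + tn + fn)
precision   = tp / (tp + fp)
sensitivity = tp / (tp + fn)               # recall / true positive rate
specificity = tn / (tn + fp)
false_rate  = fp / (fp + tn)               # false positive rate
f1_score    = 2 * precision * sensitivity / (precision + sensitivity)
print(accuracy, precision, sensitivity, specificity, false_rate, f1_score)

# AUC needs per-image scores rather than hard labels:
y_true  = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])  # placeholder model outputs
print("AUC:", roc_auc_score(y_true, y_score))
```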

4.2.1 Accuracy

In this method, the diabetic retinopathy detection process is performed using the A-CNN to provide the necessary recommendations for prevention at the early stage (refer to Figure 9). Based on the severity analysis, the appropriate treatment is provided to reduce the risk of DR. Here, the CNN is developed to obtain better DR prediction precision in this proposed work, from which it extracts the necessary fundus image features. The number of training iterations deployed reduces the false rate and thereby achieves high DR detection accuracy. The normalization of replicated and non-replicated image features is independently identified and segregated to maximize the feature classification and thereby reduce the false rate within the precise time interval. The replicated image features are obtained to provide training for those images. The replicated data are avoided to achieve high DR detection accuracy with a lower false rate and analysis time. The initial fundus image is compared with the training image for accurate DR detection. Similarly, the CNN is used in this model for identifying and segregating the replicated and non-replicated data to achieve high accuracy.

Figure 9. Accuracy

4.2.2 Precision

In this proposed method, the DR detection analysis is performed by extracting the features from the input images and fetching the features to accurately detect the problem, thereby improving the detection precision, as represented in Figure 10.

The replicated features and the false rate are suppressed using the activation process for differentiating $g_{i}$ and $l_{w}\ \forall f_{u}\left( X \right)$ between the multiple training instances, thereby identifying the false rate occurrence. The feature classification is pursued to identify the false rate occurrence due to retinal abnormalities and replicated features observed in the input images. Due to the min/max sensitivity variations in the input image, the risk of DR is easily identified to maximize the decision precision with fewer replicated features. Based on this normalization process, the final output is pursued based on the replicated/non-replicated features. The occurrence of false rates leads to chances of vision loss, thereby affecting the retina and reducing the detection accuracy. The CNN is used to increase the robustness range in DR detection with maximum pooling between the training images and the preprocessed images. In this scenario, the images with identified replicated features are recurrently trained until maximum true positives are achieved from 1 to $l_{w}$ instances $\in t_{e}$. Hence, high DR detection precision is achieved.

Figure 10. Precision

4.2.3 Sensitivity

High sensitivity is obtained from the input fundus images based on the classification of the extracted features to reduce the lower severity levels in this category and give better output (refer to Figure 11). In this proposed method, the lower severity levels are detected based on similar feature extraction; any region that leads to retinal abnormalities is identified to state the detection of DR among the patients for providing a precise diagnosis.

Figure 11. Sensitivity

Based on the condition $\left( l_{w}+\frac{X*\mu /\left( i_{0}+f_{u} \right)}{\sum_{\beta } \left( \alpha +i_{n} \right)} \right)$ used for identifying the lower severity level, the training images are addressed with less classification time. Here, the chances of periodic shuffling are made based on the features extracted from the initial retina image against the corresponding pre-processed image for improving the DR detection precision. In the normalization process, the $l_{w}$ features are matched with $\alpha$ at random time intervals for the $\mu$ process. The normalization output is used to identify the false rate between multiple training instances. If normalization is detected in any region, the false rate is deployed there. In this proposed method, the activation process is constantly defined for high DR detection accuracy, from which high sensitivity is satisfied.

4.2.4 False rate

In this proposed method, sequential DR detection using the input fundus images is performed based on the extracted features, and the classification identifies the failing features in $f_{u}\in l_{w}$ to improve the detection accuracy (Figure 12). Based on the normalization, the lower severity level features are matched with $\alpha$ in different $t_{e}$ using the CNN for processing the replicated images, to reduce the higher severity level. Based on the feature classification, the CNN is applied to identify the non-replication states; from this identification, the dissimilar features extracted from the input images are used to provide precise DR detection. This process maps with the previous output for non-replicated feature identification through the proposed method to satisfy high sensitivity.

Figure 12. False rate

Using this proposed method, the consistent replication and normalization process is evaluated for DR detection to provide reliable processing without increasing the false rate as the optimal output. The CNN is implemented to reduce the higher level of severity, where the false rate is reduced. In this article, high detection precision is achieved under feature extraction and classification. Using the CNN, a lower false rate is detected.

4.2.5 Mean error

Here, diabetic retinopathy detection is pursued using feature extraction, classification, and the normalization output to satisfy high precision with less mean error (refer to Figure 13). The replicated data addressed among the extracted features are mitigated using the CNN process. In this process, if a false rate is identified in the input images, the replication is efficiently reduced in the computation process for easily recognizing DR. Both the replicated and non-replicated features are processed in the CNN for addressing the false rate. The CNN helps to reduce the false rate occurrence by increasing the true positives. From this instance, the normalization process is pursued to identify the false rate, which is trained and then avoided in the upcoming layers. Thus, the training is given to the images with identified false rates by mapping with the previous step to prevent replication. In this proposed work, the CNN satisfies the high accuracy and precision of DR detection and thereby reduces the mean error.

Figure 13. Mean error

4.2.6 F1-score

The F1-score measures the model's performance by balancing precision and recall. The F1-score is high when fewer features are replicated; it fluctuates as the replicated features increase, indicating variability due to overfitting or reduced feature distinctiveness. The score reduces when the redundancy in replicated features dilutes the meaningful variance required for optimal detection.
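
For reference, the F1-score is the harmonic mean of precision and sensitivity (recall):

$F1=2*\frac{Precision*Sensitivity}{Precision+Sensitivity}$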

Figure 14. F1-score

A lower F1-score occurs when the training iterations are minimal, due to insufficient learning. The score increases with more iterations as the model learns the patterns in the fundus images. The proposed method achieves consistent and optimal performance with a high F1-score. Over-replication of features and insufficient training lead to weaker detection. A high F1-score indicates the proposed method's capability to identify DR in fundus images by maintaining a balance between false positives and false negatives (Figure 14).

4.2.7 Specificity

The replicated features determine the model's input variability, which is measured over the specificity that varies from high to low. High specificity ensures the model's ability to correctly identify non-diseased cases. Specificity is high with fewer replicated features, and a wavy trend appears due to over-representation as the features increase. This may confuse the model into identifying false positives as infected parts. An increase in replication lowers the specificity, which disturbs the system's ability to distinguish between relevant and redundant features. Low specificity at minimal iterations slows the model's ability to accurately distinguish between diseased and healthy cases. An increase in iterations progresses better learning and feature extraction. Specificity in diabetic retinopathy detection helps to avoid misclassifying healthy fundus images as diseased. High and stable specificity with continuous training helps the model distinguish DR from non-DR images (Figure 15). Tables 3 and 4 summarize the above comparative analysis with the discussion.

Figure 15. Specificity

Table 3. Summary of comparative analysis for replicated features

Metrics        MVDRNet   DRFEC    HOG+RCNN   CFCM
Accuracy (%)   86.46     87.54    89.97      91.341
Precision      0.898     0.911    0.927      0.9355
Sensitivity    0.881     0.891    0.915      0.9214
False Rate     0.159     0.138    0.107      0.0687
Mean Error     0.091     0.082    0.069      0.0514
F1-Score       0.882     0.893    0.910      0.9338
Specificity    0.883     0.890    0.910      0.9216

Table 4. Summary of comparative analysis for training iterations

Metrics        MVDRNet   DRFEC    HOG+RCNN   CFCM
Accuracy (%)   88.17     90.81    93.03      95.321
Precision      0.90      0.914    0.938      0.9597
Sensitivity    0.902     0.913    0.93       0.9504
False Rate     0.115     0.103    0.078      0.0431
Mean Error     0.067     0.052    0.042      0.0251
F1-Score       0.899     0.913    0.937      0.9591
Specificity    0.901     0.920    0.939      0.9547

For the varying replicated features (Table 3), the proposed CFCM improves accuracy, precision, and sensitivity by 10.05%, 11.75%, and 12.87%, respectively. It improves the F1-score and specificity by 11.64% and 9.08%, respectively, and reduces the false rate and mean error by 13.19% and 8.78%.

For the varying training iterations (Table 4), the proposed CFCM improves accuracy, precision, and sensitivity by 9.3%, 12.71%, and 10.62%, respectively. It improves the F1-score and specificity by 12.83% and 10.41%, respectively, and reduces the false rate and mean error by 11.11% and 8.57%.

5. Conclusion

To address the problem of replicated features in DR detection, this article proposed and described the CFCM. The proposed method is designed to reduce the false rates using an activated CNN.

The activation process is used to reduce the mean error by training the network with precision and false rates in a to-and-fro manner. The hidden computing layers are designed to accommodate replication, condition-based feature extractions, and DR region detection. The unique features are allied with the training network throughout the iterations until the highest possible accuracy is achieved. Precisely, the feature sensitivity is used to define the feature classification as replicated or non-replicated. The activation function normalizes the replications to extract any possible feature matches with the input. This enhances the mean error reduction through maximum precision conditions. Therefore, the proposed CFCM improves accuracy, precision, and sensitivity by 10.05%, 11.75%, and 12.87%, respectively, for its maximum replicated features.

Through the experimental analysis, the problem of feature segregation based on its unveiling region was observed. This does not fit the initial matching features, for which the F1-score is lower for certain iterations. Therefore, to address this problem, a sigmoid-based activation is planned to be used as a segregating pass in future work. The sigmoid activation function revives only a limited set of neurons for verifying the matching features. As the function is monotonic, no replication-induced false positive or mismatched detection would be seen.
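
A speculative sketch of this planned sigmoid segregating pass is given below; the per-neuron match scores and the revival threshold are assumed placeholders for illustration only.

```python
# Speculative sketch of the planned sigmoid segregating pass: only
# neurons whose gate value clears a threshold are revived for the
# feature-matching check. Scores and threshold are placeholders.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

match_scores = np.array([-2.0, -0.3, 0.4, 1.8])  # per-neuron evidence
gate = sigmoid(match_scores)
revived = gate > 0.6                              # revive a limited subset
print("gate:", gate.round(3), "revived neurons:", np.flatnonzero(revived))
```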

References

[1] Bilal, A., Sun, G., Li, Y., Mazhar, S., Khan, A.Q. (2021). Diabetic retinopathy detection and classification using mixed models for a disease grading database. IEEE Access, 9: 23544-23553. https://doi.org/10.1109/ACCESS.2021.3056186

[2] Niu, Y., Gu, L., Zhao, Y., Lu, F. (2021). Explainable diabetic retinopathy detection and retinal image generation. IEEE Journal of Biomedical and Health Informatics, 26(1): 44-55. https://doi.org/10.1109/JBHI.2021.3110593

[3] Prabhakar, T., Rao, T.M., Maram, B., Chigurukota, D. (2024). Exponential gannet firefly optimization algorithm enabled deep learning for diabetic retinopathy detection. Biomedical Signal Processing and Control, 87: 105376. https://doi.org/10.1016/j.bspc.2023.105376

[4] Kadry, S., Crespo, R.G., Herrera-Viedma, E., Krishnamoorthy, S., Rajinikanth, V. (2023). Deep and handcrafted feature supported diabetic retinopathy detection: A study. Procedia Computer Science, 218: 2675-2683. https://doi.org/10.1016/j.procs.2023.01.240

[5] Ohri, K., Kumar, M. (2023). Domain and label efficient approach for diabetic retinopathy severity detection. Multimedia Tools and Applications, 83: 35795-35824. https://doi.org/10.1007/s11042-023-16908-3

[6] Nahiduzzaman, M., Islam, M.R., Islam, S.R., Goni, M.O.F., Anower, M.S., Kwak, K.S. (2021). Hybrid CNN-SVD based prominent feature extraction and selection for grading diabetic retinopathy using extreme learning machine algorithm. IEEE Access, 9: 152261-152274. https://doi.org/10.1109/ACCESS.2021.3125791

[7] Sangeetha, K., Valarmathi, K., Kalaichelvi, T., Subburaj, S. (2023). A broad study of machine learning and deep learning techniques for diabetic retinopathy based on feature extraction, detection and classification. Measurement: Sensors, 30: 100951. https://doi.org/10.1016/j.measen.2023.100951

[8] Navaneethan, R., Devarajan, H. (2024). Enhancing diabetic retinopathy detection through preprocessing and feature extraction with MGA-CSG algorithm. Expert Systems with Applications, 249: 123418. https://doi.org/10.1016/j.eswa.2024.123418

[9] Gao, W., Lin, P., Li, B., Shi, Y., Chen, S., Ruan, Y., Zakharov, V.P., Bratchenko, I. (2023). Quantitative assessment of textural features in the early detection of diabetic retinopathy with optical coherence tomography angiography. Photodiagnosis and Photodynamic Therapy, 41: 103214. https://doi.org/10.1016/j.pdpdt.2022.103214

[10] Makmur, N.M., Kwan, F., Rana, A.D., Kurniadi, F.I. (2023). Comparing local binary pattern and gray level co-occurrence matrix for feature extraction in diabetic retinopathy classification. Procedia Computer Science, 227: 355-363. https://doi.org/10.1016/j.procs.2023.10.534

[11] Hemanth, S.V., Alagarsamy, S. (2023). Hybrid adaptive deep learning classifier for early detection of diabetic retinopathy using optimal feature extraction and classification. Journal of Diabetes & Metabolic Disorders, 22(1): 881-895. https://doi.org/10.1007/s40200-023-01220-6

[12] Tavakoli, M., Mehdizadeh, A., Aghayan, A., Shahri, R.P., Ellis, T., Dehmeshki, J. (2021). Automated microaneurysms detection in retinal images using radon transform and supervised learning: Application to mass screening of diabetic retinopathy. IEEE Access, 9: 67302-67314. https://doi.org/10.1109/ACCESS.2021.3074458

[13] Gupta, S., Thakur, S., Gupta, A. (2023). Comparative study of different machine learning models for automatic diabetic retinopathy detection using fundus image. Multimedia Tools and Applications, 83: 34291-34322. https://doi.org/10.1007/s11042-023-16813-9

[14] Saranya, P., Pranati, R., Patro, S.S. (2023). Detection and classification of red lesions from retinal images for diabetic retinopathy detection using deep learning models. Multimedia Tools and Applications, 82(25): 39327-39347. https://doi.org/10.1007/s11042-023-15045-1

[15] Mukherjee, N., Sengupta, S. (2023). Application of deep learning approaches for classification of diabetic retinopathy stages from fundus retinal images: A survey. Multimedia Tools and Applications, 83: 43115-43175. https://doi.org/10.1007/s11042-023-17254-0

[16] Sivapriya, G., Devi, R.M., Keerthika, P., Praveen, V. (2024). Automated diagnostic classification of diabetic retinopathy with microvascular structure of fundus images using deep learning method. Biomedical Signal Processing and Control, 88: 105616. https://doi.org/10.1016/j.bspc.2023.105616

[17] Nasir, M.K., Ahsan, M., Based, M.A., Haider, J., Palani, S. (2023). A faster RCNN based diabetic retinopathy detection method using fused features from retina images. IEEE Access, 11: 124331-124349. https://doi.org/10.1109/ACCESS.2023.3330104

[18] Wong, W.K., Juwono, F.H., Capriono, C. (2023). Diabetic retinopathy detection and grading: A transfer learning approach using simultaneous parameter optimization and feature-weighted ECOC ensemble. IEEE Access, 11: 83004-83016. https://doi.org/10.1109/ACCESS.2023.3301618

[19] Khan, Z., Khan, F.G., Khan, A., Rehman, Z.U., Shah, S., Qummar, S., Ali, F., Pack, S. (2021). Diabetic retinopathy detection using VGG-NIN a deep learning architecture. IEEE Access, 9: 61408-61416. https://doi.org/10.1109/ACCESS.2021.3074422

[20] Shamrat, F.J.M., Shakil, R., Akter, B., Ahmed, M.Z., Ahmed, K., Bui, F.M., Moni, M.A. (2024). An advanced deep neural network for fundus image analysis and enhancing diabetic retinopathy detection. Healthcare Analytics, 5: 100303. https://doi.org/10.1016/j.health.2024.100303

[21] Kommaraju, R., Anbarasi, M.S. (2024). Diabetic retinopathy detection using convolutional neural network with residual blocks. Biomedical Signal Processing and Control, 87: 105494. https://doi.org/10.1016/j.bspc.2023.105494

[22] De Sousa, T.F., Camilo, C.G. (2023). HDeep: Hierarchical deep learning combination for detection of diabetic retinopathy. Procedia Computer Science, 222: 425-434. https://doi.org/10.1016/j.procs.2023.08.181

[23] Luo, X., Pu, Z., Xu, Y., Wong, W.K., Su, J., Dou, X., Ye, B., Hu, J., Mou, L. (2021). MVDRNet: Multi-view diabetic retinopathy detection by combining DCNNs and attention mechanisms. Pattern Recognition, 120: 108104. https://doi.org/10.1016/j.patcog.2021.108104

[24] Saranya, P., Umamaheswari, K.M. (2023). Detection of exudates from retinal images for non-proliferative diabetic retinopathy detection using deep learning model. Multimedia Tools and Applications, 83: 52253-52273. https://doi.org/10.1007/s11042-023-17462-8

[25] Oh, K., Kang, H.M., Leem, D., Lee, H., Seo, K.Y., Yoon, S. (2021). Early detection of diabetic retinopathy based on deep learning and ultra-wide-field fundus images. Scientific Reports, 11(1): 1897. https://doi.org/10.1038/s41598-021-81539-3

[26] Liu, K., Si, T., Huang, C., Wang, Y., Feng, H., Si, J. (2024). Diagnosis and detection of diabetic retinopathy based on transfer learning. Multimedia Tools and Applications, 83: 82945-82961. https://doi.org/10.1007/s11042-024-18792-x

[27] Zhang, X., Li, F., Li, D., Wei, Q., Han, X., Zhang, B., Chen, H., Zhang, Y., Mo, B., Hu, B., Ding, D., Li, X., Yu, W., Chen, Y. (2022). Automated detection of severe diabetic retinopathy using deep learning method. Graefe's Archive for Clinical and Experimental Ophthalmology, 260: 849-856. https://doi.org/10.1007/s00417-021-05402-x

[28] Das, D., Biswas, S.K., Bandyopadhyay, S. (2023). Detection of diabetic retinopathy using convolutional neural networks for feature extraction and classification (DRFEC). Multimedia Tools and Applications, 82(19): 29943-30001. https://doi.org/10.1007/s11042-022-14165-4

[29] Krishnamoorthy, S., Weifeng, Y., Luo, J., Kardy, S. (2023). H1DBi-R Net: Hybrid 1D Bidirectional RNN for efficient diabetic retinopathy detection and classification. Artificial Intelligence Review, 56(Suppl 2): 2759-2787. https://doi.org/10.1007/s10462-023-10589-y

[30] Modi, P., Kumar, Y. (2023). Smart detection and diagnosis of diabetic retinopathy using bat based feature selection algorithm and deep forest technique. Computers & Industrial Engineering, 182: 109364. https://doi.org/10.1016/j.cie.2023.109364

[31] Usman, T.M., Saheed, Y.K., Ignace, D., Nsang, A. (2023). Diabetic retinopathy detection using principal component analysis multi-label feature extraction and classification. International Journal of Cognitive Computing in Engineering, 4: 78-88. https://doi.org/10.1016/j.ijcce.2023.02.002

[32] Diabetic Retinopathy 224×224 (2019 Data). (2019). https://www.kaggle.com/datasets/sovitrath/diabetic-retinopathy-224x224-2019-data.