Composite Localization for OD Segmentation from Retinal Images Through Circular Hough Transform

Azra Fatima, Eepuri Kiran Kumar*


Corresponding Author Email: kiraneepuri@kluniversity.in

Pages: 3295-3304 | DOI: https://doi.org/10.18280/ts.410645

Received: 17 October 2023 | Revised: 20 March 2024 | Accepted: 12 October 2024 | Available online: 31 December 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

In retinal imaging, the Optic Disc (OD) is a key feature that indicates the characteristics of many eye illnesses, such as Diabetic Retinopathy (DR) and glaucoma. Accurate OD segmentation is essential for proper diagnosis, but it is a challenging task due to several distractors such as noise, contrast abnormalities, and retinal vessels. Hence, this paper proposes a new two-fold method for OD segmentation: OD localization followed by OD segmentation. The OD localization finds a sub-region of the retinal image through an OD pixel derived by three different methods, namely the template matching method, the maximum entropy method, and the vessel density map method. Initially, the three methods determine three OD candidate pixels, from which the final OD pixel is selected. The second fold first removes the blood vessels and then extracts the OD boundary through the Circular Hough Transform (CHT). Experimental investigations on two publicly available datasets, MESSIDOR and DRIVE, prove the superiority of the proposed approach.

Keywords: 

Optic Disc (OD), Diabetic Retinopathy (DR), template matching, entropy, vessel density, blood vessel removal, circular Hough transform, overlap score

1. Introduction

Recently, automated and early diagnosis of eye-related diseases through retinal image processing has gained great significance, especially for sight-threatening diseases like DR and glaucoma [1-3]. Among these disorders, DR is the leading cause of blindness, affecting almost 415 million people. DR is most commonly caused by long-term diabetes and can lead to temporary or permanent vision loss [4]. The initial stages of DR are hard to recognize because they have very little impact on vision, but the advanced stages have a serious impact and can cause even permanent vision loss. Hence, early diagnosis is needed to stop the adverse effects of DR. In this regard, retinal image processing has taken the lead position because manual diagnosis leads to excess delay [5]. Analyzing retinal image attributes helps in the earliest and fastest diagnosis of DR. The major attributes of retinal images, shown in Figure 1, are the retinal vessels, OD, hard exudates, and neovascularization. Among these attributes, the OD and retinal vessels are the ones mainly employed for DR screening.

Automatic segmentation of the OD from retinal images plays a very important role in the diagnosis of DR. Additionally, OD segmentation is also utilised as a first step in several other methods for segmenting retinal images. For example, during the identification of exudates from retinal images, prior OD segmentation helps in reducing the false-positive count because the OD and exudates have similar pixel intensity values. Further, the macula lies at a nearly constant distance from the OD, hence OD segmentation helps in the accurate localization of the macula [6]. OD segmentation is a difficult job because the OD comes in many different sizes, shapes, and colours, yet it is very useful for research into eye diseases. Further, the OD boundary suffers from uneven contrast, and the blood vessels emerge from the center of the OD. Such factors are the major distractors of OD segmentation. Although supervised learning approaches can address these challenges, they give rise to significant computational complexity, making them unsuitable for practical applications.

Figure 1. Retinal image attributes

Therefore, this research presents a novel framework for segmenting the OD consisting of two phases: OD localization and OD segmentation. Our primary objective in the initial stage is to extract the OD area from the input retinal image. Towards such localization, we employ three different methods, namely the template matching method, the maximum entropy method, and the vessel density map method. The distance between each of the three OD candidate pixels and their centroid is used to select the final OD pixel. Based on the OD pixel, the OD region is extracted dynamically according to the dimensions of the input retinal image. Next, in the OD segmentation, the blood vessels are first removed and then the Circular Hough Transform (CHT) is applied for OD boundary determination.

The remaining portion of the study is structured as follows: Section 2 presents an in-depth analysis of the existing literature. Section 3 outlines the specifics of the proposed OD segmentation scheme. Section 4 describes the experimental setup, datasets, and performance metrics, Section 5 discusses the results, and Section 6 concludes the study.

2. Literature Survey

In the past, various methodologies have been proposed due to the significant role of the OD in DR diagnosis [7-9]. To analyze color retinal pictures, Wisaeng et al. [10] presented a new OD segmentation approach that makes use of mathematical morphology and a marker-controlled watershed algorithm. To improve the quality of the retinal image, they implemented colour image normalisation, image enhancement, and noise elimination as preprocessing methods. They tested their approach on two colour retinal image datasets, the Thailand dataset and the STARE dataset. To segment the OD from retinal images, Gao et al. [11] implemented an adaptive level-set-based contour extraction method predicated on saliency and thresholding. In addition, they suggested a modified LIF method that incorporates shape prior information to address the unreliable information produced by abnormalities in pixel intensities. Experimental validation is conducted using the publicly accessible dataset named DIARETDB0.

Civit-Masot et al. [12] developed a diagnostic tool to diagnose glaucoma using eye fundus pictures. The overall system consists of two subsystems. The first subsystem extracts positional and morphological features from the retinal image and employs segmentation and machine learning techniques to independently identify the optic cup and disc. The second subsystem then employs a pre-trained model based on Convolutional Neural Networks (CNNs) to diagnose glaucoma. The classification results are more precise when the results from the two subsystems are combined. The "Minimising Entropy and Fourier Domain Adaption Network (MeFDA)" was proposed by Xu et al. [13] to enhance OD segmentation performance. Initially, they conducted an adversarial optimisation on the entropy maps of the estimated segmentation results to mitigate the domain shift. Subsequently, they applied entropy minimisation over the unlabelled target-domain data to enhance the credibility of the segmentation map predictions.

Gao et al. [14] developed the "Locally Statistical Active Contour Model with the Information of Appearance and Shape (LSACM-AS)" and the "Modified Locally Statistical Active Contour Model (MLSACM-AS)" for OD segmentation. The LSACM addresses the generalised inhomogeneity issues that arise in images because of illumination variations. They also included local image probability information in a multi-dimensional feature space around the site of interest to reduce the effect of pathological changes and vascular occlusions on OD segmentation. The publicly available DRISHTI-GS dataset is used for experimental validation. Using an active contour based on basis splines, Gagan et al. [15] developed a completely automated approach for OD segmentation from retinal images [16]. As part of the segmentation process, the active contour is scaled, rotated, and translated. Using gradient descent and Green's theorem, they fine-tuned five parameters to obtain the optimal fit on the OD, and they optimized the active contour's energy by defining it through the local contrast [17]. The OD is identified using a multi-resolution normalised cross-correlation procedure. Experimental validation is conducted using MESSIDOR, DRISHTI-GS, and RIGA.

Wang et al. [18] proposed a coarse-to-fine deep learning approach based on the U-Net CNN model for the precise segmentation of the OD from retinal images. The U-Net model was trained separately on colour and grayscale images, producing two distinct results. The results are then combined using an overlap strategy to identify the local image portion, which is the OD candidate region, and this result is submitted to the U-Net again for further segmentation. An improved U-Net was proposed by Liu et al. [19] for the segmentation of the OD from retinal images. The higher-order consistency between the output and ground-truth images was also improved by the addition of a patch-level adversarial network. Furthermore, they implemented a novel loss function to resolve the class imbalance between pixels within a restricted target area. Validation is conducted using RIM-ONEv3 and DRISHTI. A domain adaptation framework for the detection of the OD from retinal images was proposed by Kadambi et al. [20] using a Wasserstein Generative Adversarial Network (WGAN) [21]. Typical adversarial models are significantly inferior to the WGAN in terms of stability and convergence.

The OD was extracted from fundus pictures using a Particle Swarm Optimization (PSO)-augmented ensemble of deep neural networks, as suggested by Zhang and Lim [22]. They implemented a diversified search process with six search mechanisms founded on an improved PSO. Additionally, they implemented Mask R-CNN, a superior transfer learning mechanism for segmentation, optimised through PSO. Specifically, PSO is used to optimize two parameters of the transfer learning procedure: the learning rate and the momentum. Jiang et al. [23] introduced a CNN model called JointRCNN for optic disc and cup detection and segmentation. The JointRCNN is a hybrid of two models, the Cup Proposal Network (CPN) and the Disc Proposal Network (DPN), designed to generate bounding boxes for the optic cup and optic disc, respectively. A disc attention module is suggested to integrate the DPN and CPN, which selects the appropriate bounding box of the OD and then continues the propagation for OD detection.

For automatic OD segmentation, Fu et al. [24] combined a model-driven probability bubble model with the U-Net. The probability bubble model is founded on the positional relationship between the OD and the vessels. Localization results are fused into the output layer of the U-Net through the computation of a joint probability. Xiong et al. [25] also employed the U-Net for OD segmentation and proposed a weak-label-based Bayesian U-Net built on the Hough transform. They constructed a probabilistic graphical model and investigated the Bayesian approach using the conventional U-Net model. To optimise the Bayesian U-Net, they implemented the expectation-maximisation algorithm to forecast the OD mask and revise its weights.

Roychowdhury et al. [26] suggested an OD classification system to find the OD and vessel-origin boundary. Using a Gaussian Mixture Model (GMM) and six region-oriented features, OD regions were distinguished from non-OD regions. An efficient ellipse-fitting method was employed to obtain the circular shape of the OD. A robust OD detection framework was proposed by Reddy et al. [27]. This framework commences with the identification of the OD pixel (ODP), which is the centre of the OD region. Subsequently, a sub-image is extracted around the ODP and subjected to morphological processing to facilitate blood vessel elimination. A novel Edge Density Filter (EDF) then separates the OD region from the vessel-removed, edge-detected, and binarized sub-image.

3. Proposed OD Segmentation

3.1 Overview

The main intention of this approach is to extract the complete OD structure from retinal images. Towards this, we propose a new method which segments the complete OD in two stages: (1) OD localization and (2) OD segmentation. The OD localization stage aims at determining the Region of Interest (RoI) where the OD resides in the retinal image. For this, we apply three different methods, namely the template matching method, the maximum entropy method, and the vessel density map method. Three OD candidate pixels are identified through the three methods, and one candidate is finalized based on their distance from the centroid. Once the OD pixel is identified, the OD region is localized based on a generalized mechanism. The localized OD image, which contains very few vessels, is then processed for OD segmentation: this work first eliminates the blood vessels and then finds the OD boundary using the Circular Hough Transform (CHT). Figure 2 shows the block diagram of the proposed OD segmentation mechanism.

3.2 OD localization

During this stage, we isolate the OD region by utilizing the seed point of the OD. We apply three distinct techniques on the green channel of the retinal image for this purpose. The three techniques follow different methodologies to determine the seed point of the OD, and each identifies one pixel as a seed point. Next, from the three OD candidate pixels, one pixel is selected as the final OD pixel. For this purpose, the centroid of the three OD candidate pixels is calculated. Then each candidate pixel is compared with the centroid, and the pixel located nearest to the centroid is finalized as the OD pixel. If two candidate pixels are equally close to the centroid, then the final OD pixel is determined as the average of these two pixels. All three methods are applied on the green plane only because it best visualizes the OD and the retinal vessels. The details of the three methods are discussed here.

Figure 2. Block diagram of proposed OD segmentation mechanism

3.2.1 Template matching method

This technique locates the OD pixel based on the image gray-level intensity. A template with dimensions of 21×21 is slid over the image so that every pixel serves as a center pixel [28]. For every template position, we extract the maximum difference by computing the difference between the maximum and minimum pixel intensities. Let $T_k^{\max }(i, j)$ and $T_k^{\min }(i, j)$ be the two pixels with maximum and minimum intensities in the kth template; then the difference is computed as follows:

$T_k(i, j)=T_k^{\max }(i, j)-T_k^{\min }(i, j)$         (1)

Then the candidate pixel through the template matching method is computed as:

$T M_C=\underset{(i, j)}{\operatorname{argmax}} T_k(i, j)$            (2)

Figure 3. (a) Original color retinal image, (b) Green channel and (c) Candidate OD pixel located

Since the OD is the brightest region in the retinal image, it can be determined based on the maximum difference in pixel intensities, which is the main idea behind the template matching method. Figure 3 shows the result of the template matching method.
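As a minimal illustration (the authors' implementation is in MATLAB, so the function name and the use of `scipy.ndimage` are our own assumptions), the max-min template search of Eqs. (1)-(2) can be sketched in Python with sliding-window filters; the 21×21 template size follows the text:

```python
import numpy as np
from scipy import ndimage

def template_matching_candidate(green, size=21):
    """Locate the OD candidate pixel as the center of the 21x21
    window with the largest max-min intensity difference (Eqs. (1)-(2))."""
    green = green.astype(np.float64)
    # Sliding-window maximum and minimum over size x size neighborhoods
    t_max = ndimage.maximum_filter(green, size=size)
    t_min = ndimage.minimum_filter(green, size=size)
    diff = t_max - t_min                       # Eq. (1)
    # The window center with the largest difference is the candidate, Eq. (2)
    return np.unravel_index(np.argmax(diff), diff.shape)
```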

3.2.2 Maximum entropy method

The OD region and the vessels in the retinal image exhibit large gray-level variations. The bright pixels of the OD surrounded by dark vessel pixels introduce a huge gray-level variance in the OD region. Using this notion, we suggest an entropy-based algorithm that identifies the OD pixel by determining the highest gray-level variation. The variance is calculated statistically using a window centered on each pixel. Let W(i,j) represent a window of size P×Q centered on a pixel, and $\mu_W$ denote the mean of all pixels inside that window. The gray-level variance is then calculated as follows:

$\sigma(i, j)=\frac{1}{P Q-1} \sum_{p=1}^P \sum_{q=1}^Q\left(W(p, q)-\mu_W\right)^2$             (3)

And

$M E_C=\underset{(i, j)}{\operatorname{argmax}} \sigma(i, j)$        (4)    

where, $\mu_W$ is computed as follows:

$\mu_W=\frac{1}{P Q} \sum_{p=1}^P \sum_{q=1}^Q W(p, q)$           (5)

We set the window size to 51×51 in order to calculate the gray-level variance of each pixel. The pixel selected by this approach is the pixel with the highest variance that is surrounded by at least 10 brighter pixels in its vicinity. Figure 4 shows the result of the maximum entropy method.

Figure 4. (a) Original color retinal image, (b) Green channel and (c) Candidate OD pixel located
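A sketch of this local-variance search under the same Python/SciPy assumptions as before; it computes the unbiased variance of every 51×51 window via uniform filters (Eqs. (3)-(5)) and omits, for brevity, the check that the winning pixel is surrounded by at least 10 brighter pixels:

```python
import numpy as np
from scipy import ndimage

def max_entropy_candidate(green, size=51):
    """Locate the OD candidate pixel as the center of the 51x51
    window with the highest local gray-level variance (Eqs. (3)-(5))."""
    green = green.astype(np.float64)
    # Local mean (Eq. (5)) and local mean of squares via uniform filters
    mean = ndimage.uniform_filter(green, size=size)
    mean_sq = ndimage.uniform_filter(green ** 2, size=size)
    n = size * size
    # Unbiased local variance, matching the 1/(PQ - 1) factor in Eq. (3)
    var = (mean_sq - mean ** 2) * n / (n - 1)
    return np.unravel_index(np.argmax(var), var.shape)   # Eq. (4)
```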

3.2.3 Vessel density map method

The main idea behind this method is that the vessel structure within the vicinity of the OD has a higher density than the vessels in other regions [29]. To measure this density, the retinal vessels need to be identified. For this purpose, we slide a window of size 21×21 over the retinal image and count the vessel pixels [30]. After placing the window on a particular region with a center pixel, the region is processed for edge detection. Since the retinal vessels are high-frequency components, edge filters identify them. Here we apply the Prewitt edge detector for vessel determination; the edge pixels are represented with 1 and non-edge pixels with 0. Based on these labels, the vessel density map is computed; the region with the maximum value is considered the OD region, and the pixel located at its center is taken as the OD candidate pixel.

$V D_C=\underset{(i, j)}{\operatorname{argmax}} D(i, j)$           (6)

where,

$D(i, j)=\sum_{p=1}^P \sum_{q=1}^Q d_v(p, q)$           (7)

where $d_v(p, q)$ is a vessel pixel, designated with one, inside the window centered at (i, j). Only the pixels designated with one are considered when computing the vessel density map. The region with the highest vessel density is treated as the region within the vicinity of the OD, and the pixel located at its center is picked as the candidate OD pixel. Figure 5 shows the result of the vessel density map method.

Figure 5. (a) Original color retinal image, (b) Green channel and (c) Candidate OD pixel located
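A sketch of the vessel density map under the same assumptions; `skimage.filters.prewitt` supplies the edge magnitude, and the binarization threshold (mean plus two standard deviations) is our own choice since the text does not specify one:

```python
import numpy as np
from scipy import ndimage
from skimage.filters import prewitt

def vessel_density_candidate(green, size=21):
    """Locate the OD candidate pixel as the center of the 21x21
    window containing the most vessel (edge) pixels (Eqs. (6)-(7))."""
    # Prewitt edge magnitude; vessels respond as high-frequency structures
    edges = prewitt(green.astype(np.float64))
    # Binarize: edge pixels -> 1, non-edge pixels -> 0 (assumed threshold)
    vessel = (edges > edges.mean() + 2 * edges.std()).astype(np.float64)
    # Summing the binary map over each window yields the density map D(i, j)
    density = ndimage.uniform_filter(vessel, size=size) * size * size
    return np.unravel_index(np.argmax(density), density.shape)
```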

Once the OD candidate points are derived with the three methods, a centroid is calculated as:

$\begin{aligned} C =\left(\frac{T M_C(x)+M E_C(x)+V D_C(x)}{3}, \frac{T M_C(y)+M E_C(y)+V D_C(y)}{3}\right)\end{aligned}$      (8)

Then the distance from the centroid is computed for each OD candidate point; let these distances be denoted as $d_{C, T M_C}$, $d_{C, M E_C}$ and $d_{C, V D_C}$. The final OD pixel is the candidate closest to C. If two points are found to be equally close to C, then the OD pixel is the mean of those two points.
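The fusion rule can be sketched as follows; the tie tolerance used by `np.isclose` is an assumption, as the text does not define how "equally close" is judged:

```python
import numpy as np

def fuse_candidates(tm, me, vd):
    """Fuse the three OD candidate pixels (Eq. (8)): pick the candidate
    closest to their centroid; average the two nearest ones on a tie."""
    pts = np.array([tm, me, vd], dtype=np.float64)
    c = pts.mean(axis=0)                       # centroid C, Eq. (8)
    d = np.linalg.norm(pts - c, axis=1)        # distances to the centroid
    order = np.argsort(d)
    if np.isclose(d[order[0]], d[order[1]]):   # two equally close candidates
        return tuple(pts[order[:2]].mean(axis=0).astype(int))
    return tuple(pts[order[0]].astype(int))
```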

3.2.4 Localization  

Once the OD pixel is obtained through the above process, it is utilized to get the OD region. Some earlier methods directly crop the image with a fixed row and column size, but this is not suitable for all retinal images because some have more height and less width, and others less height and more width. Hence, we propose a generalized mechanism to localize the OD region with the help of the OD pixel. Let $M \times N$ be the size of the retinal image, $L_x$ and $L_y$ be the x- and y-coordinates of the OD pixel, and $x_m$ and $y_m$ be the starting position of the OD region to be cropped; they are obtained as:

$x_m=\left\|L_x-\left(\frac{w_l}{2}\right)\right\|$            (9)

$y_m=\left\|L_y-\left(\frac{h_l}{2}\right)\right\|$          (10)

where $w_l$ and $h_l$ are the width and height of the localized region; they are computed from the original image size as follows:

$w_l=\frac{M}{2}, \quad h_l=\frac{N}{2}$           (11)

Finally, the cropped image is represented with four values as a rectangular window:

$R W=\left[x_m\ y_m\ w_l\ h_l\right]$               (12)

This technique is used mainly so as not to alter the OD shape: if the system employed a constant-size rectangular window for all the datasets, the OD would become shrunk or stretched either horizontally or vertically. Such damage to the OD creates problems in OD boundary determination. Therefore, we chose half the number of rows and columns of the input retinal image for the height and width of the rectangular window, so that variations in the shape of the OD can be effectively managed. This dynamic localization greatly helps the OD boundary determination using the CHT because it has no effect on the shape of the OD.
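A sketch of the dynamic cropping of Eqs. (9)-(12); treating x as the row index and interpreting ‖·‖ as the absolute value are our assumptions:

```python
import numpy as np

def localize_od(image, od_pixel):
    """Crop a dynamic OD region around the OD pixel (Eqs. (9)-(12)):
    the window is half the image size in each dimension."""
    M, N = image.shape[:2]
    Lx, Ly = od_pixel                  # OD pixel coordinates
    w, h = M // 2, N // 2              # Eq. (11): half of each dimension
    xm = abs(Lx - w // 2)              # Eq. (9)
    ym = abs(Ly - h // 2)              # Eq. (10)
    # RW = [xm, ym, w, h], Eq. (12)
    return image[xm:xm + w, ym:ym + h], (xm, ym, w, h)
```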

3.3 OD segmentation

Once the OD region is localized from the input retinal image, it is subjected to segmentation. This phase follows a two-stage process: in the first stage, the blood vessels are removed from the OD image, and in the second stage, the OD boundary is determined through the CHT.

3.3.1 Blood vessels removal

Blood vessels are among the main objects that obstruct OD segmentation, so removing them is crucial. Blood vessels can originate even in the middle of the OD, because the OD is considered the point of origin of these vessels. Hence, to remove the blood vessels from the OD image, our method applies mathematical morphology. In general, the retinal vessels are linear and elongated structures with longer lengths and shorter widths; let L be the length and W ≪ L be the width. Moreover, the retinal vessels in the OD region look much darker and maintain constant gray-level intensities. Based on these attributes, we apply morphology through a linear structuring element rotated about each pixel. The Structuring Element (SE) has length $l_{SE}$ and width 1, and it is rotated with an angular step of 20°. At every rotation, our method finds the maximum variance; mathematically, this is described as follows:

$I_n(i, j)=\max _\theta\left(I_{S E}^\theta(i, j)\right)$             (13)

Here, $I_{SE}^\theta$ is the image processed with the structuring element rotated by θ, and $I_n$ is the OD image that does not contain any vessels. Since the result at each pixel is taken from the rotation with the highest gray-level variance, the variance is calculated as follows:

$\sigma_k(i, j)=\operatorname{Var}\left(I_{S E}^\theta(i, j)\right), k=1,2, \ldots, 9$        (14)

With the help of the above equation, the rotation with the maximum gray-level variance is identified as:

$\theta=\underset{k}{\operatorname{argmax}}\left(\sigma_k(i, j)\right)$           (15)

where $\sigma_k(i, j)$ denotes the pixel variance at the kth rotation, and k varies from 1 to 9 (covering 180° in 20° steps). Figure 6 shows the OD region before and after blood vessel removal.

Figure 6. OD image (a) before blood vessels removal and (b) after blood vessels removal
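A sketch of the vessel removal step under stated assumptions: a width-1 line SE of assumed length 15 is rotated in 20° steps, and Eq. (13) is realized as a pixelwise maximum over grayscale closings, which fill dark structures thinner than the SE (the variance-based rotation selection of Eqs. (14)-(15) is not reproduced here):

```python
import numpy as np
from scipy import ndimage
from skimage.draw import line

def line_footprint(length, angle_deg):
    """Width-1 line structuring element of the given length, rotated by angle_deg."""
    theta = np.deg2rad(angle_deg)
    c = length // 2
    dr = int(round(c * np.sin(theta)))
    dc = int(round(c * np.cos(theta)))
    fp = np.zeros((length, length), dtype=bool)
    rr, cc = line(c - dr, c - dc, c + dr, c + dc)
    fp[rr, cc] = True
    return fp

def remove_vessels(od_img, l_se=15, step_deg=20):
    """Suppress dark, elongated vessels: per pixel, take the maximum over
    grayscale closings with line SEs rotated in 20-degree steps (Eq. (13))."""
    img = np.asarray(od_img, dtype=np.float64)
    out = np.full_like(img, -np.inf)
    for angle in range(0, 180, step_deg):      # k = 1..9 rotations
        fp = line_footprint(l_se, angle)
        out = np.maximum(out, ndimage.grey_closing(img, footprint=fp))
    return out
```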

3.3.2 CHT

The OD is a bright region with a circular shape in the retinal image. Hence, we employ the CHT [31] to find the OD boundary from the localized OD image. At this phase, the radius is varied from 30 to 55. In general, the CHT is regarded as an extension of the Hough Transform and is used to detect circular objects in an image. The basic mathematical expression of the CHT is

$(X-A)^2+(Y-B)^2=R^2$          (16)

where A and B are the coordinates of the center of the circle passing through every point (X, Y), and R is the circle's radius. In our work, the radius range of 30 to 55 was obtained from simulation experiments. In the CHT, circle-shaped regions are voted for in an accumulator, and the circle with the maximum votes is determined as the OD. In this phase, the OD image is first processed with Gaussian blurring and then its edges are extracted using the Prewitt edge operator. The accumulator performs voting for each circle, and the locally maximum voted circles of the accumulator constitute the circular Hough space. The maximum-voted circle is then determined from the circular Hough space and is considered as the optic disc boundary.
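A sketch of the boundary extraction using scikit-image's circular Hough transform; the Gaussian sigma and the edge binarization threshold are assumptions, while the radius range 30-55 and the Prewitt operator follow the text:

```python
import numpy as np
from skimage.filters import gaussian, prewitt
from skimage.transform import hough_circle, hough_circle_peaks

def segment_od_boundary(vessel_free, radii=np.arange(30, 56)):
    """Detect the OD boundary as the highest-voted circle in Hough space:
    blur, extract Prewitt edges, binarize, then run the CHT (Eq. (16))."""
    blurred = gaussian(vessel_free, sigma=2)         # suppress residual noise
    edges = prewitt(blurred)
    binary = edges > edges.mean() + 2 * edges.std()  # assumed edge threshold
    hspaces = hough_circle(binary, radii)            # accumulator voting
    _, cx, cy, r = hough_circle_peaks(hspaces, radii, total_num_peaks=1)
    return cx[0], cy[0], r[0]                        # circle center and radius
```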

4. Experimental Investigations

In this section, the performance of the suggested technique is examined by applying it to several fundus images. The simulations are run in MATLAB on a personal computer with a 1 TB hard drive and 4 GB of RAM. The datasets utilised for simulation are explored first, followed by the performance measures and a comparison of the outcomes.

4.1 Datasets  

To validate our experiments, we utilised two datasets, DRIVE and MESSIDOR, whose specifics are examined here.

The DRIVE dataset consists of 40 fundus images divided into two groups of 20 images each, designated as the training group and the testing group. The fundus images were obtained using a Canon CR5 non-mydriatic 3CCD camera with a Field of View (FOV) of 45°. The spatial resolution of each image is 565×584 pixels. In addition, the DRIVE dataset includes vessel images that have been manually segmented by medical specialists. The images were carefully segmented by two distinct observers, so two different annotations can be consulted. These manually segmented images serve as ground-truth images and are used to assess performance.

MESSIDOR [32]: This dataset consists of 1200 retinal fundus images, making it the biggest dataset of its kind. All subjects were photographed with a "non-mydriatic 3CCD camera (Topcon TRCNW6)" with an FOV of 45°. The images in this collection have various resolutions: 1440×960, 2240×1488, and 2304×1536. The collection exclusively consists of images in the .TIFF format. Out of the 1200 retinal images, 800 were taken with pupil dilation and the remaining 400 without. The provided standard reference includes the DR grade as well as the risk of Macular Edema for each image.

4.2 Performance metrics

Several performance measures are used for the examination. First, the segmentation results are used to compute four reference measures: True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN).

(i) TP: the number of OD pixels correctly labeled as OD.

(ii) TN: the number of non-OD pixels correctly labeled as non-OD.

(iii) FP: the number of non-OD pixels incorrectly labeled as OD.

(iv) FN: the number of OD pixels incorrectly labeled as non-OD.

From these reference measures, sensitivity (true positive rate), specificity (true negative rate), accuracy, Dice Coefficient (DC), and OD Overlap (ODO) are derived as follows:

$\operatorname{Sensitivity}(S n)=\frac{T P}{T P+F N}$           (17)

Specificity $(S p)=\frac{T N}{T N+F P}$              (18)

$\operatorname{Accuracy}(A c)=\frac{T P+T N}{T P+T N+F P+F N}$            (19)

$D C=\frac{2 * T P}{(2 * T P+F P+F N)}$          (20)

$O D O=\frac{T P}{T P+F P+F N}$           (21)
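For reference, Eqs. (17)-(21) can be computed from binary predicted and ground-truth masks as in the following sketch (again an illustrative Python snippet, not the authors' code):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute Eqs. (17)-(21) from binary predicted and ground-truth OD masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)        # OD pixels labeled OD
    tn = np.sum(~pred & ~truth)      # non-OD pixels labeled non-OD
    fp = np.sum(pred & ~truth)       # non-OD pixels labeled OD
    fn = np.sum(~pred & truth)       # OD pixels labeled non-OD
    return {
        "sensitivity": tp / (tp + fn),                    # Eq. (17)
        "specificity": tn / (tn + fp),                    # Eq. (18)
        "accuracy": (tp + tn) / (tp + tn + fp + fn),      # Eq. (19)
        "dice": 2 * tp / (2 * tp + fp + fn),              # Eq. (20)
        "odo": tp / (tp + fp + fn),                       # Eq. (21)
    }
```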

5. Results and Discussion

Figure 7 displays the outcomes of the suggested strategy on the MESSIDOR dataset, while Figure 8 displays the results obtained on the DRIVE dataset. From the segmented results, we can see that the proposed approach succeeds in complete OD segmentation. Moreover, the OD boundary highlighted with a black mark is fully continuous, which denotes that the OD is segmented correctly even at breakages caused by outgoing vessels. Since the proposed approach employs the CHT to determine the OD boundary, it succeeds in determining the complete OD structure.

Figure 7. Results of MESSIDOR dataset (a) Input color retinal image, (b) Retinal Sub-image, (c) OD Image after the removal of blood vessels and (d) Segmented OD

Table 1 shows the performance metrics calculated over the retinal images of the DRIVE and MESSIDOR datasets. Due to space limitations, only the results of 20 images from each dataset are listed. On average, the proposed method gained sensitivity, specificity, ODO and DC on the MESSIDOR dataset of 0.8988, 0.9272, 0.8619 and 0.9164, respectively. Similarly, the average performance on DRIVE is 0.8910, 0.8352, 0.8140 and 0.9097 for sensitivity, specificity, ODO and DC, respectively.

Table 1. Performance metrics over retinal images of DRIVE and MESSIDOR datasets

Image   MESSIDOR                            DRIVE
        Sn      Sp      ODO     DC          Sn      Sp      ODO     DC
S1      0.8986  0.9310  0.8687  0.9233      0.8925  0.8425  0.8207  0.9164
S2      0.8519  0.9345  0.8220  0.8766      0.8521  0.7948  0.7740  0.8697
S3      0.8525  0.9221  0.8226  0.8772      0.8464  0.7954  0.7746  0.8703
S4      0.8735  0.9438  0.8436  0.8982      0.8674  0.8164  0.7956  0.8913
S5      0.9009  0.9268  0.8710  0.9256      0.8948  0.8440  0.8230  0.9187
S6      0.8592  0.9398  0.8293  0.8839      0.8531  0.8021  0.7813  0.8770
S7      0.8522  0.9227  0.8223  0.8769      0.8461  0.7951  0.7743  0.8700
S8      0.9286  0.9198  0.8987  0.9533      0.9225  0.8715  0.8507  0.9464
S9      0.9335  0.9273  0.9036  0.9582      0.9274  0.8764  0.8556  0.9513
S10     0.9110  0.9152  0.8811  0.9357      0.9049  0.8539  0.8331  0.9288
S11     0.8616  0.9245  0.8317  0.8863      0.8555  0.8045  0.7837  0.8794
S12     0.8910  0.9257  0.8611  0.9157      0.8849  0.8339  0.8131  0.9088
S13     0.8680  0.9210  0.8381  0.8927      0.8619  0.8109  0.7901  0.8858
S14     0.9419  0.9070  0.9120  0.9666      0.9361  0.8848  0.8640  0.9597
S15     0.9486  0.9249  0.9187  0.9733      0.9399  0.8915  0.8707  0.9664
S16     0.9012  0.9276  0.8713  0.9259      0.8951  0.8541  0.8233  0.9190
S17     0.8716  0.9427  0.8417  0.8963      0.8655  0.8145  0.7937  0.8894
S18     0.9111  0.9222  0.8812  0.9358      0.9050  0.8540  0.8332  0.9289
S19     0.9737  0.9381  0.8638  0.9184      0.9405  0.8366  0.8159  0.9116
S20     0.9444  0.9282  0.8545  0.9091      0.9281  0.8273  0.8103  0.9060

Figure 9 and Figure 10 show the impact of OD localization on the OD segmentation performance with respect to sensitivity and overlap score, respectively. Unlike most existing methods, which use only one mechanism to find the OD pixel, we employ three methods for OD pixel determination. Such an integrated mechanism improves the accuracy of the OD pixel position, i.e., it identifies the correct OD pixel. Hence, the fused mechanism gained a higher sensitivity and overlap score than the individual methods. The average sensitivity of the fused mechanism is 0.8967, whereas for the separate approaches it is 0.8741, 0.8566, and 0.8333 for template matching, maximum entropy, and vessel density map, respectively. Additionally, the fused approach has an average overlap score of 0.7810, whereas the separate techniques score 0.7555, 0.7436, and 0.7174 for template matching, maximum entropy, and vessel density map, respectively.

Figure 8. Results for DRIVE dataset (a) color retinal image, (b) OD image and (c) Segmented OD

Figure 9. Sensitivity at different OD localization methods

Figure 11 shows the impact of OD localization on the OD segmentation performance with respect to specificity. From the results, it can be noticed that the proposed fused mechanism gained better specificity because it localizes the OD pixel most precisely. Compared with the individual localization methods, the fused localization considers more candidate OD pixels and hence locates the OD pixel center better. The average specificity of the fused method is 0.8368, while for the individual methods it is 0.7506, 0.6638, and 0.6496 for template matching, maximum entropy, and vessel density map, respectively. Next, Figure 12 shows the area under the ROC curve for the proposed OD segmentation method. The ROC indicates good performance, as the coinciding point of sensitivity and specificity is observed at 0.9956.

Figure 10. Overlap score at different OD localization methods

To check the effectiveness of the proposed approach under different conditions such as low contrast and noise, we applied it to a low-quality image dataset, the High Resolution Fundus (HRF) database [33]. HRF images are contaminated with noise and have abnormal contrast levels; due to these disturbances, OD extraction is very tough. A Canon CR-1 fundus camera with a 45° field of view and different acquisition settings was used to capture 18 image pairs of the same eye in 18 subjects. The examination had to be repeated because the first image of each pair was of low quality. The fields of view of the two images are nearly the same, though small shifts occurred because the eyes moved between shots. For the performance review, the low-quality images are considered, and the proposed method is applied to them. This low-quality data was collected by Budai et al. [34].

Figure 11. Specificity at different OD localization methods

For the HRF database, Figure 13 and Figure 14 show the impact of OD localization on the OD segmentation performance with respect to sensitivity and specificity, respectively. Compared with the results on MESSIDOR and DRIVE, the results on HRF are much lower due to the noisy and low-contrast nature of the retinal images. The average sensitivity of the fused process is 0.6768, while for the individual methods it is 0.4054, 0.5296 and 0.5656 for template matching, maximum entropy and vessel density map, respectively. Similarly, the average specificity of the proposed fused localization method is 0.5450, while for template matching, maximum entropy and vessel density map it is 0.5247, 0.5165 and 0.4875, respectively. Though these values demonstrate lower performance, they are reasonable for the HRF database because its images are very abnormal and of low quality.

Figure 12. Area under ROC curve

Figure 13. Sensitivity at different OD localization methods on HRF database

Figure 14. Specificity at different OD localization methods on HRF database

Figure 15 shows the comparison between the proposed and existing OD segmentation methods: OD segmentation through a Gaussian Mixture Model (OD-GMM) and OD segmentation through an Edge Density Filter (OD-EDF). OD-GMM applies a supervised, pixel-wise classification mechanism to segment the OD from retinal images; each pixel is described with a set of features and then classified through the GMM algorithm. However, pixel-wise classification for OD segmentation is not a practically viable solution, because some retinal images contain a large number of pixels, which places a high computational burden on the segmentation system. On the other hand, OD-EDF applies a mechanism similar to our proposed method, i.e., OD localization followed by OD segmentation. They also employ three techniques for OD localization; nonetheless, the OD pixel is not always correctly detected, leading to restricted performance on some images. The suggested strategy outperformed the existing methods because it applies three efficient localization methods for OD pixel determination and the CHT for OD boundary extraction. Moreover, our method also involves an effective blood vessel removal step in the OD image, which helps in the clear identification of the OD boundary.

Figure 15. Accuracy comparison between different methods at different datasets

Sensitivity is computed as the ratio of truly detected OD pixels to the total OD pixels. For the computation of sensitivity, we first measure TP as the accumulated count of correctly detected OD pixels among the true OD pixels; the larger the TP, the larger the sensitivity. Next, the pixels that are grouped as non-OD pixels but are originally OD pixels are counted as False Negatives. From the results, the proposed approach is observed to have better sensitivity than all the existing methods because it applies pixel-by-pixel scanning over the binary image. On average, the proposed approach obtains an overlap score of 0.9200, while the existing methods gain 0.8852, 0.9150, 0.8990 and 0.9023.

To assess the proposed approach's performance, the sensitivity, i.e., the proportion of correctly identified positive instances, was measured. Table 2 compares the sensitivity values, showing that the proposed method consistently outperforms the conventional approaches, highlighting its robustness and improved detection capability.

Table 2. Sensitivity comparison between proposed and conventional approaches

Method        Sensitivity
              MESSIDOR   DRIVE
OD-EDF [27]   0.9333     0.8888
OD-GMM [26]   0.9008     0.8485
Proposed      0.9445     0.8996

The computational complexity of the proposed OD segmentation method is $O\left(N d^3+m^3\right)+O(n)$, whereas the complexity of the existing methods is observed to be higher: OD-EDF is $O\left(N^2\right)$ and OD-GMM is $O\left(N^2\right)+O(N \log N)$.

6. Conclusion

The primary focus of this work was the segmentation of the OD from retinal images. Towards this aim, a new two-fold method was proposed: in the first fold, the OD is localized from the input color retinal image, and in the second fold, the localized region is processed for OD segmentation. Three different methods are employed to localize the OD based on a seed point called the OD pixel, which is regarded as the center point of the OD. Prior to OD boundary identification with the CHT, the localized OD region is processed to remove the blood vessels. The simulation studies examine the superiority in terms of Dice coefficient, sensitivity, specificity, and overlap score on two retinal image datasets, MESSIDOR and DRIVE. The comparison with existing approaches demonstrates that the suggested method is capable of correctly segmenting the OD from various fundus images. As the OD is a major feature for the diagnosis of glaucoma and DR, its proper and accurate extraction is a major task in automated systems; the proposed approach takes on this responsibility and segments the OD from different types of images effectively.

Since OD localization and segmentation are non-trivial processes, they can be optimized through nature-inspired algorithms, which is one possible future direction to reduce the complexity. For instance, tuning the filter parameters for OD localization is an iterative process to which nature-inspired algorithms could be applied.

References

[1] Gandhi, M., Dubey, S. (2013). Evaluation of the optic nerve head in glaucoma. Journal of Current Glaucoma Practice, 7(3): 106. https://doi.org/10.5005/jp-journals-10008-1146 

[2] Ferreri, F.M. (2019). Optic Nerve. BoD–Books on Demand.

[3] Jeganathan, V.S.E., Wang, J.J., Wong, T.Y. (2008). Ocular associations of diabetes other than diabetic retinopathy. Diabetes Care, 31(9): 1905-1512. https://doi.org/10.2337/dc08-0342 

[4] Abràmoff, M.D., Niemeijer, M., Suttorp-Schulten, M.S., Viergever, M.A., Russell, S.R., Van Ginneken, B. (2008). Evaluation of a system for automatic detection of diabetic retinopathy from color fundus photographs in a large population of patients with diabetes. Diabetes Care, 31(2): 193-198. https://doi.org/10.2337/dc07-1312 

[5] Squirrell, D.M., Talbot, J.F. (2003). Screening for diabetic retinopathy. Journal of the Royal Society of Medicine, 96(6): 273-276. https://doi.org/10.1177/014107680309600604 

[6] Aquino, A., Gegúndez-Arias, M.E., Marín, D. (2010). Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques. IEEE Transactions on Medical Imaging, 29(11): 1860-1869. https://doi.org/10.1109/TMI.2010.2053042 

[7] Almazroa, A., Burman, R., Raahemifar, K., Lakshminarayanan, V. (2015). Optic disc and optic cup segmentation methodologies for glaucoma image detection: a survey. Journal of Ophthalmology, 2015(1): 180972. https://doi.org/10.1155/2015/180972 

[8] Thakur, N., Juneja, M. (2018). Survey on segmentation and classification approaches of optic cup and optic disc for diagnosis of glaucoma. Biomedical Signal Processing and Control, 42, 162-189. https://doi.org/10.1016/j.bspc.2018.01.014 

[9] Veena, H.N., Muruganandham, A., Kumaran, T.S. (2020). A Review on the optic disc and optic cup segmentation and classification approaches over retinal fundus images for detection of glaucoma. SN Applied Sciences, 2(9): 1476. https://doi.org/10.1007/s42452-020-03221-z 

[10] Wisaeng, K., Sa-Ngiamvibool, W. (2018). Automatic detection and recognition of optic disk with maker-controlled watershed segmentation and mathematical morphology in color retinal images. Soft Computing, 22(19): 6329-6339. https://doi.org/10.1007/s00500-017-2681-9 

[11] Gao, Y., Yu, X., Wu, C., Zhou, W., Lei, X., Zhuang, Y. (2019). Automatic optic disc segmentation based on modified local image fitting model with shape prior information. Journal of Healthcare Engineering, 2019(1): 2745183. https://doi.org/10.1155/2019/2745183 

[12] Civit-Masot, J., Domínguez-Morales, M.J., Vicente-Díaz, S., Civit, A. (2020). Dual machine-learning system to aid glaucoma diagnosis using disc and cup feature extraction. IEEE Access, 8: 127519-127529. https://doi.org/10.1109/ACCESS.2020.3008539 

[13] Xu, S.P., Li, T.B., Zhang, Z.Q., Song, D. (2021). Minimizing-entropy and Fourier consistency network for domain adaptation on optic disc and cup segmentation. IEEE Access, 9: 153985-153994. https://doi.org/10.1109/ACCESS.2021.3128174 

[14] Gao, Y., Yu, X., Wu, C., Zhou, W., Wang, X., Chu, H. (2019). Accurate and efficient segmentation of optic disc and optic cup in retinal images integrating multi-view information. IEEE Access, 7: 148183-148197. https://doi.org/10.1109/ACCESS.2019.2946374 

[15] Gagan, J.H., Shirsat, H.S., Kamath, Y.S., Kuzhuppilly, N.I., Kumar, J.H. (2022). Automated optic disc segmentation using basis splines-based active contour. IEEE Access, 10: 88152-88163. https://doi.org/10.1109/ACCESS.2022.3199347 

[16] Kumar, J.H., Sachi, S., Chaudhury, K., Harsha, S., Singh, B.K. (2017). A unified approach for detection of diagnostically significant regions-of-interest in retinal fundus images. In TENCON 2017 - 2017 IEEE Region 10 Conference, Penang, Malaysia, pp. 19-24. https://doi.org/10.1109/TENCON.2017.8227829 

[17] Simmons, G.F. (1996). Calculus with Analytic Geometry. New York, USA: McGraw-Hill.

[18] Wang, L., Liu, H., Lu, Y., Chen, H., Zhang, J., Pu, J. (2019). A coarse-to-fine deep learning framework for optic disc segmentation in fundus images. Biomedical Signal Processing and Control, 51: 82-89. https://doi.org/10.1016/j.bspc.2019.01.022 

[19] Liu, Y., Fu, D., Huang, Z., Tong, H. (2019). Optic disc segmentation in fundus images using adversarial training. IET Image Processing, 13(2): 375-381. https://doi.org/10.1049/iet-ipr.2018.5922 

[20] Kadambi, S., Wang, Z., Xing, E. (2020). WGAN domain adaptation for the joint optic disc-and-cup segmentation in fundus images. International Journal of Computer Assisted Radiology and Surgery, 15(7): 1205-1213. https://doi.org/10.1007/s11548-020-02144-9 

[21] Shen, J., Qu, Y., Zhang, W., Yu, Y. (2018). Wasserstein distance guided representation learning for domain adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1): 4058-4065. https://doi.org/10.1609/aaai.v32i1.11784 

[22] Zhang, L., Lim, C.P. (2020). Intelligent optic disc segmentation using improved particle swarm optimization and evolving ensemble models. Applied Soft Computing, 92: 106328. https://doi.org/10.1016/j.asoc.2020.106328 

[23] Jiang, Y., Duan, L., Cheng, J., Gu, Z., Xia, H., Fu, H., Li, C., Liu, J. (2019). JointRCNN: A region-based convolutional neural network for optic disc and cup segmentation. IEEE Transactions on Biomedical Engineering, 67(2): 335-343. https://doi.org/10.1109/TBME.2019.2913211 

[24] Fu, Y., Chen, J., Li, J., Pan, D., Yue, X., Zhu, Y. (2021). Optic disc segmentation by U-net and probability bubble in abnormal fundus images. Pattern Recognition, 117: 107971. https://doi.org/10.1016/j.patcog.2021.107971 

[25] Xiong, H., Liu, S., Sharan, R.V., Coiera, E., Berkovsky, S. (2022). Weak label based Bayesian U-Net for optic disc segmentation in fundus images. Artificial Intelligence in Medicine, 126: 102261. https://doi.org/10.1016/j.artmed.2022.102261

[26] Roychowdhury, S., Koozekanani, D.D., Kuchinka, S.N., Parhi, K.K. (2015). Optic disc boundary and vessel origin segmentation of fundus images. IEEE Journal of Biomedical and Health Informatics, 20(6): 1562-1574. https://doi.org/10.1109/JBHI.2015.2473159 

[27] Reddy, Y.M.S., Ravindran, R.E. (2020). Optic Disk Segmentation through Edge Density Filter in Retinal Images. International Journal of Innovative Technology and Exploring Engineering, 9(3): 3168-3176. https://doi.org/10.35940/ijitee.C8989.019320 

[28] Yu, H., Barriga, E.S., Agurto, C., Echegaray, S., Pattichis, M.S., Bauman, W., Soliz, P. (2012). Fast localization and segmentation of optic disk in retinal images using directional matched filtering and level sets. IEEE Transactions on Information Technology in Biomedicine, 16(4): 644-657. https://doi.org/10.1109/TITB.2012.2198668 

[29] Jerman, T., Pernuš, F., Likar, B., Špiclin, Ž. (2016). Enhancement of vascular structures in 3D and 2D angiographic images. IEEE Transactions on Medical Imaging, 35(9): 2107-2118. https://doi.org/10.1109/TMI.2016.2550102 

[30] Otsu, N. (1975). A threshold selection method from gray-level histograms. Automatica, 11(285-296): 23-27. 

[31] Abdullah, M., Fraz, M.M., Barman, S.A. (2016). Localization and segmentation of optic disc in retinal images using circular Hough transform and grow-cut algorithm. PeerJ, 4: e2003. https://doi.org/10.7717/peerj.2003 

[32] MESSIDOR: Methods for Evaluating Segmentation and Indexing techniques Dedicated to Retinal Ophthalmology. https://www.adcis.net/en/third-party/messidor/. 

[33] Staal, J., Abràmoff, M.D., Niemeijer, M., Viergever, M.A., Van Ginneken, B. (2004). Ridge-based vessel segmentation in color images of the retina. IEEE Transactions on Medical Imaging, 23(4): 501-509. https://doi.org/10.1109/TMI.2004.825627 

[34] Budai, A., Bock, R., Maier, A., Hornegger, J., Michelson, G. (2013). Robust vessel segmentation in fundus images. International Journal of Biomedical Imaging, 2013(1): 154860. https://doi.org/10.1155/2013/154860