A Novel Deep Learning Algorithm for Optical Disc Segmentation for Glaucoma Diagnosis

Geethalakshmi Rakesh*, Vani Rajamanickam

SRM Institute of Science and Technology, Ramapuram, Chennai 600089, India

Corresponding Author Email: gr7830@srmist.edu.in

Pages: 305-311 | DOI: https://doi.org/10.18280/ts.390132

Received: 27 December 2021 | Revised: 2 February 2022 | Accepted: 12 February 2022 | Available online: 28 February 2022

© 2022 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

In India, the leading cause of blindness is cataract, and the second leading cause is glaucoma, which affects approximately 11.9 million people. Misalignment of the Optic Nerve Head (ONH) is the initial symptom that helps in predicting glaucoma at an early stage. Misalignment of the optic cup and optic disc causes variation in the Cup-to-Disc Ratio (CDR), so accurate segmentation of the optic disc and cup is needed in order to calculate the CDR properly. Manual segmentation can be automated to improve accuracy. Several deep learning algorithms have been proposed to improve segmentation of the optic cup and disc, yet segmentation remains difficult because the cup and disc overlap. Here a Modified U-net model is proposed that first locates the optic disc in the retinal fundus image; disc and cup segmentation is then performed to calculate the CDR, and the results are compared with existing algorithms such as adaptive thresholding and the U-net model. The proposed and existing methods are evaluated on three publicly available datasets: RIM-ONE, DRIONS-DB and Drishti-GS1.

Keywords: 

glaucoma, deep learning, modified U-net

1. Introduction

Glaucoma is a “silent killer of eyesight” that causes irreversible vision loss. It is called a silent killer because the early symptoms are hardly noticeable before vision loss occurs. Glaucoma is the second leading cause of visual impairment throughout the world. Around 11.9 million people are affected in India and around 79.6 million worldwide as of 2020 [1], and this number increases every year. Glaucoma mainly affects people over the age of 45 years throughout the world. The diagnosis of glaucoma involves five individual parameters: Intraocular Pressure (IOP), Optic Nerve Head (ONH), Visual Field (VF), drainage angle and corneal thickness. A tonometer is widely used by ophthalmologists to measure the pressure inside the eye, which is the primary step in glaucoma diagnosis. If the pressure is abnormal, ophthalmologists manually examine the shape and colour of the eye, since glaucoma changes the shape of the optic cup, optic disc and neuroretinal rim even at an early stage; ophthalmoscopy is the screening technique used to analyse the anatomy of the retina. The third step is Standard Automated Perimetry (SAP), which maps the visual field of the eyes. The fourth is gonioscopy, which measures the drainage angle between the cornea and iris. The last step is pachymetry, which estimates the thickness of the cornea.

Among these five techniques, ONH analysis [2] is the most important since it is useful in early diagnosis of the disease. The evaluation of the ONH involves estimating the Cup-to-Disc Ratio (CDR), which is determined by segmenting the optic disc and cup and measuring their diameters. The ophthalmologist dilates the eyes with drops and then shines light onto the retina to measure the optic cup and disc diameters. This takes more time, is expensive and may produce inaccurate results when screening in highly populated regions. Computer Aided Diagnosis (CAD) [3] tools are one of the ways adopted nowadays to obtain automatic segmentation of the disc and cup, and several works have been proposed to segment the optic disc and cup from the retinal fundus image to help ophthalmologists diagnose glaucoma.
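As a concrete illustration of the CDR computation described above, the short NumPy sketch below derives the ratio from binary disc and cup masks using their vertical diameters. This is only a sketch under stated assumptions: the mask variables, the diameter-based definition and the 0.3 reference value are illustrative, not the paper's exact procedure.

```python
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Count the rows in which a binary mask contains at least one foreground pixel."""
    rows = np.any(mask > 0, axis=1)
    return int(rows.sum())

def cup_to_disc_ratio(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    # Ratio of the vertical cup diameter to the vertical disc diameter
    return vertical_diameter(cup_mask) / max(vertical_diameter(disc_mask), 1)

def is_suspect(cdr: float, threshold: float = 0.3) -> bool:
    # A CDR noticeably above ~0.3 is treated in the text as an early sign of glaucoma
    return cdr > threshold
```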

Usually, the visual features seen by the eye are converted to optical signals that pass through the optic nerve to the brain. The optic nerve starts from the Optic Disc (OD). The retina contains rods and cones, which support dim and bright vision respectively; the region of the retina where there are no rods or cones is called the blind spot, and no visual feature is extracted there. The innermost region of the OD is the Optic Cup (OC), which is brighter and covers almost 30% of the OD in a normal eye; if it covers more than this, glaucoma is suspected. Since the OD and OC overlap each other, their segmentation is a challenging task for the ophthalmologist. Ophthalmologists use different imaging modalities such as retinal fundus images, Optical Coherence Tomography (OCT) and Magnetic Resonance Imaging (MRI) to diagnose the disease. In the proposed methodology we use retinal fundus images to segment the optic disc [4] for glaucoma diagnosis. Retinal fundus images are captured using a fundus camera, and from them the optic cup, optic disc, blind spot, macula and blood vessels can be analysed.

The various parts of the retinal fundus image are shown in Figure 1. The bright central part of the image is the optic disc, and within the OD lies the overlapping optic cup. Between the OD and OC is a layer called the neuroretinal rim, which forms the boundary between them. Any variation in the size and shape of the optic cup changes the CDR, which is the initial symptom for diagnosing glaucoma, ahead of other parameters such as ocular pressure or drainage angle change.

If the cup diameter increases, the cup-to-disc ratio increases from its normal value of about 0.3. In order to calculate the CDR, segmentation of the disc and cup is therefore very important. Figure 2 shows the boundaries of the optic disc and cup in normal and glaucomatous retinal fundus images; the outer and inner dotted lines represent the optic disc and optic cup respectively.

Figure 1. Parts of the retinal fundus image

Figure 2. Retinal fundus image

Deep learning approaches to segmenting the OD and OC are currently under active research. Before segmentation, a three-step procedure is applied to enhance the segmentation process: pre-processing, feature selection and feature extraction. The first step, pre-processing, removes noise in the image and enhances image quality [5]; filters such as median, Gaussian and Canny can be used. After pre-processing, the feature to be analysed must be selected, i.e. the Region of Interest (ROI) must be found, and then features are extracted; morphological operations such as dilation and erosion, together with thresholding techniques, are adopted for feature selection and extraction. The region of interest (OD and OC) is then segmented from the retinal fundus image. Finally, classification is performed to determine the presence or absence of glaucoma.
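As an illustration of such a pipeline, the OpenCV sketch below applies median and Gaussian filtering, CLAHE contrast enhancement (mentioned again in Section 5.3), and a simple threshold-plus-morphology step for a coarse region of interest. The specific kernel sizes and parameter values are assumptions, not the exact settings used in this work.

```python
import cv2
import numpy as np

def preprocess(path: str) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.medianBlur(img, 5)              # suppress salt-and-pepper noise
    img = cv2.GaussianBlur(img, (5, 5), 0)    # smooth remaining noise
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)                   # enhance local contrast

def coarse_roi(img: np.ndarray) -> np.ndarray:
    # Otsu thresholding followed by dilation and erosion as a rough disc candidate
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((7, 7), np.uint8)
    mask = cv2.dilate(mask, kernel, iterations=1)
    return cv2.erode(mask, kernel, iterations=1)
```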

The rest of the paper is organised as follows: Section 2 reviews related work and describes existing methodologies, Section 3 describes the U-net model on which the proposed architecture is based, Section 4 presents the datasets used and the proposed methodology, Section 5 reports the results of the proposed method and compares them with existing methods, and Section 6 concludes the paper.

2. Related Works

Previous studies on eye disease prediction from retinal fundus images by segmenting the optic disc and cup using various methods are discussed below. Segmentation methods for the optic cup and disc include graph cut, graph trace, thresholding, morphological processing, machine learning and deep learning techniques. Chandrika and Nirmala [6] proposed k-means clustering to segment the optic cup and disc from colour fundus images. The segmentation is performed by region description and feature extraction, and k-means clustering is used to group features in the image. The main problem with segmentation by k-means clustering is that extracting blood vessels is difficult. Matched filters can be used to enhance the blood vessels in the fundus image, but they also enhance bright lesions caused by diabetic retinopathy, so the resulting segmentation of the optic disc and cup is not accurate.

Fernandez-Granero et al. [7] proposed an algorithm to automatically locate the optic disc and cup using colour changes in the image, computed through colour derivatives. They also considered spatial derivatives and other characteristics, such as the distance from the centre of the OD, to segment the OD and OC automatically; however, colour features do not provide the exact region of interest, so the segmentation lacks accuracy. Raj et al. [8] proposed segmentation and reconstruction of the OD and OC using Sparse Dissimilarity-constrained Coding (SDC). SDC considers both dissimilarity and sparsity constraints in the image using a set of reference discs of known CDR, and the CDR of a test image is calculated from the SDC reconstruction coefficients. This work was proposed for 2D fundus images and therefore lacks depth information, which is essential for computing the CDR; our method therefore focuses on finding the exact ROI. Liu et al. [9] proposed joint semi-supervised segmentation of the OD and OC using conditional Generative Adversarial Networks (GANs). The architecture includes a segmentation network, a generator and a discriminator, which learn the mapping between the input fundus image and its segmentation. They utilised labelled as well as unlabelled data, which helps in improving the generated maps, but the GAN takes more computational time compared with our proposed method.

Mvoulana et al. [10] proposed a fully automated methodology for glaucoma screening. It is a three-step procedure: first, the OD is detected by combining brightness-based detection and template matching; then the detected OD and OC are segmented using texture-based and model-based approaches; finally, the Cup-to-Disc Ratio is calculated to decide the presence or absence of glaucoma. The template matching technique involves two steps, computing the histogram of the retinal image and finding the brightest part of the image using matched filters, and the OD is detected from this result. For segmenting the OD and OC, a texture-based pixel matching approach and model-based boundary fitting are used: texture-based pixel matching utilises unsupervised (k-means) clustering, and the uneven boundary in the segmented image caused by blood vessels is smoothed using a fitting procedure. Finally, from the segmented optic disc and optic cup, the CDR is determined by measuring their areas, obtained by counting the number of white pixels in each segment, and glaucoma screening is done by simple binary classification with a fixed threshold. This method involves brightness-based detection and template matching, which take more time in automatic computation; in this work we focus on deep learning based segmentation to reduce the computational time.

Zhao et al. [11] proposed a semi-supervised learning scheme to calculate the CDR from fundus images. The implementation is a two-step process: first, unsupervised feature representation using the deep learning MFPPNet model is used to segment the OD and OC, then the CDR is calculated by random forest regression. This scheme includes a densely connected network, pyramid pooling and a fully connected layer to extract features from the input. MFPPNet computes more parameters when segmenting the optic disc and cup, and more parameters imply more computational time; the Modified U-net computes half the parameters of MFPPNet, so both computational time and accuracy are improved in our proposed work. Jiang et al. [12] proposed a multi-path recurrent neural network to segment the OD and OC from fundus images; the proposed multi-path recurrent network resembles the U-net architecture in having both encoding and decoding paths, with recurrent connections applied in the U-net to segment the OD and OC. Prastyo et al. [13] proposed a U-net architecture for segmenting the optic cup from retinal fundus images. The model segments the OC alone, and its drawback is that it does not detect the OC or the region of interest automatically; the ground truth mask of the OD needs to be provided for segmenting the OC. This can be improved by using a localization method to separate the red and green channels. The U-net architecture includes a large number of filters to segment the OD and OC, which takes more computational time; our Modified U-net includes half the filters compared with U-net, so the computational time is reduced.

Božić-Štulić et al. [14] proposed deep learning based optic disc and optic cup segmentation from fundus images. A mean-shift algorithm is utilised to find the Region of Interest (ROI), then training on the OD and OC is done using a labelled dataset on a fine-tuned SegNet CNN model, and after training a test image is fed to the network to calculate the Cup-to-Disc Ratio (CDR). The mean-shift algorithm lacks knowledge of colour information, focusing only on pixel values, and the segmentation accuracy is lower than that of our proposed work. Veena et al. [15] proposed two different convolutional neural network models for segmenting the OD and OC individually: the first CNN segments the optic disc from the fundus image and the second segments the optic cup, a prediction is made from each, and by combining both predictions the presence or absence of glaucoma is detected. Using two separate CNN models increases the computational time. Liu et al. [16] proposed a dense CNN for OD and OC segmentation in which the OD and OC are segmented jointly. It is a two-step process: initially the OD is detected, after which the OD and OC are segmented simultaneously using a Densely connected Depthwise Separable Convolution network (DDSC-net). The DDSC-net architecture requires more data to train the model.

3. U-Net Model

The proposed model is a Modified U-net, which is based on the U-net architecture. CNNs are widely used in image processing since they can learn high-level features from low-level data. A main application of CNNs in image processing is image classification, for which the region of interest needs to be known [17]; segmentation helps in finding the region of interest in an image. Training a network to classify images involves training on a large dataset, which takes a lot of time; an encoder-decoder network helps reduce the training time. The proposed Modified U-net architecture is a type of encoder-decoder network in which the input image and the ground truth image are provided to the network, and the ROI is segmented from the image based on the predicted probabilities.

The U-net architecture is the existing model [13]; a block diagram of the U-net model is shown in Figure 3. The architecture is U-shaped and symmetric on both sides. The left side of the U-net is the contracting path, which consists of convolutional layers performing down-sampling, and the right side is the expansive path, where transposed convolutional layers perform up-sampling. The main problem in U-net is that the spatial information obtained during up-sampling is not precise. To overcome this issue, U-net applies skip connections, which merge the spatial information from the down-sampling path with the up-sampling path. In doing so, the network contains many redundant features, which can make the system collapse during training. Modified U-net architectures are widely applied in biomedical segmentation and overcome this problem of the U-net architecture.

Figure 3. Architecture of U-net model

4. Methodology

The method proposed in this paper utilises a Modified U-net model to segment the optic disc from the fundus image. The proposed method is trained and examined on three publicly available datasets: RIM-ONE [18], DRISHTI-GS1 [19, 20] and DRIONS-DB [21]. The complete workflow of the proposed model is shown in Figure 4.

Figure 4. Work flow of proposed model

4.1 Dataset description

Three publicly available datasets were used in this method: RIM-ONE, DRISHTI-GS1 and DRIONS-DB. A detailed description of the datasets follows. The RIM-ONE dataset consists of 159 fundus images categorised into test and train sets; each set is further categorised into glaucoma and normal, both by hospital and at random. The DRISHTI-GS1 dataset consists of 101 fundus images obtained from Aravind Eye Hospital, segregated into 50 images for training and 51 images for testing. Every image in the dataset was analysed by four different experts, and all the results are grouped and described in a spreadsheet.

These images were collected from both male and female patients aged between 40 and 80 years, and the dataset consists of retinal fundus images along with the optic nerve head marked manually by four ophthalmologists. The DRIONS-DB dataset consists of 110 fundus images of resolution 600×400. The average age of the subjects in this dataset is 53 years. A detailed description of all three datasets is given in Table 1.

All three datasets contain images obtained with a colour fundus camera, with the centre position focused on the Optic Nerve Head. Segmentation of the optic disc and optic cup is important for analysing the Optic Nerve Head when predicting glaucoma. Sample images from the RIM-ONE, DRISHTI-GS1 and DRIONS-DB datasets are shown in Figure 5.

Table 1. Dataset description

Dataset        Total images   Image size
RIM-ONE        169            500×500
DRISHTI-GS1    101            2896×1944
DRIONS-DB      110            600×400

Figure 5. Sample images: (a) RIM-ONE, (b) DRISHTI-GS1, (c) DRIONS-DB

4.2 Modified U-net architecture

The proposed Modified U-net architecture is derived from the U-net architecture; Figure 6 shows the architecture of the proposed model. The proposed model contains half the filters in every convolutional layer, with an input image size of 256×256. The number of filters is reduced in order to reduce the trainable parameters, which in turn reduces the training duration of the network. The total number of parameters required to train the proposed model is 656,257. The characteristics of the various layers and functions are discussed in detail in the following subsections.

4.2.1 Convolutional layer

The convolutional layer performs the convolution operation; it reduces the size of the image to extract features efficiently. A convolutional layer comprises filters with weights. These weights are multiplied by the pixel intensities of the image and a bias value is added; the sum of the bias and the product terms over the filter window replaces the centre pixel of the window. Our proposed Modified U-net consists of 19 convolutional layers in total. The input image of size 256×256 is fed to the first convolutional layer, which has 32 filters, and the number of filters per layer varies from 32 to 64. The ReLU activation function is used in all convolutional layers except the final layer, which uses a sigmoid function. All layers use padding.

4.2.2 Max-pooling

The pooling layer is an additional layer placed after the convolutional layer, mainly meant for reducing redundant features in the feature map. Various forms of pooling can be performed on an image; common choices in CNNs are max pooling and average pooling, and in our model the max pooling operation is adopted. The pooling filter is usually small compared with the feature map, and a max pooling filter of size 2×2 is applied. Since the pooling operation reduces the size of the feature map, pooling layers are also known as down-samplers. Four max pooling layers are used in our model.

4.2.3 Up-sampler

As the name suggests, the up-sampler performs the exact opposite function of the down-sampler (max pooling). It expands the feature map by repeating a single value over the entire output window. The up-sampler also has size 2×2, producing an output feature map of 2N×2N×D from an N×N×D input. The down-sampler acts as the encoder, which reduces the feature map size, whereas the up-sampler acts as the decoder, which enlarges it. Four up-samplers are used in our model. Along with the up-samplers, copy (skip) connections are present, which perform a simple copy operation from the contracting path.
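To make the encoder-decoder structure concrete, the following Keras sketch builds a U-net-style network with two padded 3×3 convolutions per block, 2×2 max pooling, 2×2 up-sampling, skip connections and a sigmoid output, with filter counts restricted to 32 and 64 as described above. It is a minimal sketch under stated assumptions: the exact per-layer filter progression of Figure 6 is only approximated, so it is not guaranteed to reproduce the reported 656,257 parameters.

```python
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two padded 3x3 convolutions with ReLU, as used throughout the network
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def modified_unet(input_shape=(256, 256, 1)):
    inputs = layers.Input(input_shape)

    # Contracting path: convolution blocks followed by 2x2 max pooling
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, 32)
    p2 = layers.MaxPooling2D(2)(c2)
    c3 = conv_block(p2, 64)
    p3 = layers.MaxPooling2D(2)(c3)
    c4 = conv_block(p3, 64)
    p4 = layers.MaxPooling2D(2)(c4)

    b = conv_block(p4, 64)  # bottleneck

    # Expansive path: 2x2 up-sampling, concatenation with the skip connection,
    # then another convolution block
    u4 = layers.Concatenate()([layers.UpSampling2D(2)(b), c4])
    c5 = conv_block(u4, 64)
    u3 = layers.Concatenate()([layers.UpSampling2D(2)(c5), c3])
    c6 = conv_block(u3, 64)
    u2 = layers.Concatenate()([layers.UpSampling2D(2)(c6), c2])
    c7 = conv_block(u2, 32)
    u1 = layers.Concatenate()([layers.UpSampling2D(2)(c7), c1])
    c8 = conv_block(u1, 32)

    # Final 1x1 convolution with sigmoid producing the optic-disc probability map
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c8)
    return Model(inputs, outputs)
```

With two convolutions per block, this arrangement also contains 19 convolutional layers (nine blocks plus the final 1×1 layer), matching the count quoted in Section 4.2.1.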

Figure 6. Modified U-net architecture

5. Results and Discussion

The proposed model is tested on the three publicly available datasets RIM-ONE [18], DRISHTI-GS1 [19, 20] and DRIONS-DB [21], all of which are used for optic disc segmentation. As already discussed, the RIM-ONE dataset consists of 159 images in two folders, glaucoma and normal; 70% of the images are taken for training and 30% for testing, and all images are provided with ground truth obtained from two ophthalmologists. The DRISHTI-GS1 dataset consists of 101 images, of which 50 are used for training and 51 for testing; ground truth obtained from four different ophthalmologists is available for all the data. The DRIONS-DB dataset consists of 110 images, of which 80% are taken for training and 20% for testing; ground truth, obtained from two ophthalmologists, is available for the optic disc alone.
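A minimal sketch of this split-and-train step is shown below, assuming the images and ground truth masks are already loaded as NumPy arrays. The use of scikit-learn, the batch size and the number of epochs are illustrative assumptions, and modified_unet refers to the architecture sketch in Section 4.2.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def train_on_dataset(images: np.ndarray, masks: np.ndarray, test_fraction: float):
    # RIM-ONE uses a 70/30 split and DRIONS-DB an 80/20 split;
    # DRISHTI-GS1 ships with a fixed 50/51 train/test division.
    x_tr, x_te, y_tr, y_te = train_test_split(
        images, masks, test_size=test_fraction, random_state=42)

    model = modified_unet()  # from the earlier architecture sketch
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x_tr, y_tr, validation_data=(x_te, y_te), batch_size=8, epochs=50)
    return model, (x_te, y_te)
```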

5.1 Performance metric

The performance of the proposed model is analysed with the Dice Coefficient (DC) and Intersection over Union (IOU). DC is a spatial overlap index that measures the similarity between two images; its value lies in [0, 1], where 0 means no overlap between the two images and 1 means complete overlap, and it cannot exceed 1. The expression for the dice coefficient is given in Eq. (3). For two images A and B, the dice coefficient is calculated from precision and recall, shown in Eqs. (1) and (2).

Precision $=\frac{t_{p}}{t_{p}+f_{p}}$                  (1)

Recall $=\frac{t_{p}}{t_{p}+f_{n}}$          (2)

where tp refers to True Positives, fp to False Positives and fn to False Negatives.

Dice coefficient $=2 * \frac{\text { precision } * \text { Recall }}{\text { Precision }+\text { Recall }}$            (3)

IOU is defined as the metric that measures the similarity between the predicted boundary and the ground truth boundary. The IOU value lies in [0, 1]: a value of 0 means there is no similarity between the two images, and a value of 1 means complete similarity. The expression for IOU is given in Eq. (4).

$\operatorname{IOU}(A$ and $B)=\frac{|A \cap B|}{|A \cup B|}$          (4)

The optic disc segmentation proposed in this paper produces a non-binary probability map, so the dice coefficient and Intersection over Union are adopted for evaluation and compared with the existing models in the literature. Recently, many researchers have also used metrics other than the dice coefficient and IOU when evaluating a non-binary feature map against a binary ground truth.
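A short sketch of how Eqs. (1)-(4) can be evaluated on a thresholded prediction against a binary ground truth mask is given below; the 0.5 binarisation threshold and the small epsilon terms added to avoid division by zero are assumptions.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray, thr: float = 0.5, eps: float = 1e-8):
    p = pred > thr            # binarise the non-binary prediction map
    g = gt > 0.5              # ground truth assumed already binary
    tp = np.logical_and(p, g).sum()
    fp = np.logical_and(p, ~g).sum()
    fn = np.logical_and(~p, g).sum()
    precision = tp / (tp + fp + eps)                            # Eq. (1)
    recall = tp / (tp + fn + eps)                               # Eq. (2)
    dice = 2 * precision * recall / (precision + recall + eps)  # Eq. (3)
    iou = tp / (tp + fp + fn + eps)                             # Eq. (4): |A∩B| / |A∪B|
    return dice, iou
```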

5.2 Experimental result

The proposed model is tested on the three publicly available datasets. The model uses the Modified U-net architecture to segment the optic disc from the fundus image. In the first step the image is resized to 256×256 and fed into the proposed model, and segmentation is performed based on the knowledge learned from the training data. After segmentation, the true positives, true negatives, false positives and false negatives are calculated to compute the performance metrics. The optic discs segmented from retinal images of the three publicly available datasets, RIM-ONE [18], DRISHTI-GS1 [19, 20] and DRIONS-DB [21], are shown in Figure 7.

Figure 7. Experimental results of the proposed model (predicted OD, ground truth OD, input image): (a) RIM-ONE sample #9, IOU 0.9362, Dice 0.9720; (b) DRISHTI-GS1 sample #29, IOU 0.9320, Dice 0.9648; (c) DRIONS-DB sample #6, IOU 0.9076, Dice 0.9582
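The per-sample scores reported in Figure 7 can be obtained with an evaluation routine such as the sketch below: each test image is resized to 256×256, passed through the trained model, and the resulting probability map is scored against the ground truth mask. The file-path arguments and the helpers model and dice_and_iou (from the earlier sketches) are assumptions.

```python
import cv2
import numpy as np

def evaluate_sample(model, image_path: str, gt_path: str):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gt = cv2.imread(gt_path, cv2.IMREAD_GRAYSCALE)

    img = cv2.resize(img, (256, 256)).astype("float32") / 255.0
    gt = (cv2.resize(gt, (256, 256)) > 127).astype(np.float32)

    pred = model.predict(img[None, ..., None])[0, ..., 0]  # optic-disc probability map
    return dice_and_iou(pred, gt)
```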

5.3 Comparative analysis

The performance of the proposed model is compared with existing models, namely adaptive thresholding and the U-net model, on the three publicly available datasets RIM-ONE [18], DRISHTI-GS1 [19, 20] and DRIONS-DB [21, 22]. The DC and IOU of the segmented optic disc obtained with the proposed model and the existing models are tabulated in Table 2. An image of size 256×256 is fed as input to the proposed Modified U-net model, with pre-processing initially performed using CLAHE. Table 2 shows that the proposed model performs better than the state of the art.

Table 2. Comparative analysis of optic disc segmentation

Technique                     RIM-ONE          DRISHTI-GS1      DRIONS-DB
                              DC      IOU      DC      IOU      DC      IOU
U-net [13]                    NA      NA       0.88    0.83     NA      NA
Adaptive Thresholding [22]    NA      NA       0.864   NA       NA      NA
Proposed model                0.94    0.88     0.94    0.88     0.85    0.75

6. Conclusion

The novelty of this research work lies in developing a deep learning model that efficiently segments the optic disc from retinal fundus images. This paper proposes automatic segmentation of the optic disc and optic cup from retinal fundus images for glaucoma detection. The input image is processed by a deep learning model named Modified U-net: the image is converted to grayscale and pre-processed using Gaussian blur in order to improve the accuracy of ROI detection, and segmentation of the optic disc and optic cup is then performed using the Modified U-net architecture, which proves to be a very efficient algorithm for medical image segmentation. The Dice Coefficient and Intersection over Union obtained with the proposed model are compared with those of the U-net and adaptive thresholding models, and the results show that the proposed model outperforms the existing methods in segmenting the optic disc for glaucoma screening. The proposed algorithm is tested on three publicly available datasets: RIM-ONE, DRISHTI-GS1 and DRIONS-DB. In future work, the proposed model will be applied to segment the cup from the optic disc.

References

[1] Ahmad, A., Ahmad, S.Z., Khalique, N., Ashraf, M., Alvi, Y. (2020). Prevalence and associated risk factors of glaucoma in Aligarh, India–A population based study. The Official Scientific Journal of Delhi Ophthalmological Society, 31(1): 36-40. http://dx.doi.org/10.7869/djo.565

[2] Lowell, J., Hunter, A., Steel, D., Basu, A., Ryder, R., Fletcher, E., Kennedy, L. (2004). Optic nerve head segmentation. IEEE Transactions on Medical Imaging, 23(2): 256-264. https://doi.org/10.1109/TMI.2003.823261

[3] Song, X., Song, K., Chen, Y. (2006). A computer-based diagnosis system for early glaucoma screening. In 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, pp. 6608-6611. https://doi.org/10.1109/IEMBS.2005.1616016

[4] Kavitha, S., Karthikeyan, S., Duraiswamy, K. (2010). Early detection of glaucoma in retinal images using cup to disc ratio. In 2010 Second International conference on Computing, Communication and Networking Technologies, Karur, India, pp. 1-5. https://doi.org/10.1109/ICCCNT.2010.5591859

[5] Geethalakshmi, R., Vani, R., Cruz, M.V. (2021). A study of glaucoma diagnosis using brain computer interface. In: Kumar A., Zurada J.M., Gunjan V.K., Balasubramanian R. (eds) Computational Intelligence in Machine Learning. Lecture Notes in Electrical Engineering, vol 834. Springer, Singapore. https://doi.org/10.1007/978-981-16-8484-5_25

[6] Chandrika, S., Nirmala, K. (2013). Analysis of CDR detection for glaucoma diagnosis. International Journal of Engineering Research and Application, 2(4): 23-27.

[7] Fernandez-Granero, M.A., Sarmiento, A., Sanchez-Morillo, D., Jiménez, S., Alemany, P., Fondón, I. (2017). Automatic CDR estimation for early glaucoma diagnosis. Journal of Healthcare Engineering, 2017: 5953621. https://doi.org/10.1155/2017/5953621

[8] Raj, D.D., Singerji, A., Titus, A. (2017). CDR in glaucoma detection using dissimilarity constraints coding. International Journal of Engineering Research & Technology (IJERT), 6(5): 7936-800.

[9] Liu, S., Hong, J., Lu, X., Jia, X., Lin, Z., Zhou, Y., Liu, Y., Zhang, H. (2019). Joint optic disc and cup segmentation using semi-supervised conditional GANs. Computers in Biology and Medicine, 115: 103485. https://doi.org/10.1016/j.compbiomed.2019.103485

[10] Mvoulana, A., Kachouri, R., Akil, M. (2019). Fully automated method for glaucoma screening using robust optic nerve head detection and unsupervised segmentation based cup-to-disc ratio computation in retinal fundus images. Computerized Medical Imaging and Graphics, 77: 101643. https://doi.org/10.1016/j.compmedimag.2019.101643

[11] Zhao, R., Chen, X., Liu, X., Chen, Z., Guo, F., Li, S. (2019). Direct cup-to-disc ratio estimation for glaucoma screening via semi-supervised learning. IEEE Journal of Biomedical and Health Informatics, 24(4): 1104-1113. https://doi.org/10.1109/JBHI.2019.2934477

[12] Jiang, Y., Wang, F., Gao, J., Cao, S. (2020). Multi-path recurrent U-Net segmentation of retinal fundus image. Applied Sciences, 10(11): 3777. https://doi.org/10.3390/app10113777

[13] Prastyo, P.H., Sumi, A.S., Nuraini, A. (2020). Optic cup segmentation using u-net architecture on retinal fundus image. JITCE (Journal of Information Technology and Computer Engineering), 4(02): 105-109. https://doi.org/10.25077/jitce.4.02.105-109.2020

[14] Božić-Štulić, D., Braović, M., Stipaničev, D. (2020). Deep learning based approach for optic disc and optic cup semantic segmentation for glaucoma analysis in retinal fundus images. International Journal of Electrical and Computer Engineering Systems, 11(2): 111-120. https://doi.org/10.32985/ijeces.11.2.6

[15] Veena, H.N., Muruganandham, A., Kumaran, T.S. (2021). A novel optic disc and optic cup segmentation technique to diagnose glaucoma using deep learning convolutional neural network over retinal fundus images. Journal of King Saud University-Computer and Information Sciences. https://doi.org/10.1016/j.jksuci.2021.02.003

[16] Liu, B., Pan, D., Song, H. (2021). Joint optic disc and cup segmentation based on densely connected depthwise separable convolution deep network. BMC Medical Imaging, 21(1): 14. https://doi.org/10.1186/s12880-020-00528-6

[17] Mangipudi, P.S., Pandey, H.M., Choudhary, A. (2021). Improved optic disc and cup segmentation in Glaucomatic images using deep learning architecture. Multimedia Tools and Applications, 80(20): 30143-30163. https://doi.org/10.1007/s11042-020-10430-6

[18] Fumero, F., Alayón, S., Sanchez, J.L., Sigut, J., Gonzalez-Hernandez, M. (2011). RIM-ONE: An open retinal image database for optic nerve evaluation. In 2011 24th International Symposium on Computer-Based Medical Systems (CBMS), Bristol, UK, pp. 1-6. https://doi.org/10.1109/CBMS.2011.5999143

[19] Sivaswamy, J., Krishnadas, S., Chakravarty, A., Joshi, G., Tabish, A.S. (2015). A comprehensive retinal image dataset for the assessment of glaucoma from the optic nerve head analysis. JSM Biomedical Imaging Data Papers, 2(1): 1004. 

[20] Sivaswamy, J., Krishnadas, S.R., Joshi, G.D., Jain, M., Tabish, A.U.S. (2014). Drishti-GS: Retinal image dataset for optic nerve head (ONH) segmentation. In 2014 IEEE 11th international symposium on biomedical imaging (ISBI), Beijing, China, pp. 53-56. https://doi.org/10.1109/ISBI.2014.6867807

[21] Carmona, E.J., Rincón, M., García-Feijoó, J., Martínez-de-la-Casa, J.M. (2008). Identification of the optic nerve head with genetic algorithms. Artificial Intelligence in Medicine, 43(3): 243-259. https://doi.org/10.1016/j.artmed.2008.04.005

[22] Issac, A., Parthasarthi, M., Dutta, M.K. (2015). An adaptive threshold based algorithm for optic disc and cup segmentation in fundus images. In 2015 2nd international conference on signal processing and integrated networks (SPIN), Noida, India, pp. 143-147. https://doi.org/10.1109/SPIN.2015.7095384