Automatic Detection of Exudates in Retinal Image Using Statistical Techniques

Vijay Khare

Electronics and Communication Department, Jaypee Institute of Information Technology, Noida 201309, India

Corresponding Author Email: vijay.khare@jiit.ac.in
Page: 255-261
DOI: https://doi.org/10.18280/isi.280130
Received: 16 November 2022 | Revised: 1 February 2023 | Accepted: 10 February 2023 | Available online: 28 February 2023

Abstract: 

Diabetic retinopathy is an eye disease widely seen in diabetic patients. It is one of the leading causes of vision loss or blindness, and exudates are one of its characteristic symptoms. The objective of this paper is to develop an algorithm for exudate detection in poorly contrasted, low-quality images. To identify the exudates, a pre-processing operation is first performed on the retinal images to restore their colour information using histogram specification. A new method, named the gray level pixel count matrix (GLPCM), is proposed to obtain the textural features of each pixel. The GLPCM method is compared with existing statistical techniques, namely the gray level co-occurrence matrix (GLCM) and the gray level run length matrix (GLRLM). Classification is performed using a BPNN classifier. The results of the proposed feature extraction technique have been validated against the ground truth provided with the datasets, achieving a specificity of 98.9% and a sensitivity of 90.6% on the DIARETDB0, DIARETDB1 and DRIVE datasets. This study also compares different segmentation techniques on the same datasets.

Keywords: 

diabetic retinopathy, optic disc, GLCM, GLPCM, GLRLM, feature extraction, BPNN classifier

1. Introduction

Nowadays, digital image processing techniques such as those built around DICOM (Digital Imaging and Communications in Medicine), which include segmentation, enhancement and texture analysis, are used in the medical field to analyse 3D image datasets of the human body obtained from MRI, CT scanners and ultrasound. In ophthalmology, such techniques are used to screen for glaucoma [1], macular edema [2] and diabetic retinopathy [3]. Diabetic retinopathy (DR) is a retinal disease caused by diabetes, which affects the blood vessels and causes abnormalities in the retina. According to the Diabetes Atlas [4, 5], millions of people across the world suffer from diabetes and are at risk of DR. Screening such a large population requires a large number of skilled ophthalmologists. Automatic DR detection is therefore considered important to assist ophthalmologists.

DR is characterised by exudates, microaneurysms and haemorrhages as its symptoms. Exudate detection in retinal images is considered challenging because of the presence of similar regions in the image (i.e., the optic disc (OD), lighting effects and drusen), which share properties of exudates such as their bright appearance. Moreover, due to non-uniform illumination, background pixels may appear bright and exudates dark. We therefore first propose a generalized method to restore the colour information of the retinal image. Many techniques for exudate detection have been reported in the literature.

In the research [6], exudate detection is carried out using a combination of region growing and edge detection; it fails to detect smaller exudates in low-quality images. In the research [7], fuzzy C-means clustering is used to segment the image into clusters, which are then classified with a neural network classifier; in this approach a whole cluster block is segmented as exudate or not, which yields low accuracy. In the research [8], fuzzy C-means clustering is used to obtain clusters in the retinal image, followed by morphological operations; to apply clustering to low-quality images, pre-processing techniques such as contrast enhancement and colour normalization are used. The main difficulty with clustering techniques is determining the number of clusters needed for exudate detection. In the research [9], Contrast Limited Adaptive Histogram Equalization (CLAHE) is used to restore the colour information of the image, followed by K-means clustering to segment the exudate cluster. In the research [10], a pixel-based approach classifies pixels using GLCM textural features and a KNN classifier; it is the only work in the literature that takes a pixel-based approach to classify an image as normal or DR-affected. Overall, clustering techniques are widely used for exudate detection, while pixel-based classification remains less explored and can lead to higher accuracy.

From the above study, it is clear that no major work in the literature applies different statistical feature extraction techniques such as GLCM and GLRLM to low-quality retinal images. We have therefore implemented different clustering and feature extraction techniques on the same datasets in order to compare them. The rest of the paper is structured as follows: the method is described next, followed by the results and discussion, and the conclusions.

2. Method

The proposed method of exudate detection is divided into three steps:

  • Pre-processing

  • Optic disc detection

  • Detection of true exudates

Figure 1 shows the flow chart of the proposed method: the input retinal image is first pre-processed, the optic disc is then detected and excluded, and texture features computed for the remaining candidate pixels are passed to a BPNN classifier, which separates true from false exudates.

Figure 1. Flow chart of proposed method

2.1 Pre-processing

The images available in the datasets [11, 12] vary in retinal colour, suffer from non-uniform illumination and have poor contrast. To deal with these challenges, a few pre-processing steps are essential. In our algorithm, pre-processing is divided into three steps, as shown in Figure 2 (a-f).

First, we remove the non-illuminated part at the periphery of the retina. We take the red channel of the retinal image and threshold it with a threshold value [13] of typically 35 to generate a binary mask. A morphological dilation is then performed on this mask using a disc-shaped structuring element with a diameter of 15 pixels, and the retinal image is ANDed with the resulting mask to eliminate the periphery pixels. Second, to overcome the wide variability in the colour of retinal images, a histogram specification operation is used: the histogram of the test image is reshaped to match the histogram of a selected reference image, which yields images with similar intensity distributions.
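A minimal sketch of these two steps in Python, assuming scikit-image as the tooling (the file names are placeholders; the threshold of 35 and the 15-pixel disc follow the text):

```python
import numpy as np
from skimage import io, morphology, exposure

# Placeholder file names: a test image and a reference image for colour restoration
test = io.imread('test_retina.png')
ref = io.imread('reference_retina.png')

# Step 1: binary mask from the red channel (threshold 35), dilated with a
# disc-shaped structuring element of 15-pixel diameter (disk(7) -> 15x15)
mask = test[:, :, 0] > 35
mask = morphology.binary_dilation(mask, morphology.disk(7))

# AND the retinal image with the mask to suppress the dark periphery
masked = test * mask[:, :, np.newaxis]

# Step 2: histogram specification - reshape the test image's histogram
# to match that of the reference image, channel by channel
specified = exposure.match_histograms(masked, ref, channel_axis=-1)
```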

Figure 2. (a) Original retinal image; (b) Reference image for colour restoration; (c) Mask obtained with eliminated periphery pixels; (d) Histogram of reference image; (e) Histogram-specified image; (f) Processed image

Finally, to enhance the contrast of the retinal image without affecting its colour information, the CIELAB colour model is used. The CIELAB model decouples the intensity component from the colour information (i.e., RGB is converted into 'L', 'A' and 'B' channels), where the L matrix contains the intensity information and the A and B matrices contain the colour information. Histogram equalization is then performed on the intensity matrix 'L' while keeping the colour matrices 'A' and 'B' unchanged. The test retinal image is then converted back into RGB and used for exudate detection.
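A sketch of this contrast-enhancement step, again assuming scikit-image (in its CIELAB representation L spans roughly 0-100, so the equalized channel, returned in [0, 1], is rescaled):

```python
from skimage import color, exposure

def enhance_contrast_lab(rgb):
    """Equalize only the lightness channel in CIELAB, keeping A and B intact."""
    lab = color.rgb2lab(rgb)
    lab[..., 0] = exposure.equalize_hist(lab[..., 0]) * 100.0
    return color.lab2rgb(lab)   # back to RGB for exudate detection
```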

As seen, the histogram of the blue channel is narrow compared to those of the red and green channels. Thus, the green and red channel information is used for exudate detection.

2.2 Count-labelling approach

In the count-labelling operation, a binary image corresponding to the high-intensity pixels of the retinal image is obtained by thresholding its green channel with a threshold value 'T'. Due to non-uniform illumination and lighting effects, 'T' cannot be a fixed value. The problem of selecting 'T' is solved with the Otsu algorithm [14] by computing a local threshold in each block of size 15×15.
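A sketch of the block-wise thresholding, assuming scikit-image's Otsu routine (blocks containing a single gray level are skipped, since Otsu is undefined there):

```python
import numpy as np
from skimage.filters import threshold_otsu

def count_label(green, block=15):
    """Binary map of bright pixels via a local Otsu threshold per 15x15 block."""
    binary = np.zeros(green.shape, dtype=bool)
    for r in range(0, green.shape[0], block):
        for c in range(0, green.shape[1], block):
            patch = green[r:r + block, c:c + block]
            if patch.min() == patch.max():   # flat block: no threshold exists
                continue
            binary[r:r + block, c:c + block] = patch > threshold_otsu(patch)
    return binary
```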

2.3 Optic disc detection

Optic disc detection is vital for exudate detection, since exudates share properties with the optic disc in terms of colour, brightness and contrast, as seen in Figure 2 (f). The optic disc is a unique region in the retinal image: it contains bright pixels and the blood vessels converge at it. The optic disc can therefore be localized by moving a template of size 80×80 pixels, with the optic disc at its centre, over every pixel location in the test image; the location with maximum correlation with the template is taken as the optic disc location. A template of size 80×80 is used to reduce the chance of exudates or noise being selected as the optic disc. However, matching 6400 template features at each pixel of the test image is computationally expensive. To reduce the computational cost, a histogram matching approach based on KL divergence [15] is used, in which only 512 histogram features are matched (256 histogram bins for red and 256 for green). Furthermore, instead of matching pixel features at every pixel location, only the high-intensity pixels labelled above are considered for OD detection.

Algorithm for OD localization:

  1. Input the retinal image and choose its red and green channels for OD detection.

  2. Select template images of size 80×80 pixels from the training retinal images (typically 3 reference images), each with the OD at its centre, and average them to obtain a single template image. Obtain its histogram representation for the red and green channels.

  3. From the test retinal image, a window of size 80×80 is chosen around each of the high-intensity pixel locations labelled above, and its red and green channel histograms are computed, as shown in Figure 3 (a). The histogram of the test window is matched with the histogram of the reference template using the KL divergence approach.

  4. In the KL divergence method, the distance between the reference and test histograms is computed as follows.

Let M and N be the probability distributions of pixel occurrences in the test window and the reference template window, respectively (computed separately for the red and green channels).

$D_{K L}(M / N)_{\text {red }}=\sum_{i=1}^{256} M(i) \log \left(\frac{M(i)}{N(i)}\right)$

$D_{K L}(M / N)_{\text {green }}=\sum_{i=1}^{256} M(i) \log \left(\frac{M(i)}{N(i)}\right)$

$D_{\text {avg }}=\frac{\left|D_{K L}(M / N)_{r e d}\right|+\left|D_{K L}(M / N)_{\text {green }}\right|}{2}$

KL divergence is a measure of dissimilarity between two distributions; thus, the minimum value of $D_{avg}$ corresponds to maximum correlation and is taken as the OD location.
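A sketch of this scoring under the definitions above (the 256-bin histograms follow the text; the small epsilon guarding log-of-zero is an added assumption):

```python
import numpy as np

def kl_divergence(m, n, eps=1e-12):
    """D_KL(M || N) between two histograms, normalised to distributions."""
    m = m / m.sum()
    n = n / n.sum()
    return np.sum(m * np.log((m + eps) / (n + eps)))

def d_avg(window, template):
    """Average of the absolute red- and green-channel KL distances."""
    dists = []
    for ch in (0, 1):   # channel 0 = red, 1 = green
        hm, _ = np.histogram(window[..., ch], bins=256, range=(0, 256))
        hn, _ = np.histogram(template[..., ch], bins=256, range=(0, 256))
        dists.append(abs(kl_divergence(hm.astype(float), hn.astype(float))))
    return 0.5 * (dists[0] + dists[1])

# The high-intensity candidate whose 80x80 window minimises d_avg is the OD.
```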

Figure 3. (a) Green channel of test retinal image with localized OD; (b) OD-excluded retinal image

The OD point obtained is used as a seed point, and the surrounding maximum-intensity points are counted. The maximum distance from the OD centre to such a bright pixel is taken as the radius. Once the centre and radius are obtained, a circle of zero pixel intensity is drawn over the OD, as shown in Figure 3(b).

2.4 Detection of exudates

Exudate detection is divided into two steps:

  1. Feature extraction.

  2. Classification.

2.4.1 Feature extraction

In this paper, for exudate detection, we consider different existing textural feature extraction techniques based on spatial information and compare them with the proposed one.

(1) GLCM matrix method:

In GLCM [16], the textural features of an image are obtained by calculating how often pixel pairs with a specific spatial relationship (i.e., distance and angle) occur in a sub-region. The conditional probability density function $P(i, j / d, \theta)$ is estimated to form the GLCM matrix $\phi(d, \theta)$ as: $\phi_{GLCM}(d, \theta)=[P(i, j / d, \theta)], \quad 0<i, j \leq N_g$

where d is the inter-sample distance, $\theta$ is the orientation between the two pixels and $N_g$ is the number of gray levels in the sub-region. Four matrices with orientations 0°, 45°, 90° and 135° are used to represent the spatial relationships in all directions. To illustrate the construction, a sample image with eight gray levels is considered (the sub-image shown in Table 2), and the GLCM matrix obtained with neighbouring distance d=1 and $\theta$=0° is given in Table 1:

Table 1. GLCM matrix obtained for orientation 0° and neighbouring distance d=1

      1  2  3  4  5  6  7  8
 1    1  0  0  0  0  0  0  0
 2    0  0  0  0  0  0  0  0
 3    0  0  0  0  0  0  0  0
 4    0  0  0  1  1  0  0  0
 5    0  0  0  1  2  1  2  0
 6    0  0  0  0  0  0  0  1
 7    1  0  0  0  0  0  1  0
 8    0  0  0  0  1  0  0  0

A limitation of GLCM is that the output matrix is always a square matrix of size M×M, where M is the number of gray levels in the sub-matrix (Table 2), with entries corresponding to the frequencies of co-occurrence of pixel pairs; its size does not depend on the size of the input matrix. Since GLCM considers the relationship between only two pixels, it provides second-order textural features. Finer information about exudates can be obtained using higher-order feature extraction methods such as GLRLM.

Table 2. Sub image matrix

1  1  5  6  8
5  5  5  7  1
4  5  7  7  2
8  5  4  4  3
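For a concrete check, the following sketch computes the 0° co-occurrence counts of the Table 2 sub-image with scikit-image (an assumed tooling choice, not the authors' code; graycomatrix indexes gray levels from 0, so levels=9 lets indices 1..8 stand for the sample's gray levels):

```python
import numpy as np
from skimage.feature import graycomatrix

# The 4x5 sample sub-image of Table 2, gray levels 1..8
sub = np.array([[1, 1, 5, 6, 8],
                [5, 5, 5, 7, 1],
                [4, 5, 7, 7, 2],
                [8, 5, 4, 4, 3]], dtype=np.uint8)

# Co-occurrence counts for inter-sample distance d=1 and angle 0 degrees
glcm = graycomatrix(sub, distances=[1], angles=[0], levels=9)
print(glcm[1:, 1:, 0, 0])   # 8x8 count matrix, rows/columns = gray levels 1..8
```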

(2) Gray level run length matrix (GLRLM) method:

The GLRLM [17] method is based on computing the run lengths of gray levels, i.e., the number of gray-level runs of various lengths that exist in the sub-image. It counts how many times a pixel value is consecutively repeated, and for how long. The GLRLM matrix (Table 3) is defined as:

$\phi_{GLRLM}(\theta)=[P(i, j / \theta)], \quad 0<i \leq N_g \text { and } 0<j \leq R_{\max }$

where the element $P(i, j / \theta)$ is the number of runs of gray level 'i' with length 'j' (gray level 'i' repeated 'j' times consecutively) in the direction $\theta$, $N_g$ is the number of gray levels and $R_{max}$ is the number of columns in the sub-image.

Table 3. GLRLM matrix obtained for orientation 0°

      1  2  3  4  5
 1    1  1  0  0  0
 2    1  0  0  0  0
 3    1  0  0  0  0
 4    1  1  0  0  0
 5    3  0  1  0  0
 6    1  0  0  0  0
 7    1  1  0  0  0
 8    2  0  0  0  0

In the GLRLM method, consecutive runs of a gray level are counted within the same row (e.g., P(5,3) indicates that intensity 5 repeats three times, '555', and P(7,2) represents '77'). It therefore provides higher-order textural features compared to GLCM, which considers the spatial relation of at most two pixels. In the GLRLM matrix, the rows represent the gray levels and the columns represent the run lengths (up to the number of columns in the sub-matrix), so its size depends on the input matrix size. However, computing the run length of each gray level requires traversing the whole sub-matrix, so GLRLM provides higher-order features (considering more than two pixels) at a high computational cost.
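A minimal sketch of the 0° run-length computation (no library routine is assumed; gray levels are 1-based as in the example above):

```python
import numpy as np

def glrlm_0deg(sub, n_levels):
    """Run-length matrix at 0 degrees: rows = gray levels, columns = run lengths."""
    rlm = np.zeros((n_levels, sub.shape[1]), dtype=int)
    for row in sub:
        value, length = row[0], 1
        for v in row[1:]:
            if v == value:
                length += 1                        # the current run continues
            else:
                rlm[value - 1, length - 1] += 1    # close the finished run
                value, length = v, 1
        rlm[value - 1, length - 1] += 1            # close the last run in the row
    return rlm

# For the Table 2 sub-image this reproduces Table 3, e.g. rlm[4, 0] == 3
# (gray level 5 occurs three times as a run of length 1).
```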

(3) Proposed matrix (GLPCM) method:

In the proposed method, we introduce a higher-order feature extraction method with lower computational cost. The idea is to consider the spatial relation among the elements of a single row of the sub-matrix at a time, by counting the occurrences of each gray level in that row. The relation among the pixels of a complete row is thus captured by traversing the row only once, and the resulting matrix is named the gray level pixel count matrix (GLPCM). The proposed matrix is computed as:

$\phi(\theta)=[P(i, j / \theta)], \quad 0<i \leq N_g \text { and } 0<j \leq R_{\max }$

where $P(i, j / \theta)$ is the number of occurrences of gray level 'i' in row 'j' of the sub-image along the direction $\theta$, as shown in Table 4.

Table 4. Proposed matrix $\phi\left(0^{\circ}\right)$ obtained for orientation 0°

      1  2  3  4  5  6  7  8
 1    2  0  0  0  1  1  0  1
 2    1  0  0  0  3  0  1  0
 3    0  1  0  1  1  0  2  0
 4    0  0  1  2  1  0  0  1

The number of rows in the proposed matrix equals the number of rows in the sub-matrix under consideration, and the number of columns equals the number of gray levels in the sub-matrix. Similarly, matrices Ø2, Ø3 and Ø4 can be obtained for the orientations 45° (along the left diagonal), 90° (vertical direction) and 135° (along the right diagonal), as shown in Table 5. The dimension of the proposed matrix is not necessarily smaller than that of the GLRLM matrix in every case, but the proposed method is computationally cheaper, since it counts the gray levels of one row at a time instead of traversing the whole sub-matrix. The obtained matrix can also be verified by checking that the sum of each row is constant and equal to the number of elements in the corresponding row (or column) of the sub-matrix. A sketch of the construction follows Table 5.

Table 5. Proposed matrix $\phi\left(90^{\circ}\right)$ obtained for orientation 90°

      1  2  3  4  5  6  7  8
 1    1  0  0  1  1  0  0  1
 2    1  0  0  0  3  0  0  0
 3    0  0  0  1  2  0  0  1
 4    0  0  0  1  0  1  2  0
 5    1  2  3  0  0  0  0  1
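A minimal sketch of the proposed construction for the 0° and 90° cases (the 45° and 135° diagonal traversals are omitted for brevity):

```python
import numpy as np

def glpcm(sub, n_levels, theta=0):
    """Gray level pixel count matrix: one output row per traversal line,
    one column per gray level (sketch for theta = 0 or 90 degrees)."""
    lines = sub if theta == 0 else sub.T   # 90 degrees: count down the columns
    out = np.zeros((lines.shape[0], n_levels), dtype=int)
    for r, line in enumerate(lines):
        for v in line:
            out[r, v - 1] += 1             # count occurrences of each gray level
    return out

# Each output row sums to the number of pixels on its traversal line,
# which is the verification property described above.
```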

In this paper we obtain 6 textural features for each of the matrices ($\phi 1, \phi 2, \phi 3$ and $\phi 4$): contrast, entropy, energy, homogeneity, correlation and dissimilarity, as stated in [16]. Let $\phi(i, j)$ be the count of pixel value 'i' in the jth row of the sub-matrix, located at position (i, j) of the $\phi$ matrix; let m be the number of rows, n the number of columns and N the total number of elements in the $\phi$ matrix. Then:

Contrast: $\sum_{i=0}^{m-1} \sum_{j=0}^{n-1}|i-j|^2 \varphi(i, j)$

Energy: $\sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \varphi(i, j)^2$

Homogeneity: $\sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \frac{\varphi(i, j)}{1+(i-j)^2}$

Entropy: $\sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \varphi(i, j) \log (\varphi(i, j))$

Correlation: $\sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \frac{(i \cdot j) \varphi(i, j)-\mu_x \mu_y}{\sigma_x \sigma_y}$

Dissimilarity: $\sum_{i=0}^{m-1} \sum_{j=0}^{n-1}|i-j| \varphi(i, j)$

where,

$\mu_x=\frac{1}{N} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} i \varphi(i, j), \quad \mu_y=\frac{1}{N} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} j \varphi(i, j)$

$\sigma_x=\sum_{i=0}^{m-1} \sum_{j=0}^{n-1}\left(i-\mu_x\right)^2 \varphi(i, j), \quad \sigma_y=\sum_{i=0}^{m-1} \sum_{j=0}^{n-1}\left(j-\mu_y\right)^2 \varphi(i, j)$

Considering 6 features for each of the four matrices ($\phi 1, \phi 2, \phi 3$ and $\phi 4$), we use 24 features for classification.
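A sketch computing the six features from one φ matrix, following the formulas above (the square root and 1/N normalisation in σ, and skipping zero entries inside the logarithm, are added conventions where the text's notation is terse):

```python
import numpy as np

def texture_features(phi):
    """Contrast, energy, homogeneity, entropy, correlation, dissimilarity."""
    phi = phi.astype(float)
    n_total = phi.sum()
    i, j = np.indices(phi.shape)     # row and column index grids
    mu_x = (i * phi).sum() / n_total
    mu_y = (j * phi).sum() / n_total
    sigma_x = np.sqrt((((i - mu_x) ** 2) * phi).sum() / n_total)
    sigma_y = np.sqrt((((j - mu_y) ** 2) * phi).sum() / n_total)
    nz = phi[phi > 0]                # skip zero entries inside the log
    return {
        'contrast':      ((np.abs(i - j) ** 2) * phi).sum(),
        'energy':        (phi ** 2).sum(),
        'homogeneity':   (phi / (1.0 + (i - j) ** 2)).sum(),
        'entropy':       (nz * np.log(nz)).sum(),
        'correlation':   (((i * j) * phi).sum() - mu_x * mu_y)
                         / (sigma_x * sigma_y),
        'dissimilarity': (np.abs(i - j) * phi).sum(),
    }
```

Concatenating the six features of $\phi 1 \ldots \phi 4$ yields the 24-element vector fed to the classifier.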

2.4.2 Classification

BPNN is a multi-layer feed-forward neural network trained with a supervised learning algorithm. In this study a two-layer neural network is used: an input layer, a hidden layer of 10 neurons, and an output layer comprising a single neuron with a tansig activation function. The following training parameters are used: epochs = 1000, minimum gradient = 1e-7, maximum validation failures = 6, learning rate = 0.01. The network is first trained on data for which the input and the desired output are known; the algorithm adjusts the weights via error back-propagation so that the mean square error (MSE) is minimized.
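The network itself is trained in MATLAB; as a rough scikit-learn analogue (an assumption for illustration, not the authors' code), the same architecture and hyperparameters look like this:

```python
from sklearn.neural_network import MLPClassifier

# 24 texture features in, one hidden layer of 10 neurons, tanh activation
# (standing in for MATLAB's tansig), trained by gradient-descent backprop
clf = MLPClassifier(hidden_layer_sizes=(10,), activation='tanh',
                    solver='sgd', learning_rate_init=0.01,
                    max_iter=1000, tol=1e-7, random_state=0)

# X_train: N x 24 feature matrix; y_train: 1 = true exudate, 0 = false
# clf.fit(X_train, y_train); clf.predict(X_test)
```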

Training phase: BPNN is a supervised learning technique and its accuracy depends on how well it is trained. For training, 20 images are randomly selected, comprising 15 pathological retinal images and 5 retinal images of healthy patients. From these 20 images we manually selected 1000 pixels and consider a template of size 40×40 around each. These templates cover exudate regions, the optic disc region, blood vessels, normal background and other pathologies, and are used to obtain the textural features that train the classifier. Figure 4 shows the green channel information of different template images.

Figure 4. (a) Optic disc region; (b) Blood vessel; (c) Exudates; (d) Normal background

In this study, early stopping is used as a form of regularization while training the model with an iterative method, since the neural network learns exclusively by gradient descent. Once the training phase is complete and the optimized weights are obtained, the same weights are used to classify true and false exudates in every test image. The classifier training time therefore does not affect the overall classification algorithm.


3. Result and Discussion

To validate our approach, three datasets [11, 12, 18] with different fields of view, different sizes and non-uniform illumination are used, as shown in Table 6.

Table 6. Summary of datasets used

Dataset     Size       Total images  Pathological images  Field of view
DRIVE       768×584    40            7                    45°
DIARETDB0   1500×1152  130           110                  50°
DIARETDB1   1500×1152  89            84                   50°

The proposed method is implemented in MATLAB 2013a on a system with an Intel Core i3 2.2 GHz processor and 4 GB RAM. The test retinal image is first pre-processed to restore its colour information, and a thresholding operation is applied to its green channel to obtain a binary image marking the locations of bright pixels. These bright pixels correspond to exudates and lighting effects, since the OD pixels have been removed by replacing them with a dark-intensity circle. A comparison of OD detection algorithms is shown in Table 7.

Table 7. Comparison of OD detection algorithms [19, 20]

Algorithm          Dataset used  No. of images  Correctly detected OD  Success rate (%)
Sinha et al. [19]  DRIVE         40             34                     85
                   DIARETDB1     89             88                     98.8
Park et al. [20]   DRIVE         40             36                     90
Proposed method    DRIVE         40             38                     95
                   DIARETDB0     130            127                    97.69
                   DIARETDB1     89             89                     100

Figure 5. True exudates marked in test image using BPNN classifier

The bright pixels remaining in the image are then classified as true exudates or false exudates (noise due to lighting effects). To separate them, 24 textural features are obtained with the proposed approach, using a window of size 40×40 around each pixel; these features are passed to the BPNN classifier.

The performance of the proposed algorithm is evaluated using the following parameters:

Sensitivity: the ratio of the number of pixels correctly detected as exudates to the total number of exudate pixels.

Sensitivity = TP/(TP + FN) × 100

Specificity: the ratio of the number of non-exudate pixels correctly detected to the total number of non-exudate pixels.

Specificity = TN/(TN + FP) × 100

And, Accuracy = (TP + TN)/(TP + TN + FP + FN) × 100.

where TP is true positive (exudate pixels marked correctly), TN is true negative (non-exudate pixels marked correctly), FP is false positive (non-exudate pixels marked as exudates) and FN is false negative (exudate pixels marked as non-exudates). Table 8 shows the results for nine pathological images chosen from the DIARETDB0 dataset.
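A small sketch of these pixel-level metrics, cross-checked against the Image1 row of Table 8:

```python
def metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy in percent."""
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    accuracy = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Image1 from Table 8: TP=3844, TN=255300, FP=2350, FN=650
print(metrics(3844, 255300, 2350, 650))   # ~(85.54, 99.09, 98.86)
```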

Table 8. Performance measurement of different parameters on diseased retinal images from the DIARETDB0 dataset

Image   Total pixels  TP     TN      FP    FN    Sensitivity (%)  Specificity (%)  Accuracy (%)
Image1  262144        3844   255300  2350  650   85.54            99.09            98.86
Image2  262144        6430   252192  2340  1182  84.47            99.08            98.66
Image3  262144        7343   252784  1342  675   91.58            99.47            99.23
Image4  262144        16682  241622  3024  816   95.34            98.76            98.54
Image5  262144        15632  243664  1169  1679  90.30            99.52            98.91
Image6  262144        8799   250870  1739  736   92.28            99.31            99.06
Image7  262144        13145  240450  7435  1114  92.19            97.00            96.74
Image8  262144        8679   250916  2114  435   95.23            99.16            99.03
Image9  262144        4635   253811  3133  565   89.13            98.78            98.59

The sensitivity, specificity and accuracy of the GLPCM method are 90.67%, 98.9% and 98.62%, respectively. We have also compared different segmentation techniques on the same datasets, as shown in Table 9.

Table 9. Accuracy of different segmentation methods

S. No  Segmentation method                Accuracy
1      Fuzzy C-means clustering           94%
2      K-means clustering                 89%
3      GLCM + BPNN classifier             95.8%
4      GLRLM + BPNN classifier            96.4%
5      Proposed method + BPNN classifier  98.6%

4. Conclusions

In this work, a new feature extraction method, the gray level pixel count matrix (GLPCM), is proposed for the classification and segmentation of exudates in low-quality retinal images using a BPNN classifier. The existing GLCM method yields only second-order features, and GLRLM is a computationally expensive approach. The proposed feature extraction matrix obtains higher-order features by counting pixel occurrences one row at a time, so it needs less computation. Furthermore, the size of the GLPCM feature matrix depends on the size of the input matrix, whereas GLCM always produces a square matrix whose size equals the number of gray levels in the image.

The goal of this work was to present a new feature extraction matrix method and compare it with existing segmentation methods, namely fuzzy C-means, K-means clustering, GLCM and GLRLM, all with BPNN classification. All four approaches produced good results, but the GLPCM feature extraction approach outperforms the existing segmentation approaches. It is therefore concluded that the GLPCM matrix method with a BPNN classifier can be used for pixel-based exudate detection, as shown in Figure 5.

  References

[1] Haleem, M.S., Han, L.X., van Hemert, J., Li, B.H. (2013). Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: A review. Computerized Medical Imaging and Graphics, 37(7-8): 581-596. https://doi.org/10.1016/j.compmedimag.2013.09.005

[2] Deepak, K.S., Sivaswamy, J. (2012). Automatic assessment of macular edema from color retinal images. IEEE Transactions on Medical Imaging, 31(3): 766-776. https://doi.org/10.1109/TMI.2011.2178856

[3] Ahmad, A., Mansoor, A.B., Mumtaz, R., Khan, M., Mirza, S.H. (2014). Image processing and classification in diabetic retinopathy: A review. 2014 5th European Workshop on Visual Information Processing (EUVIP), pp. 1-6. http://dx.doi.org/10.1109/EUVIP.2014.7018362

[4] Aguiree, F., Brown, A.D., Cho, N.H., Dahlquist, G., Dodd, S., Dunning, T., Hirst, M., Hwang, C.K., Magliano, D., Patterson, C., Scott, C., Shaw, J., Soltesz, G., Usher-Smith, J., Whiting, D. (2013). IDF diabetes atlas: Sixth edition. International Diabetes Federation.

[5] Wild, S., Roglic, G., Green, A., Sicree, R., King, H. (2004). Global prevalence of diabetes: Estimates for the year 2000 and projections for 2030. Diabetes Care, 27(5): 1047-1053. http://dx.doi.org/10.2337/diacare.27.5.1047

[6] Nyomba, B., Berard, L., Murphy, L. (2004). Facilitating access to glucometer reagents increases blood glucose self-monitoring frequency and improves glycaemic control: A prospective study in insulin-treated diabetic patients. Diabetic Medicine, 21(2): 129-135. http://dx.doi.org/10.1046/j.1464-5491.2003.01070.x

[7] Osareh, A., Shadgar, B., Markham, R. (2009). A computational-intelligence-based approach for detection of exudates in diabetic retinopathy images. IEEE Transactions on Information Technology in Biomedicine, 13(4): 535-545. https://doi.org/10.1109/TITB.2008.2007493

[8] Wisaeng, K., Hiransakolwong, N., Pothiruk, N. (2012). Automatic detection of exudates in diabetic retinopathy images. Journal of Computer Science, 8(8): 1304-1313. http://dx.doi.org/10.3844/jcssp.2012.1304.1313

[9] Chand, C.P.R., Dheeba, J. (2015). Automatic detection of exudates in color fundus retinopathy images. Indian Journal of Science and Technology, 8: 1-8. https://doi.org/10.17485/IJST%2F2015%2FV8I26%2F81049

[10] Ramasubramanian, B., Prabhakar, G. (2013). An early screening system for the detection of diabetic retinopathy using image processing. International Journal of Computer Applications, 61: 6-10. https://doi.org/10.5120/10002-4864

[11] Kauppi, T., Kalesnykiene, V., Kamarainen, J., Lensu, L., Sorri, I., Raninen, A., Voutilainen, R., Uusitalo, H., Kalviainen, H., Pietila, J. (2007). The DIARETDB1 diabetic retinopathy database and evaluation protocol. British Machine Vision Conference, pp. 1-10. https://doi.org/10.5244/C.21.15

[12] Kauppi, T., Kalesnykiene, V., Kamarainen, J.K., Lensu, L., Sorri, I., Uusitalo, H., Kalviainen, H., Pietila, J. (2007). DIARETDB0: Evaluation database and methodology for diabetic retinopathy algorithms. Computer Science, Medicine, pp. 1-17. https://www.siue.edu/~sumbaug/RetinalProjectPapers/Diabetic%20Retinopathy%20Image%20Database%20Information.pdf.

[13] Goatman, K.A., Whitwam, A.D., Manivannan, A., Olson, J.A., Sharp, P.F. (2003). Colour normalisation of retinal images. Proceedings of Medical Image Understanding and Analysis. http://www.biomed.abdn.ac.uk/Abstracts/A01128/kag_miua2003.pdf.

[14] Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1): 62-66. https://doi.org/10.1109/TSMC.1979.4310076

[15] Aggarwal, M.K., Khare, V. (2015). Automatic localization and contour detection of optic disc. 2015 International Conference on Signal Processing and Communication (ICSC), Noida, India, pp. 406-409. https://doi.org/10.1109/ICSPCom.2015.7150686

[16] Haralick, R.M., Shanmugam, K., Dinstein, I.K. (1973). Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3(6): 610-621. https://doi.org/10.1109/TSMC.1973.4309314

[17] Tang, X.O. (1998). Texture information in run-length matrices. IEEE Transactions on Image Processing, 7(11): 1602-1609. https://doi.org/10.1109/83.725367

[18] Niemeijer, M., Staal, J., van Ginneken, B., Loog, M., Abràmoff, M.D. (2004). Comparative study of retinal vessel segmentation methods on a new publicly available database. Proceedings of SPIE - The International Society for Optical Engineering 5370, pp. 648-656. http://dx.doi.org/10.1117/12.535349

[19] Sinha, N., Babu, R.V. (2012). Sparse representation for optic disk detection. 2012 International Conference on Signal Processing and Communications (SPCOM), Bangalore, India, pp. 1-5. http://dx.doi.org/10.1109/SPCOM.2012.6290021

[20] Park, M., Jin, J.S., Luo, S.H. (2006). Locating the optic disc in retinal images. International Conference on Computer Graphics, Imaging and Visualisation (CGIV'06), Sydney, NSW, Australia, pp. 141-145. https://doi.org/10.1109/CGIV.2006.63