Retinex-Based Multiphase Algorithm for Low-Light Image Enhancement

Mohammad Abid Al-Hashim, Zohair Al-Ameen

Department of Computer Science, College of Computer Science and Mathematics, University of Mosul, Mosul 41002, Nineveh, Iraq

Corresponding Author Email: qizohair@uomosul.edu.iq

Page: 733-743 | DOI: https://doi.org/10.18280/ts.370505

Received: 25 June 2020 | Revised: 1 October 2020 | Accepted: 10 October 2020 | Available online: 25 November 2020

© 2020 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

These days, digital images are one of the most widely used means of representing information. Still, various images are captured with a low-light effect due to numerous unavoidable reasons. It may be problematic for humans and computer-related applications to perceive and extract valuable information from such images properly. Hence, the observed quality of low-light images should be ameliorated for improved analysis, understanding, and interpretation. Currently, the enhancement of low-light images is a challenging task since various factors, including brightness, contrast, and colors, should be handled effectively to produce results of adequate quality. Therefore, a retinex-based multiphase algorithm is developed in this study. It computes the illumination image in a manner somewhat similar to the single-scale retinex algorithm, takes the logarithms of both the original and the illumination images, subtracts them using a modified approach, processes the result with a gamma-corrected sigmoid function, and further refines it with a normalization function to produce the final result. The proposed algorithm is tested using natural low-light images, evaluated using specialized metrics, and compared with eight different sophisticated methods. The attained experimental outcomes revealed that the proposed algorithm delivered the best performance concerning processing speed, perceived quality, and evaluation metrics.

Keywords: 

image enhancement, image processing, low-light images, retinex-based multiphase algorithm

1. Introduction

Digital images used with contemporary imaging- and vision-related applications must be of high quality so that the tasks of such applications can be accomplished efficiently [1]. Images captured in an inappropriate lighting environment have a low-light effect, deficient contrast, and improper colors [2]. Therefore, it is not easy to capture high-quality images in such an environment, in that the low-light effect may decrease the performance of applications related to image processing and computer vision [3, 4]. Moreover, such images usually comprise vast dark regions with reduced visibility [5]. Low-light images are images that are captured at nighttime, have uneven illumination, are captured in a shadowed environment, or have a generally dark appearance [6]; samples of images captured in such conditions are shown in Figure 1.

This makes the application of low-light image enhancement methods a key necessity to reveal the latent information [7], since the observed quality of these images should be ameliorated for improved analysis, understanding, and interpretation [8]. The key goal of low-light enhancement methods is to restore well-perceived images of adequate quality in which most of the important details are revealed properly without generating unwanted artifacts [9]. In recent years, significant attention has been devoted to developing methods for low-light image enhancement. Still, many of these methods introduce artifacts to the restored images [10]. Therefore, this field remains an active research area, and new works are being presented constantly. It is important to highlight some of the recent related works in this field to gain the essential knowledge on how to develop a proper algorithm that can process low-light images well.

Figure 1. Different types of low-light images. (a) nighttime; (b) uneven illumination; (c) captured in a shadowed environment; (d) have a generally dark appearance

Accordingly, Fu et al. [11] developed an algorithm that employs a bright channel prior (BCP) approach. It begins by determining the bright channel of the input and the luminance part using guided and Gaussian low-pass filters. Next, the reflectance part is determined by a quadratic function. The luminance and reflectance are then further refined concurrently, depending on channel priors and alternating optimization, to create the final output. In another research, Wang et al. [12] introduced an algorithm that implements naturalness preserved enhancement (NPE). It starts by decomposing an image into illumination and reflectance by using a bright-pass filter. Then, the illumination part is filtered by a bi-log transform approach. Finally, the reflectance part is synthesized, and the illumination part is mapped to get the enhanced image.

Besides, Fu et al. [13] introduced a probabilistic and simultaneous illumination and reflectance estimation (PSIRE) method that ameliorates the standard retinex model through a probabilistic approach that determines the reflectance and illumination of the image simultaneously by utilizing the concept of maximum a posteriori (MAP). Next, the logarithmic features are analyzed to recognize the reliability of the determined reflectance and illumination. Then, a transformation process is applied to the MAP formulation to convert it into an energy minimization problem from which the reflectance and illumination are acquired, whereas an alternating direction approach is applied for the simultaneous estimation of illumination and reflectance. Likewise, Yu et al. [14] proposed a physical lighting model (PLM) based method that includes four main processing phases. As an initial step, the retinex theory is applied to approximate the environmental light (EL). Next, the loss of information is determined locally based on the pre-determined EL and a given rate of light scattering (LS). Then, a condition is checked: if the rate of information loss is higher than a given threshold and the maximum number of iterations has not been reached, the EL and LS are fine-tuned. Otherwise, the iteration process terminates, and further processing of the EL and LS occurs through the application of a weighted guided filter to generate the result.

Furthermore, Park et al. [15] provided an algorithm that utilizes the concepts of BCP and retinex, in that it initially determines the bright channel to tune the level of brightness enhancement. After that, a variational retinex method is applied to approximate the reflectance and illumination by utilizing the BCP concept. Next, the determined illumination is further processed by histogram equalization and gamma correction to decrease the appearance of noise and color distortions. Moreover, Ren et al. [16] proposed a method that uses joint enhancement with denoising through sequential decomposition (JEDSD). The main aim of this method is to provide a simultaneous process that improves the illumination and attenuates the noise. Initially, a successive sequence approach is followed to determine the reflectance and illumination parts. Next, the reflectance is purified, depending on the original image and the illumination component, by using weighted matrices for noise attenuation. The result is generated by combining the purified reflectance with the gamma-adjusted illumination. As for Li et al. [17], another concept is utilized, namely the robust retinex model (RRM), which works in the HSV color domain. This method determines the illumination part by using the standard retinex approach. The reflectance part, on the other hand, is estimated via a structure-revealing approach. In this method, the illumination part is smoothed to prevent unwanted noise appearance. Also, an augmented Lagrange multiplier method is used to optimize the determined reflectance and illumination parts and generate the output.

Moreover, Tian et al. [18] introduced a variational-based fusion (VBF) method that tries to enhance non-uniform image illumination through contrast enhancement and color correction. In the beginning, the input image is processed by a specified global enhancement method that preserves the hue element. Then, the input image is processed again by another, local enhancement method that also preserves the hue element. The outcome is attained by applying a fusion method that utilizes a variational approach with contrast and color adjustments. Besides, Tang et al. [19] proposed an algorithm that improves weak illumination while suppressing halo artifacts and noise generation. In this algorithm, the concept of BCP is initially implemented to weaken the highly illuminated parts of the image. Next, a specialized dehazing algorithm is applied to improve the entire image. Lastly, a modified non-local means denoising algorithm is used to attenuate the noise that appears. Moreover, Tanaka et al. [20] proposed an algorithm that implements gradient-based enhancement (GBE). It begins by converting the input to a luminance–chrominance color domain. After that, the gradients of the luminance part are extracted by a distinct operation to improve the visibility of the details in the dark areas of the image. Next, the gradients are filtered for detail enhancement. Then, a final integration operation with limited range consideration is implemented to create the output image.

Likewise, Dai et al. [21] introduced a fractional-order fusion model (FFM), in that it initially applies a fractional mask to determine the illumination part of the image. Next, an image exposure adjustment method is implemented to increase the visibility of the weakly illuminated regions. Lastly, a specialized image fusion method is applied to produce the resulting image. Xie et al. [22], on the other hand, developed an algorithm that depends on a fusion map, in that it initially applies semantic segmentation to extract regions in an image with specific semantic features. Next, these regions are further refined and combined jointly using an estimated illumination-awareness map that is determined from the image's illumination. Using the semantic information, this algorithm can improve the dark regions of the image well and provide fewer artifacts with a better appearance. As aforementioned, many algorithms are used to enhance low-light images. However, various algorithms may introduce artifacts, halos, shadows around edges, extra smoothness, deficient contrast, and/or improper colors. Moreover, some algorithms are noticeably slow in producing the output image. Thus, providing a low-intricacy algorithm that produces adequate-quality images is highly desirable.

In this study, a retinex-based multiphase (RBMP) algorithm is developed for the rapid enhancement of low-light images. The RBMP computes the illumination image. Then, the logarithms of both the original and illumination images are determined and subtracted using a modified approach. Next, the outcome is filtered by a gamma-corrected sigmoid function and further refined by a normalization function. This allows preserving the highly illuminated parts while increasing the illumination in the dark parts of the image. In the experiments and comparisons, only real, naturally degraded low-light images are utilized, a comparison with eight algorithms is made, and three image evaluation metrics are used. Through intensive tests and comparisons, the RBMP is shown to process various low-light images well and rapidly, and to outperform the comparative methods in several respects. The remaining sections of the manuscript are organized in the following manner: Section 2 explains the proposed algorithm in detail. Section 3 describes the acquired results. Section 4 provides a brief conclusion.

2. Proposed Algorithm

The key motivation behind developing the RBMP algorithm is to process low-light images efficiently with the lowest possible computational cost. Accordingly, different low-intricacy concepts that try to improve the image illumination have been researched. Among these, the single-scale retinex (SSR) model proposed by Jobson et al. [23] was examined because it involves few calculations and can improve the illumination of images. In brief, the SSR model works by estimating an illumination image from its degraded counterpart by performing a discrete convolution (*) between a degraded image I(x,y) and a discrete 2D Gaussian surround function (DGSF) G(x,y). The log of the output of this operation is taken and then subtracted from the log of the degraded image to produce the reflectance image R(x,y), which represents an enhanced observation of the degraded image [24]. More specifically, to apply the SSR model on a degraded image, the DGSF is first computed as follows [25]:

$\begin{equation}

G_{(x, y)}=F \cdot e^{-\frac{\left(Q^{2}+W^{2}\right)}{2 \sigma^{2}}}

\end{equation}$     (1)

$\begin{equation}

F=\frac{1}{\sum_{i=1}^{N} \sum_{j=1}^{M} e^{-\frac{\left(Q^{2}+W^{2}\right)}{2 \sigma^{2}}}}

\end{equation}$     (2)

where, x and y are the image coordinates; W and Q signify the vertical and horizontal grayscale gradients, both having the same size as I(x,y); F is a normalization factor; N and M represent the image dimensions; (·) is a multiplication operator; and σ is a numerical parameter that controls the illumination, which should satisfy (σ > 1), where a greater value delivers more image illumination. Next, the SSR model is determined using the following equation [23]:

$\begin{equation}

R_{(x, y)}=\log \left[I_{(x, y)}\right]-\log \left[G_{(x, y)} * I_{(x, y)}\right]

\end{equation}$     (3)

The discrete convolution is implemented by converting G(x,y) and I(x,y) to the frequency domain using the discrete Fourier transform, shifting their zero-frequency elements to the spectrum’s center, multiplying the shifted results, shifting the zero-frequency elements of the product back, and then utilizing the inverse Fourier transform to get the resulting image in the spatial domain. The SSR algorithm has been used previously to improve the contrast and/or illumination of different types of images. As explained in the study [23], the SSR has some defects, as it may fail to produce images with adequate visual quality since degradations such as color inconstancy and an improper dynamic range may be present. Therefore, it is tested intensively with different types of low-light color images, and samples of the obtained results are demonstrated in Figure 2.
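
For concreteness, the following is a minimal sketch of the SSR stage just described, written in Python with NumPy rather than the MATLAB environment used in the experiments; the function names, the default σ, the added ε, and the use of numpy.fft for the frequency-domain convolution are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_surround(shape, sigma):
    """Discrete 2D Gaussian surround function (Eqs. 1-2), normalized so it sums to one."""
    n, m = shape
    # Centered coordinate grids standing in for the W (vertical) and Q (horizontal) terms.
    W, Q = np.meshgrid(np.arange(n) - n // 2, np.arange(m) - m // 2, indexing="ij")
    g = np.exp(-(Q ** 2 + W ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()  # dividing by the sum plays the role of the factor F

def ssr(channel, sigma=80.0, eps=1e-3):
    """Single-scale retinex (Eq. 3): R = log(I) - log(G * I), convolution done via the FFT."""
    g = gaussian_surround(channel.shape, sigma)
    # Frequency-domain (circular) convolution; ifftshift places the kernel peak at the origin,
    # which is equivalent to the shift-multiply-shift procedure described in the text.
    illumination = np.real(np.fft.ifft2(np.fft.fft2(channel) * np.fft.fft2(np.fft.ifftshift(g))))
    return np.log(channel + eps) - np.log(illumination + eps)
```

For a color image, the same operation would typically be applied to each channel independently; the small ε is added here only to keep the logarithms finite and anticipates Eqs. (4)-(5).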

Figure 2. Samples of the SSR results using different real low-light images. (a1-e1) real degraded images; (a2-e2) processed by the traditional SSR algorithm

As seen in Figure 2, the SSR algorithm enhanced the illumination, yet several observations are recorded. First, it darkened some areas in the processed images, leading to a loss of visible information. Second, it amplified the brightness in other image regions, leading to an abnormal look. Third, it darkened the colors and produced an overall unnatural appearance. Despite the mentioned drawbacks, the SSR algorithm has great potential to be further developed since it showed the ability to be applied to low-light images and it involves simple calculations. Therefore, a new multiphase algorithm is developed in this study, in that it depends on the SSR model and other statistical and numerical methods to produce adequate-quality results. The innovation of the proposed algorithm lies in the use of the simple approach of the SSR algorithm to estimate the illumination image. Then, it utilizes additional low-intricacy methods to deliver better-quality results rapidly. The additional methods are simply a modified subtraction process, an adapted gamma-corrected sigmoid function, and a standard normalization function. The proposed multiphase algorithm starts by determining the DGSF using Eq. (1) and Eq. (2), with (σ = M·N). The second phase involves determining the logs of the illumination and the original images, L(x,y) and O(x,y), using the following equations:

$\begin{equation}

O_{(x, y)}=\log \left[I_{(x, y)}+\varepsilon\right]

\end{equation}$     (4)

$\begin{equation}

L_{(x, y)}=\log \left[\left(G_{(x, y)} * I_{(x, y)}\right)+\varepsilon\right]

\end{equation}$     (5)

where, (ε = 0.001) is a small value that is added to the image to avoid computing the log of zero. Next, instead of subtracting the images as in the standard SSR in Eq. (3), a new subtraction method is utilized as the third phase. Jourlin and Pinoli [26] proposed a logarithmic image processing approach to add two images and form a third image J(x,y) that has the features of both, as in the subsequent equation:

$\begin{equation}

J_{(x, y)}=U_{(x, y)}+V_{(x, y)}+U_{(x, y)} \cdot V_{(x, y)}

\end{equation}$     (6)

Here U(x,y) and V(x,y) are two distinct images. In this study, this approach is modified experimentally to be used as a subtraction approach. The modified subtraction approach can be described as follows:

$\begin{equation}

P_{(x, y)}=\left(O_{(x, y)}+L_{(x, y)}\right)-\left(\frac{O_{(x, y)}}{L_{(x, y)}}\right)

\end{equation}$     (7)

where, P(x,y) is the reflectance image, whose values are limited to a narrow dynamic range. Thus, a modified version of a sigmoid function is utilized to adjust the contrast of the reflectance image as the fourth phase. The sigmoid is an S-shaped transformation function that has been utilized previously in various research works related to contrast enhancement [27-29]. The standard sigmoid function can be computed as follows [30]:

$\begin{equation}

f(z)=\frac{1}{\left(1+e^{-z}\right)}

\end{equation}$     (8)

where, z is the input array to be processed. In this study, a gamma-corrected sigmoid (GCS) function is introduced and utilized to control the amount of visible enhancement and suppress the highly illuminated areas of the image. The GCS function is simply the standard sigmoid function raised to the power of γ. This practice allows controlling the apparent enhancement and has been followed previously in the power-law transformation function to adjust contrast [31]. The developed GCS function can be computed using the following equation:

$\begin{equation}

S_{(x, y)}=\left(\frac{1}{\left(1+e^{-P_{(x, y)}}\right)}\right)^{\gamma}

\end{equation}$     (9)

where, S(x,y) is the output of the GCS function, and γ is a tuning parameter that is responsible for the amount of enhancement. As for γ, it should satisfy (γ > 0), where a higher value leads to less illumination enhancement but better contrast. Besides, intensive experiments revealed that acceptable-quality results are obtained when the γ value is between 0.1 and 0.4. The dynamic range of S(x,y) is improved but still does not span the entire interval. Thus, the standard linear normalization method is applied as the fifth and final phase to reallocate the image intensities to the entire range. The reason for using this method is that it can linearly stretch the limited range rapidly without involving heavy computations or requiring extra input variables. The used normalization method is determined via the subsequent equation [32]:

$\begin{equation}

E_{(x, y)}=\frac{S_{(x, y)}-\min \left(S_{(x, y)}\right)}{\max \left(S_{(x, y)}\right)-\min \left(S_{(x, y)}\right)}

\end{equation}$     (10)

where, E(x,y) is the algorithm output, and min and max denote the lowest and highest pixel values in S(x,y), respectively. To properly describe the proposed algorithm, a block diagram that explains its operation is given in Figure 3. Besides, the performance of the proposed algorithm with different gamma values is illustrated in Figure 4 and Figure 5.
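
Putting the five phases together, the sketch below chains Eqs. (1)-(2), (4), (5), (7), (9), and (10) into a single function; it reuses the gaussian_surround helper from the earlier SSR sketch and assumes per-channel processing of an image scaled to [0, 1], with σ = M·N and ε = 0.001 as stated above. It is an illustrative reading of the text, not the authors' MATLAB code.

```python
import numpy as np

def rbmp_enhance(image, gamma=0.25, eps=1e-3):
    """Illustrative RBMP pipeline; `image` is float in [0, 1], shape (N, M) or (N, M, C)."""
    img = np.atleast_3d(image).astype(np.float64)
    n, m, _ = img.shape
    sigma = float(n * m)                                          # Phase 1: sigma = M*N
    g_hat = np.fft.fft2(np.fft.ifftshift(gaussian_surround((n, m), sigma)))

    result = np.empty_like(img)
    for c in range(img.shape[2]):
        chan = img[:, :, c]
        illum = np.real(np.fft.ifft2(np.fft.fft2(chan) * g_hat))  # illumination image G * I
        O = np.log(chan + eps)                                    # Eq. (4)
        L = np.log(illum + eps)                                   # Eq. (5)
        P = (O + L) - (O / L)                                     # Eq. (7): modified subtraction
        S = (1.0 / (1.0 + np.exp(-P))) ** gamma                   # Eq. (9): gamma-corrected sigmoid
        result[:, :, c] = (S - S.min()) / (S.max() - S.min())     # Eq. (10): normalization
    return result.squeeze()
```

Note that Eq. (7) divides by L(x,y); this sketch takes that at face value, so a small guard may be needed in practice for pixels where the log of the illumination approaches zero, a case the text does not discuss.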

Figure 3. Block diagram of the proposed RBMP algorithm

Figure 4. The outcomes of processing a low-light image using different gamma values: (a) real low-light image; (b) γ=0.1, (c) γ=0.15, (d) γ=0.2, (e) γ=0.25, (f) γ=0.3, (g) γ=0.35, (h) γ=0.4

Figure 5. The outcomes of processing another low-light image using different gamma values: (a) real low-light image; (b) γ=0.1, (c) γ=0.15, (d) γ=0.2, (e) γ=0.25, (f) γ=0.3, (g) γ=0.35, (h) γ=0.4

As mentioned earlier, when the gamma value is between 0.1 and 0.4, results with satisfactory visual quality are obtained. When increasing gamma, the brightness is reduced while the contrast is enhanced. Selecting the proper gamma value leads to obtaining the desired results. Still, this also depends on the type of the processed image, in that if the image is extremely dark (e.g., a nighttime image), the proper gamma value to produce satisfactory results can be around 0.1. If other types of low-light images are given, the proper gamma value can be around 0.25.

This means that choosing the appropriate gamma value from the given range (0.1 to 0.4) rests with the operator, as the value of gamma is chosen and entered manually. In Figure 4, the optimal performance is obtained when γ=0.25, as natural brightness and contrast are attained with a pleasant overall appearance, whereas in Figure 5, the optimal performance is obtained when γ=0.1, as the information becomes better observed with acceptable brightness and contrast. This indicates that selecting a suitable gamma value depends on the illumination nature of the low-light image being processed.
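
As a usage note, and assuming the hypothetical rbmp_enhance sketch above together with the imageio package for file handling, the manual gamma selection described here might look as follows (file names are placeholders):

```python
import imageio.v3 as iio
import numpy as np

# Placeholder file names; gamma values follow the guidance above.
night = iio.imread("nighttime_scene.png").astype(np.float64) / 255.0
shadow = iio.imread("shadowed_scene.png").astype(np.float64) / 255.0

night_out = rbmp_enhance(night, gamma=0.1)     # extremely dark image: lower gamma
shadow_out = rbmp_enhance(shadow, gamma=0.25)  # moderately low-light image: higher gamma

iio.imwrite("nighttime_enhanced.png", (night_out * 255).astype(np.uint8))
iio.imwrite("shadowed_enhanced.png", (shadow_out * 255).astype(np.uint8))
```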

3. Results and Discussion

All the information regarding the experiments and comparisons is stated in this part of the study. Regarding the datasets, more than one dataset is used to assess the performance of the developed RBMP algorithm in the experiments and comparisons, in that all datasets contain real degraded images. The first dataset contains images collected from different internet websites. The second dataset is the exclusively dark (ExDARK) dataset [33], which includes more than seven thousand images captured in extremely low-light situations. The third dataset is provided by Bychkovsky et al. [34], in that it contains more than five thousand unprocessed images taken by different photographers to depict different lighting conditions, subjects, and scenes. The reason behind using only real degraded images is to reveal the algorithm's proficiency in enhancing images with low-light effects. Besides, a comparison is made with eight methods, namely JEDSD [16], BCP [11], GBE [20], RRM [17], NPE [12], PSIRE [13], PLM [14], and VBF [18], and the outcomes of such comparisons are evaluated by three dedicated metrics: the lightness order error (LOE) [12], the blind tone-mapped quality index (BTMQI) [35], and the blind multiple pseudo reference images (BMPRI) metric [36].

The LOE is a method used to assess the lightness order relativity, which is a key feature for preserving the naturalness of an image. The assessment happens between the degraded image and its recovered version; thus, it is a reduced-reference metric. Besides, the BTMQI assesses image quality by analyzing its naturalness, important information, and structure. For such a task, entropy, local statistics, and Sobel operators are utilized, and their outcomes are combined using a dedicated regression module. Likewise, the BMPRI utilizes the local binary pattern (LBP) with a distortion aggravation (DA) approach to detect the change in the quality of the image. After applying five levels of DA, the LBP is used to extract the features of the image. Next, these features are evaluated, and the results are pooled together using multiple PRIs to produce the final quality measure. The BMPRI metric is useful for detecting the visibility of image details in the presence of degradations and processing artifacts, in that it is better if such artifacts appear less in the resulting images. The BTMQI and BMPRI are no-reference metrics, while the LOE is a reduced-reference metric; all metrics output a numerical value, where smaller values indicate better-quality results [12, 17]. Regarding the computer specifications and the programming environment, all experiments were conducted using a computer with 4 GB of RAM and a Core i5-7200U 2.7 GHz CPU, within the MATLAB 2018a environment.

Figures 6-11 show the outcomes of applying the RBMP algorithm to different real degraded low-light images, whereas Figures 12-14 demonstrate the comparison results. Table 1 to Table 4 exhibit the recorded metric scores and processing times of the conducted comparisons. Figures 15-17 represent the graphs of the average scores in Tables 1-3. From Figures 6-11, it is clear that the proposed RBMP algorithm successfully enhanced the perceived quality of different low-light images, as more details are perceived from the resulting images, which have balanced brightness, acceptable contrast, and satisfactory colors. Besides, the dark areas of the images appear in a better way, and the bright areas are preserved from being overly brightened. Moreover, no processing flaws appear in the recovered images, which look more genuine to the viewer. As observed in the experimental results, the proposed algorithm did not introduce any smoothness or affect the smooth regions of the processed images. As for the edge information, it remained intact in terms of acutance, which indicates that the proposed algorithm only modifies the illumination and does not change the smoothness or the acutance when processing an image.

Figure 6. Enhancing different low-light images obtained from the internet by the proposed algorithm (batch -1-)

(a1)–(d1) real low-light images, (a2)–(d2) enhanced by the proposed algorithm with γ=0.25

Figure 7. Enhancing different low-light images obtained from the internet by the proposed algorithm (batch -2-)

(a1)–(d1) real low-light images, (a2)–(d2) enhanced by the proposed algorithm with γ=0.25

Figure 8. Enhancing different low-light images obtained from Ref. [33] by the proposed algorithm (batch -1-)

(a1)–(d1) real low-light images, (a2)–(d2) enhanced by the proposed algorithm with γ values of (0.2, 0.2, 0.23, and 0.3)

Figure 9. Enhancing different low-light images obtained from Ref. [33] by the proposed algorithm (batch -2-)

(a1)–(d1) real low-light images, (a2)–(d2) enhanced by the proposed algorithm with γ values of (0.2, 0.2, 0.25, and 0.25)

Figure 10. Enhancing different low-light images obtained from Ref. [34] by the proposed algorithm (batch -1-)

(a1)–(d1) real low-light images, (a2)–(d2) enhanced by the proposed algorithm with γ values of (0.25, 0.25, 0.3, and 0.25)

Figure 11. Enhancing different low-light images obtained from Ref. [34] by the proposed algorithm (batch -2-)

(a1)–(d1) real low-light images, (a2)–(d2) enhanced by the proposed algorithm with γ values of (0.4, 0.3, 0.25, and 0.3)

Such findings are significant because visually pleasing outcomes are obtained with an uncomplicated algorithm that utilizes few calculations to produce the results rapidly. From Figures 12-17 and Tables 1-4, it is obvious that dissimilar results were obtained by the comparatives, in that all the compared algorithms showed the ability to recover the latent details. However, each algorithm produced results with remarks that need to be discussed. The JEDSD algorithm introduced extra smoothness to the processed images with acceptable brightness, contrast, and colors. Therefore, its LOE readings were somewhat close to those of the proposed algorithm, but its BTMQI readings were considerably worse due to the introduced smoothness, and its BMPRI readings were moderate because the extra smoothness reduced the visibility of the image details; its processing time was also somewhat high. The BCP algorithm introduced halo effects in some regions, some smoothness, and unnatural colors to the processed images. Therefore, its LOE readings were the worst, yet its BTMQI scores were good due to brightness preservation. Its BMPRI readings were somewhat moderate because the smoothness and halo effects changed the visibility of the image details. As well, it provided reasonable implementation times.

Figure 12. The comparison outcomes (batch -1-) (a) real low-light image; The following images are enhanced by: (b) JEDSD [16], (c) BCP [11], (d) GBE [20], (e) RRM [17], (f) NPE [12], (g) PSIRE [13], (h) PLM [14], (i) VBF [18], (j) Proposed algorithm

Figure 13. The comparison outcomes (batch -2-) (a) real low-light image; The following images are enhanced by: (b) JEDSD [16], (c) BCP [11], (d) GBE [20], (e) RRM [17], (f) NPE [12], (g) PSIRE [13], (h) PLM [14], (i) VBF [18], (j) Proposed algorithm

Figure 14. The comparison outcomes (batch -3-) (a) real low-light image; The following images are enhanced by: (b) JEDSD [16], (c) BCP [11], (d) GBE [20], (e) RRM [17], (f) NPE [12], (g) PSIRE [13], (h) PLM [14], (i) VBF [18], (j) Proposed algorithm

Table 1. The recorded LOE scores for the comparatives

Methods               Figure 12    Figure 13    Figure 14    Average
JEDSD                  131.6043     238.4949     111.0603    160.386
BCP                    808.9075    1213.3000     977.0385    999.748
GBE                    621.8173     623.5491     531.0359    592.134
RRM                    140.7097     245.2698     116.6800    167.553
NPE                    382.7056     399.1507     394.4125    392.089
PSIRE                  159.7837     232.7425     279.4458    223.990
PLM                    610.7681     952.2582     542.1336    701.719
VBF                    265.9945     329.3802     165.6737    253.682
Proposed algorithm     105.4116     148.1967      60.8376    104.815

Table 2. The recorded BTMQI scores for the comparatives

Methods               Figure 12    Figure 13    Figure 14    Average
JEDSD                    5.1941       4.1306       3.1455      4.156
BCP                      3.1568       2.8919       4.8833      3.644
GBE                      3.9176       3.7113       4.2802      3.969
RRM                      4.9183       4.2115       3.2473      4.125
NPE                      5.0161       3.4610       3.2655      3.914
PSIRE                    5.5620       6.1918       3.3940      5.049
PLM                      4.3399       3.2375       4.1796      3.919
VBF                      3.2899       3.2787       3.6265      3.398
Proposed algorithm       3.5205       3.3014       3.2627      3.361

Table 3. The recorded BMPRI scores for the comparatives

Methods               Figure 12    Figure 13    Figure 14    Average
JEDSD                   18.0720      18.2215      40.1647     25.486
BCP                     15.3410      11.2373      48.9137     25.164
GBE                     14.9591      16.1218      44.6264     25.235
RRM                     19.9949      18.6040      41.7961     26.798
NPE                     12.0183      10.3096      34.8238     19.050
PSIRE                   13.4798      10.5558      40.0830     21.372
PLM                     13.1100      11.8960      45.6818     23.562
VBF                     13.9619      11.3874      37.4736     20.940
Proposed algorithm      12.9822       8.1572      31.1150     17.418

Table 4. The recorded processing times (in seconds) for the comparatives

Methods               Figure 12     Figure 13     Figure 14     Average
JEDSD                  48.296482      4.500845     62.103574     38.300
BCP                     3.358511      1.617899      2.816158      2.597
GBE                     3.115001      1.342931      2.751193      2.403
RRM                    91.791681     32.007914     91.269488     71.689
NPE                    12.423719      6.390939     17.560474     12.125
PSIRE                   1.838996      0.835211      4.328055      2.334
PLM                     2.679451      1.313764      4.429379      2.807
VBF                   461.173469     98.561583    908.229094    489.321
Proposed algorithm      0.385495      0.200304      0.486872      0.357

Figure 15. The graph of the average LOE scores

Figure 16. The graph of the average BTMQI scores

Figure 17. The graph of the average BMPRI scores

The GBE produced saturation in certain areas, halos around edges, and color distortion. Therefore, its LOE and BTMQI readings were unsatisfactory, but it provided reasonable implementation times. As well, its BMPRI readings were relatively moderate because of the generated artifacts. The RRM algorithm introduced heavy smoothness to the processed images and had high processing times, resulting in moderate LOE and BTMQI scores and the worst BMPRI readings. As for the remaining methods, the NPE introduced smoothness, somewhat dark colors, and relatively slow processing times, while the PSIRE did not provide sufficient illumination for some of the processed images, but its processing times were practical. The PLM introduced noticeable halos and color distortions, where some processed images were severely distorted, but its processing times were relatively moderate, while the VBF was the slowest among the competitors, and its results have noticeable processing errors. That is why their LOE, BTMQI, and BMPRI scores were dissimilar and did not reach the performance of the developed RBMP algorithm. On the other hand, the proposed algorithm performed the best in terms of processing speed, visual quality, and recorded accuracy, as its resulting images appear with acceptable brightness, adequate colors, and satisfactory contrast with no visible processing flaws. In terms of processing speed, it was the fastest among the competitors. This is a true indication that the proposed RBMP algorithm has been developed successfully and can efficiently process different images with low-light effects. It is not easy to develop an algorithm that can process different low-light images rapidly; this task has been accomplished, as witnessed by the quality of the resulting images, the readings of the image evaluation metrics, and the processing speed records.

4. Conclusions

A retinex-based multiphase algorithm is developed in this article to enhance images with low-light effects. The proposed algorithm determines the illumination image in a manner somewhat similar to the SSR algorithm, computes the logs of the original and the illumination images, subtracts the aforesaid images via a modified approach, processes the outcome with a gamma-corrected sigmoid function, and further refines it with a normalization function. As for the performance appraisal, numerous real low-light images have been used for the empirical trials, eight algorithms have been utilized as comparison methods, and three specialized methods have been employed as the designated image evaluation metrics; the processing times of the proposed and the comparative algorithms have also been considered. According to the obtained outcomes, the proposed algorithm delivered adequate results, in that its output images have acceptable contrast, satisfying brightness, and adequate colors; it also provided the best scores according to the used evaluation metrics with the least processing times. Future work on this algorithm can include further development to make it fully automated, or adapting it to process images of specific modalities that exhibit the same problem.

Acknowledgment

We thank the Department of Computer Science at the University of Mosul for providing different means to support the completion of this research.

References

[1] Guo, X., Li, Y., Ling, H. (2016). LIME: Low-light image enhancement via illumination map estimation. IEEE Transactions on Image Processing, 26(2): 982-993. https://doi.org/10.1109/TIP.2016.2639450

[2] Wang, Y.F., Liu, H.M., Fu, Z.W. (2019). Low-light image enhancement via the absorption light scattering model. IEEE Transactions on Image Processing, 28(11): 5679-5690. https://doi.org/10.1109/TIP.2019.2922106

[3] Park, S., Yu, S., Kim, M., Park, K., Paik, J. (2018). Dual autoencoder network for retinex-based low-light image enhancement. IEEE Access, 6: 22084-22093. https://doi.org/10.1109/ACCESS.2018.2812809

[4] Dai, C., Lv, Y., Long, Y., Sui, H. (2018). A novel image enhancement technique for tunnel leakage image detection. Traitement du Signal, 35(3-4): 209-222. https://doi.org/10.3166/TS.35.209-222

[5] Jung, C., Yang, Q., Sun, T., Fu, Q., Song, H. (2017). Low light image enhancement with dual-tree complex wavelet transform. Journal of Visual Communication and Image Representation, 42: 28-36. https://doi.org/10.1016/j.jvcir.2016.11.001

[6] Kim, W., Lee, R., Park, M., Lee, S.H. (2019). Low-light image enhancement based on maximal diffusion values. IEEE Access, 7: 129150-129163. https://doi.org/10.1109/ACCESS.2019.2940452

[7] Ren, W., Liu, S., Ma, L., Xu, Q., Xu, X., Cao, X., Yang, M.H. (2019). Low-light image enhancement via a deep hybrid network. IEEE Transactions on Image Processing, 28(9): 4364-4375. https://doi.org/10.1109/TIP.2019.2910412

[8] Shi, Z., Zhu, M., Guo, B., Zhao, M., Zhang, C. (2018). Nighttime low illumination image enhancement with single image using bright/dark channel prior. EURASIP Journal on Image and Video Processing, 2018(1): 1-15. https://doi.org/10.1186/s13640-018-0251-4

[9] Wang, M., Tian, Z., Gui, W., Zhang, X., Wang, W. (2020). Low-light image enhancement based on nonsubsampled shearlet transform. IEEE Access, 8: 63162-63174. https://doi.org/10.1109/ACCESS.2020.2983457

[10] Park, S., Yu, S., Moon, B., Ko, S., Paik, J. (2017). Low-light image enhancement using variational optimization-based retinex model. IEEE Transactions on Consumer Electronics, 63(2): 178-184. https://doi.org/10.1109/TCE.2017.014847

[11] Fu, X., Zeng, D., Huang, Y., Ding, X., Zhang, X. (2013). A variational framework for single low light image enhancement using bright channel prior. In 2013 IEEE Global Conference on Signal and Information Processing, Austin, TX, pp. 1085-1088. https://doi.org/10.1109/GlobalSIP.2013.6737082

[12] Wang, S., Zheng, J., Hu, H.M., Li, B. (2013). Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Transactions on Image Processing, 22(9): 3538-3548. https://doi.org/10.1109/TIP.2013.2261309

[13] Fu, X., Liao, Y., Zeng, D., Huang, Y., Zhang, X.P., Ding, X. (2015). A probabilistic method for image enhancement with simultaneous illumination and reflectance estimation. IEEE Transactions on Image Processing, 24(12): 4965-4977. https://doi.org/10.1109/TIP.2015.2474701

[14] Yu, S.Y., Zhu, H. (2017). Low-illumination image enhancement algorithm based on a physical lighting model. IEEE Transactions on Circuits and Systems for Video Technology, 29(1): 28-37. https://doi.org/10.1109/TCSVT.2017.2763180

[15] Park, S., Moon, B., Ko, S., Yu, S., Paik, J. (2017). Low-light image restoration using bright channel prior-based variational retinex model. EURASIP Journal on Image and Video Processing, 2017: 1-11. https://doi.org/10.1186/s13640-017-0192-3

[16] Ren, X., Li, M., Cheng, W. H., Liu, J. (2018). Joint enhancement and denoising method via sequential decomposition. In 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, pp. 1-5. https://doi.org/10.1109/ISCAS.2018.8351427

[17] Li, M., Liu, J., Yang, W., Sun, X., Guo, Z. (2018). Structure-revealing low-light image enhancement via robust retinex model. IEEE Transactions on Image Processing, 27(6): 2828-2841. https://doi.org/10.1109/TIP.2018.2810539

[18] Tian, Q.C., Cohen, L.D. (2018). A variational-based fusion model for non-uniform illumination image enhancement via contrast optimization and color correction. Signal Processing, 153: 210-220. https://doi.org/10.1016/j.sigpro.2018.07.022

[19] Tang, C., Wang, Y., Feng, H., Xu, Z., Li, Q., Chen, Y. (2018). Low-light image enhancement with strong light weakening and bright halo suppressing. IET Image Processing, 13(3): 537-542. https://doi.org/10.1049/iet-ipr.2018.5505

[20] Tanaka, M., Shibata, T., Okutomi, M. (2019). Gradient-based low-light image enhancement. In 2019 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, pp. 1-2. https://doi.org/10.1109/ICCE.2019.8662059

[21] Dai, Q., Pu, Y.F., Rahman, Z., Aamir, M. (2019). Fractional-order fusion model for low-light image enhancement. Symmetry, 11(4): 1-17. https://doi.org/10.3390/sym11040574

[22] Xie, J., Bian, H., Wu, Y., Zhao, Y., Shan, L., Hao, S. (2020). Semantically-guided low-light image enhancement. Pattern Recognition Letters, 138: 308-314. https://doi.org/10.1016/j.patrec.2020.07.041

[23] Jobson, D.J., Rahman, Z.U., Woodell, G.A. (1997). Properties and performance of a center/surround retinex. IEEE Transactions on Image Processing, 6(3): 451-462. https://doi.org/10.1109/83.557356

[24] Si, L., Wang, Z., Xu, R., Tan, C., Liu, X., Xu, J. (2017). Image enhancement for surveillance video of coal mining face based on single-scale retinex algorithm combined with bilateral filtering. Symmetry, 9(6): 1-15. https://doi.org/10.3390/sym9060093

[25] Hanumantharaju, M.C., Ravishankar, M., Rameshbabu, D.R. (2013). Design and FPGA implementation of an 2D Gaussian surround function with reduced on-chip memory utilization. In 2013 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Mysore, pp. 604-609. https://doi.org/10.1109/ICACCI.2013.6637241

[26] Jourlin, M., Pinoli, J.C. (1988). A model for logarithmic image processing. Journal of Microscopy, 149(1): 21-35. https://doi.org/10.1111/j.1365-2818.1988.tb04559.x

[27] Arriaga-Garcia, E.F., Sanchez-Yanez, R.E., Ruiz-Pinales, J., de Guadalupe Garcia-Hernandez, M. (2015). Adaptive sigmoid function bi-histogram equalization for image contrast enhancement. Journal of Electronic Imaging, 24(5): 1-13. https://doi.org/10.1117/1.JEI.24.5.053009

[28] Lin, H., Shi, Z. (2014). Multi-scale retinex improvement for nighttime image enhancement. Optik, 125(24): 7143-7148. https://doi.org/10.1016/j.ijleo.2014.07.118

[29] Asmare, M.H., Asirvadam, V.S., Hani, A. (2015). Image enhancement based on contourlet transform. Signal, Image and Video Processing, 9(7): 1679-1690. https://doi.org/10.1007/s11760-014-0626-7

[30] Imtiaz, M.S., Wahid, K.A. (2015). Color enhancement in endoscopic images using adaptive sigmoid function and space variant color reproduction. Computational and Mathematical Methods in Medicine, 2015: 1-19. https://doi.org/10.1155/2015/607407

[31] Tsai, C.M. (2013). Adaptive local power-law transformation for color image enhancement. Applied Mathematics & Information Sciences, 7(5): 2019-2026. https://doi.org/10.12785/amis/070542

[32] Housman, I., Chastain, R., Finco, M. (2018). An evaluation of forest health insect and disease survey data and satellite-based remote sensing forest change detection methods: case studies in the united states. Remote Sensing, 10(8): 1-21. https://doi.org/10.3390/rs10081184

[33] Loh, Y.P., Chan, C.S. (2019). Getting to know low-light images with the exclusively dark dataset. Computer Vision and Image Understanding, 178: 30-42. https://doi.org/10.1016/j.cviu.2018.10.010

[34] Bychkovsky, V., Paris, S., Chan, E., Durand, F. (2011). Learning photographic global tonal adjustment with a database of input/output image pairs. In Computer Vision and Pattern Recognition (CVPR2011), Providence, RI, pp. 97-104. https://doi.org/10.1109/CVPR.2011.5995413

[35] Gu, K., Wang, S., Zhai, G., Ma, S., Yang, X., Lin, W., Gao, W. (2016). Blind quality assessment of tone-mapped images via analysis of information, naturalness, and structure. IEEE Transactions on Multimedia, 18(3): 432-443. https://doi.org/10.1109/TMM.2016.2518868

[36] Min, X., Zhai, G., Gu, K., Liu, Y., Yang, X. (2018). Blind image quality estimation via distortion aggravation. IEEE Transactions on Broadcasting, 64(2): 508-517. https://doi.org/10.1109/TBC.2018.2816783