The unfavorable shooting environment severely hinders the acquisition of actual landscape information in garden landscape design. Low quality, low illumination garden landscape images (GLIs) can be enhanced through advanced digital image processing. However, current color enhancement models have poor applicability: when the environment changes, they tend to lose image details and exhibit low robustness. Therefore, this paper tries to enhance the color of low illumination GLIs. Specifically, the color restoration of GLIs was realized based on a modified dynamic threshold. After color correction, the low illumination GLIs were restored and enhanced by a self-designed convolutional neural network (CNN). In this way, the authors achieved ideal effects of color restoration and clarity enhancement, while overcoming the difficulty of manual feature design in landscape design renderings. Finally, experiments were carried out to verify the feasibility and effectiveness of the proposed image color enhancement approach.
Keywords: low illumination, garden landscape images (GLIs), color enhancement, convolutional neural network (CNN)
Garden landscape photography enthusiasts convey their perception of the beauty of garden landscapes, and their understanding of garden landscape design, to the human visual system in the form of images [1-11]. The generation of landscape design renderings is greatly affected by the color matching, design layout, and other information in the original image [12-14]. Based on high quality garden landscape images (GLIs), designers can effectively visualize the design intent and conceive the actual landscape effect [16-19]. In the real world, GLIs taken in an unfavorable shooting environment are insufficiently exposed, unevenly illuminated, and generally dark. These defects severely hinder the acquisition of actual landscape information in garden landscape design. With the continuous development of computer technology, low quality, low illumination GLIs can be enhanced through advanced digital image processing [20-22]. The enhanced GLIs can promote the final expressiveness of the landscape scheme, and fully reflect the designer's personal aesthetics.
Currently, many cities in China lack landscape images. Yao and Kang [23] introduced the principle of big data visualization to urban landscape images, and discussed the application of urban landscape image enhancement in China. The improvement effect of big data visualization on urban landscape images was discussed from multiple dimensions, including online questionnaire survey, big data software visualization, and urban landscape image improvement. On this basis, several countermeasures were developed for enhancing landscape images of Chinese cities. In harsh environments (e.g., low illumination environments), the images collected by sensors may degrade, featuring low visibility, low brightness, and low contrast. To improve such images, Ma et al. [24] proposed a low-light-level sensor image enhancement algorithm based on the hue-saturation-intensity (HSI) color model: the piecewise exponential method was adopted to process the saturation of the original image, and a deep convolutional network (DCN) was specially designed to enhance the intensity (I) component. Yamashita et al. [25, 26] suggested using a single sensor to simultaneously capture red-green-blue (RGB) and near-infrared (NIR) information, trying to enhance color images from low-light scenes. Under the guidance of the NIR information, a joint denoising technique was adopted to reconstruct the corresponding color image, and the estimated color image was iteratively restored based on the constructed guide image. Jung [27] presented a selective image fusion technique, which applies adaptive guided filter-based denoising and selective detail transfer to pixels considered reliable in binocular image fusion. By constructing an experimental color-plus-mono camera system, it was demonstrated that binocular just-noticeable-difference (BJND)-aware denoising and selective detail transfer help improve image quality during low light shooting.
Deep learning-based image enhancement requires many images to support network training. During the training, the joint estimation of intermediate parameters is far from sufficient. As a result, the models thus trained have low applicability: when the environment changes, they tend to lose image details and exhibit low robustness. Therefore, this paper tries to enhance the color of low illumination GLIs. Firstly, Section 2 explains the color restoration of GLIs based on a modified dynamic threshold, establishes a color correction framework, and expounds the principle of the color transform of GLIs. Next, Section 3 designs a convolutional neural network (CNN) to restore and enhance the color of corrected low illumination GLIs. In this way, the authors achieved ideal effects of color restoration and clarity enhancement, while overcoming the difficulty of manual feature design in landscape design renderings. Finally, experiments were carried out to verify the feasibility and effectiveness of the proposed image color enhancement approach.
After being converted to the luma-blue difference-red difference (YCbCr) color space, the original GLI is divided into multiple blocks. The mean value AV_{o} of Cr and the mean value AV_{e} of Cb of each block are calculated. The cumulative absolute differences RP_{o} and RP_{e} of Cr and Cb of each block can be respectively calculated by:
$R P_{o}=\sum_{i, j}\left|C_{r}(i, j)-A V_{o}\right| / M$ (1)
$R P_{e}=\sum_{i, j}\left|C_{b}(i, j)-A V_{e}\right| / M$ (2)
The blocks with relatively small RP_{o} and RP_{e} are identified. These blocks should be removed, for they cannot provide sufficient color information. The mean values of AV_{o}, AV_{e}, RP_{o} and RP_{e} of the remaining blocks are taken as the AV_{o}, AV_{e}, RP_{o} and RP_{e} of the entire GLI. Next, the candidate set of white pixels can be judged and generated by:
$\left|C_{r}(i, j)-\left(A V_{o}+R P_{o} \times \operatorname{sign}\left(A V_{o}\right)\right)\right|<1.5 \times R P_{o}$ (3)
$\left|C_{b}(i, j)-\left(1.5 \times A V_{e}+R P_{e} \times \operatorname{sign}\left(A V_{e}\right)\right)\right|<1.5 \times R P_{e}$ (4)
The pixels with the top 10% brightness in the set are selected as the final white pixels. After that, all white pixels are adjusted. The first step is to compute the reference values of each white pixel in the three channels, i.e., the mean gray values of the three channels r_{RV}, g_{RV}, and b_{RV}. Next, the gain of each channel can be calculated by:
$\begin{aligned}
&r_{G}=Y_{\max } / r_{R V} \\
&g_{G}=Y_{\max } / g_{R V} \\
&b_{G}=Y_{\max } / b_{R V}
\end{aligned}$ (5)
Based on the results of formula (5), the color values of the three channels of the GLI are modified by the framework shown in Figure 1. The pixels with color values surpassing the threshold can be identified by:
$\begin{aligned}
&r^{\prime}=r \times r_{G} \\
&g^{\prime}=g \times g_{G} \\
&b^{\prime}=b \times b_{G}
\end{aligned}$ (6)
For a GLI taken in a variable environment, if a single channel has a low gray value, then the mean gray value of that channel must be low, and the gain of that channel must be large. Through the above-mentioned processing, the weak single-channel features can be compensated.
Figure 1. Color correction framework
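The pipeline of Eqs. (1)-(6) can be sketched in NumPy as follows. This is a minimal illustration, not the paper's implementation: the block partition is fixed to a 2×2 grid, the removal of low-RP blocks is approximated by a median heuristic, and the YCbCr conversion uses the approximate coefficients of Section 2 with conventional signs.

```python
import numpy as np

def correct_colors(rgb):
    """Sketch of the dynamic-threshold color correction (Eqs. 1-6).

    `rgb` is an H x W x 3 float array in [0, 1]. M in Eqs. (1)-(2) is read
    here as the number of pixels per block (an assumption).
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Approximate BT.601-style YCbCr conversion (signs are an assumption)
    y = 0.3 * r + 0.59 * g + 0.11 * b
    cb = -0.17 * r - 0.33 * g + 0.5 * b
    cr = 0.5 * r - 0.42 * g - 0.08 * b

    h, w = y.shape
    stats = []
    for bi in range(2):                      # fixed 2 x 2 block grid
        for bj in range(2):
            sl = (slice(bi * h // 2, (bi + 1) * h // 2),
                  slice(bj * w // 2, (bj + 1) * w // 2))
            av_o, av_e = cr[sl].mean(), cb[sl].mean()
            rp_o = np.abs(cr[sl] - av_o).mean()   # Eq. (1)
            rp_e = np.abs(cb[sl] - av_e).mean()   # Eq. (2)
            stats.append((av_o, av_e, rp_o, rp_e))
    stats = np.array(stats)
    # Drop blocks with relatively small RP (median split as a stand-in)
    keep = stats[:, 2] + stats[:, 3] >= np.median(stats[:, 2] + stats[:, 3])
    av_o, av_e, rp_o, rp_e = stats[keep].mean(axis=0)

    # Candidate white pixels, Eqs. (3)-(4)
    cand = (np.abs(cr - (av_o + rp_o * np.sign(av_o))) < 1.5 * rp_o) & \
           (np.abs(cb - (1.5 * av_e + rp_e * np.sign(av_e))) < 1.5 * rp_e)
    if not cand.any():
        return rgb
    thresh = np.quantile(y[cand], 0.9)       # keep top 10% brightness
    white = cand & (y >= thresh)

    # Per-channel gains, Eq. (5), then correction, Eq. (6)
    y_max = y.max()
    gains = [y_max / max(c[white].mean(), 1e-6) for c in (r, g, b)]
    return np.clip(rgb * np.array(gains), 0.0, 1.0)
```

A channel with a low mean gray value among the white pixels receives a large gain, matching the compensation behavior described above.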
If the GLI has low overall brightness, the Y value after color restoration will be relatively high, pushing up the overall brightness of the image. This will interfere with the judgement of candidate white pixels. To solve the problem, this paper introduces the attenuation offset parameter matrix ψ to quantify the dynamic threshold to a fixed range. The matrix ψ depends on the photoelectric imaging environment. Based on this matrix, the Y, Cb and Cr of the original parameters are quantified again through the following derivation process:
$\left[\begin{array}{l}
Y^{\prime} \\
C_{b}{ }^{\prime} \\
C_{r}{ }^{\prime}
\end{array}\right]=\psi \cdot\left[\begin{array}{l}
Y \\
C_{b} \\
C_{r}
\end{array}\right]$ (7)
First, the attenuation of each color in the low illumination image is considered. Let ∫_{ω}γ^{d}(τ)dτ, ξ^{d}, and η^{d} be the vector integral of the light scattered into the sensor from all directions, the scattering coefficient, and the attenuation coefficient, respectively. The value of ∫_{ω}γ^{d}(τ)dτ is directly proportional to ξ^{d}. Since the global background light is a function of wavelength, we have:
$\gamma^{d}(\infty)=\frac{l_{k} l_{x}}{\eta^{d}} \int_{\omega} \gamma^{d}(\tau) d \tau$ (8)
where, l_{k} and l_{x} are constants. Let ξ(μ_{d}) be the reference wavelength scattering coefficient. Then, the linear relationship between the scattering coefficient ξ^{d} and wavelength μ can be expressed as:
$\xi^{d}=\left(-0.00113 \mu_{d}+1.62517\right) \xi\left(\mu_{d}\right)$ (9)
Further, it can be derived that the global background light is proportional to ξ^{d} and inversely proportional to η^{d}:
$\gamma^{d}(\infty) \propto \frac{\xi^{d}}{\eta^{d}}$ (10)
The color channel with the smallest attenuation in the low illumination environment is defined as channel o. Based on the color attenuation of channel o, the attenuation ratio of any other color channel can be deduced as:
$\frac{\eta^{d}}{\eta^{o}}=\frac{\xi^{d} \gamma^{o}(\infty)}{\xi^{o} \gamma^{d}(\infty)} \quad d \in\{r, g\}$ (11)
The relationship between the attenuation ratios of the three channels can be expressed as:
$\begin{aligned}
&\sigma^{o}(a)=e^{-\delta(a)} \\
&\sigma^{d}(a)=\left(\sigma^{o}(a)\right)^{\frac{\eta^{d}}{\eta^{o}}} \quad d \in\{r, g\}
\end{aligned}$ (12)
Let W be the color space conversion matrix. According to the image transmission and display rules of the International Telecommunication Union (ITU), the RGB-YCbCr color space conversion can be expressed as:
$\left[\begin{array}{c}
Y^{\prime} \\
C_{b}^{\prime} \\
C_{r}^{\prime}
\end{array}\right]=\left[\begin{array}{ccc}
0.3 & 0.59 & 0.12 \\
-0.17 & -0.33 & 0.5 \\
0.5 & -0.42 & -0.08
\end{array}\right] \cdot\left[\begin{array}{l}
R^{\prime} \\
G^{\prime} \\
B^{\prime}
\end{array}\right]=W \cdot\left[\begin{array}{l}
R^{\prime} \\
G^{\prime} \\
B^{\prime}
\end{array}\right]$ (13)
The color space conversion matrix can be obtained based on the threechannel attenuation formulas:
$\left[\begin{array}{c}
R^{\prime} \\
G^{\prime} \\
B^{\prime}
\end{array}\right]=\left[\begin{array}{ccc}
\sigma^{r}(a) & & \\
& \sigma^{g}(a) & \\
& & \sigma^{b}(a)
\end{array}\right] \cdot\left[\begin{array}{c}
R \\
G \\
B
\end{array}\right]$ (14)
Combining formulas (13) and (14):
$\left[\begin{array}{l}
Y^{\prime} \\
C_{b}^{\prime} \\
C_{r}^{\prime}
\end{array}\right]=W \cdot\left[\begin{array}{l}
R^{\prime} \\
G^{\prime} \\
B^{\prime}
\end{array}\right]=W \cdot\left[\begin{array}{lll}
\sigma^{r}(a) & & \\
& \sigma^{g}(a) & \\
& & \sigma^{b}(a)
\end{array}\right] \cdot\left[\begin{array}{l}
R \\
G \\
B
\end{array}\right]$ (15)
The color transformation of the original GLI can be expressed as:
$\left[\begin{array}{l}
Y \\
C_{b} \\
C_{r}
\end{array}\right]=W \cdot\left[\begin{array}{l}
R \\
G \\
B
\end{array}\right] \Rightarrow W^{-1} \cdot\left[\begin{array}{l}
Y \\
C_{b} \\
C_{r}
\end{array}\right]=\left[\begin{array}{l}
R \\
G \\
B
\end{array}\right]$ (16)
Substituting formula (16) into formula (15):
$\left[\begin{array}{l}
Y^{\prime} \\
C_{b}^{\prime} \\
C_{r}^{\prime}
\end{array}\right]=W \cdot\left[\begin{array}{lll}
\sigma^{r}(a) & & \\
& \sigma^{g}(a) & \\
& & \sigma^{b}(a)
\end{array}\right] \cdot W^{-1} \cdot\left[\begin{array}{l}
Y \\
C_{b} \\
C_{r}
\end{array}\right]=\psi \cdot\left[\begin{array}{l}
Y \\
C_{b} \\
C_{r}
\end{array}\right]$ (17)
The attenuation offset parameter matrix ψ can be derived by:
$\psi=W \cdot\left[\begin{array}{lll}
\sigma^{r}(a) & & \\
& \sigma^{g}(a) & \\
& & \sigma^{b}(a)
\end{array}\right] \cdot W^{-1}=W \cdot\left[\begin{array}{lll}
\left(\sigma^{o}(a)\right)^{\frac{\eta^{r}}{\eta^{o}}} & & \\
& \left(\sigma^{o}(a)\right)^{\frac{\eta^{g}}{\eta^{o}}} & \\
& & \left(\sigma^{o}(a)\right)^{\frac{\eta^{b}}{\eta^{o}}}
\end{array}\right] \cdot W^{-1}$ (18)
For the color channel with the least attenuation in the low illumination environment, the attenuation coefficient is assumed to satisfy η^{d} = η^{o}. After this treatment, a single channel of the original GLI with a relatively low gray value can be compensated, thereby balancing the gray value distribution across the three channels. The GLI color transform is illustrated in Figure 2.
Figure 2. GLI color transform
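The construction of ψ in Eqs. (12)-(18) reduces to a similarity transform of a diagonal attenuation matrix. The following NumPy sketch illustrates this under two stated assumptions: the blue channel is taken as the least-attenuated channel o (so its exponent η^b/η^o is 1), and σ^o(a) is treated as a scalar rather than a per-pixel quantity; the matrix W uses the approximate coefficients of Eq. (13) with conventional BT.601 signs.

```python
import numpy as np

# Color space conversion matrix W (approximate values; signs are an assumption)
W = np.array([[0.3,   0.59,  0.12],
              [-0.17, -0.33, 0.5],
              [0.5,  -0.42, -0.08]])

def attenuation_offset_matrix(sigma_o, eta_ratio_r, eta_ratio_g):
    """Builds psi = W . diag(sigma^r, sigma^g, sigma^b) . W^-1 (Eq. 18).

    sigma_o      -- sigma^o(a) of the least-attenuated channel (blue here)
    eta_ratio_r  -- eta^r / eta^o from Eq. (11)
    eta_ratio_g  -- eta^g / eta^o from Eq. (11)
    """
    sigma = np.diag([sigma_o ** eta_ratio_r,   # sigma^r(a), Eq. (12)
                     sigma_o ** eta_ratio_g,   # sigma^g(a), Eq. (12)
                     sigma_o])                 # sigma^b(a) = sigma^o(a)
    return W @ sigma @ np.linalg.inv(W)

# Re-quantifying (Y, Cb, Cr) as in Eq. (17): ycbcr_new = psi @ ycbcr
```

When there is no attenuation (σ^o(a) = 1), ψ collapses to the identity matrix, i.e., the YCbCr values are left unchanged, which is a quick sanity check on the construction.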
This paper designs a CNN to restore and enhance the color of corrected low illumination GLIs. In this way, the authors achieved ideal effects of color restoration and clarity enhancement, while solving the difficulty of manual feature design in landscape design renderings. The proposed network consists of a color restoration module and a color enhancement module.
3.1 Color restoration
The architecture of the color restoration module is illustrated in Figure 3. In the color restoration module, the convolutional kernel is of the size 3×3, and the convolution operation is expressed as g_{3×3}. The convolutional filters for the color channels are indexed by i ∈ {1, 2, …, 32}. Each channel of the input GLI LSP is processed by the convolutional layer to obtain a false color mapping SU_{t}:
$S U_{t}^{i}=\left\{g_{3 \times 3}\left(L S P_{r}^{i}\right), g_{3 \times 3}\left(L S P_{g}^{i}\right), g_{3 \times 3}\left(L S P_{b}^{i}\right)\right\}$ (19)
Figure 3. Architecture of color restoration module
Through the false color correction of each SU_{t}, it is possible to obtain the enhanced false color mapping RF_{t}. Let function F be global average pooling. Then, the mean, i.e., the gray value, of the three channels can be calculated by:
$S D_{t}=\underset{N \times M \times D}{F}\left(S U_{t}\right)$ (20)
The singlechannel mean of SU_{t} can be calculated by:
$D T^{i}=\underset{N \times M}{F}\left(S U_{t}^{i}\right)$ (21)
Let SD_{t}/DT^{i} be the gain coefficient; D be the number of color channels of SU_{t}; i∈{r,g,b} be the serial number of a color channel; N and M be the space size. Then, the false color mapping can be expressed as:
$R F_{t}^{i}=S U_{t}^{i} \cdot \frac{S D_{t}}{D T^{i}}$ (22)
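Assuming F is plain global average pooling over a NumPy array (the 3×3 convolutional stage of Eq. (19) is omitted here for brevity), the gain correction of Eqs. (20)-(22) can be sketched as:

```python
import numpy as np

def restore_false_color(su):
    """Sketch of Eqs. (20)-(22): rescale each channel of a false color
    mapping SU_t by the gain SD_t / DT^i.

    `su` is an N x M x 3 array (one false color mapping).
    """
    sd = su.mean()                     # Eq. (20): pool over N x M x D
    dt = su.mean(axis=(0, 1))          # Eq. (21): per-channel pool over N x M
    return su * (sd / np.maximum(dt, 1e-6))   # Eq. (22): RF_t^i
```

After this correction, every channel of the output shares the same mean SD_t, which is exactly the gray-world-style balancing the gain coefficient is meant to achieve.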
3.2 Color enhancement
To fully integrate the GLI outputted by the color restoration module into the color enhancement module, this paper introduces an adaptive instance normalization module to the constructed model. The mean and standard deviation of the feature map f of the color enhancement module are calculated, and then normalized by the said module. Let F* and Q* be the height and width of the feature map f, respectively. Then, the calculation results can be expressed as:
$\lambda_{d}=\frac{1}{F^{*} Q^{*}} \sum_{t}^{F^{*}} \sum_{w}^{Q^{*}} f_{t, w, d}$ (23)
$\rho_{d}^{2}=\frac{1}{F^{*} Q^{*}} \sum_{t}^{F^{*}} \sum_{w}^{Q^{*}}\left(f_{t, w, d}\lambda_{d}\right)^{2}+\theta$ (24)
where, θ=0.00001. The affine transformation parameters Φ* and χ* can be obtained through convolution of the color restored GLI. The feature map normalized by the color enhancement module is subjected to affine transformation. In this paper, an adaptive instance normalization module with color restoration function is added to the residual block to improve the color enhancement effect. The adaptive instance normalization module operates on a pixel-by-pixel basis: based on Φ* and χ*, the feature points of the entire image are restored pixel by pixel. Let λ_{d} and ρ_{d} be the mean and standard deviation of the feature map f of color channel d, respectively. Then, we have:
$f_{t, w, d}^{*}=\Phi_{t, w, d}^{*}\left(\frac{f_{t, w, d}\lambda_{d}}{\rho_{d}}\right)+\chi_{t, w, d}^{*}$ (25)
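Eqs. (23)-(25) amount to a per-channel instance normalization followed by a pixel-wise affine transform. A minimal NumPy sketch, with Φ* and χ* passed in directly (in the full model they come from convolving the color-restored GLI):

```python
import numpy as np

def adaptive_instance_norm(f, phi, chi, theta=1e-5):
    """Sketch of Eqs. (23)-(25).

    f         -- feature map of shape F* x Q* x D
    phi, chi  -- pixel-wise affine parameters, same shape as f
    theta     -- variance offset, Eq. (24)
    """
    lam = f.mean(axis=(0, 1))                   # Eq. (23): per-channel mean
    rho = np.sqrt(f.var(axis=(0, 1)) + theta)   # Eq. (24): per-channel std
    return phi * (f - lam) / rho + chi          # Eq. (25)
```

With Φ* = 1 and χ* = 0 this reduces to ordinary instance normalization: each channel of the output has zero mean and (up to θ) unit standard deviation.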
3.3 Loss function
To generate a more realistic enhanced GLI and achieve the learning objective of the neural network, this paper adopts the mean absolute error (MAE) as the loss in the color restoration module and the color enhancement module. Let B be the input clear GLI; a be the input low illumination GLI; g(a) be the processed image; f(a) be the image outputted after color restoration. Then, the loss function can be expressed as:
$\operatorname{Loss}_{MAE}=\|B-g(a)\|_{1}+\|B-f(a)\|_{1}$ (26)
The two terms in the MAE loss function are of equal importance. They are eventually merged into the total loss for backpropagation.
To minimize the perceptual feature difference between the enhanced image and the real image, this paper introduces a perceptual loss function based on the pretrained VGG16 network. The VGG16 features enhance the visual authenticity of the GLI. Let Ψ_{i}(g(a)), Ψ_{i}(f(a)) and Ψ_{i}(B) be the feature maps of g(a), f(a), and B, respectively; X_{i}, Y_{i}, and Z_{i} be the number of channels, height, and width of the feature map, respectively. Then, the perceptual loss function can be expressed as:
$\operatorname{Loss}_{NET}=\frac{1}{X_{i} Y_{i} Z_{i}}\left(\left\|\Psi_{i}(g(a))-\Psi_{i}(B)\right\|_{2}+\left\|\Psi_{i}(f(a))-\Psi_{i}(B)\right\|_{2}\right)$ (27)
To better restore the color details and design structure of GLIs, this paper uses the gradient loss functions in the horizontal and vertical directions to train the constructed neural network. The gradient losses in the two directions GR_{r} and GR_{f} can be respectively calculated by:
$G R_{r}=\left\|g_{r}(a)-B_{r}\right\|_{2}$ (28)
$G R_{f}=\left\|g_{f}(a)-B_{f}\right\|_{2}$ (29)
Let ϕ be the adjustment parameter of the loss function. Then, the total loss of the color restoration and enhancement neural network for GLIs can be given by:
$\operatorname{Loss}_{T}=\operatorname{Loss}_{MAE}+\phi \operatorname{Loss}_{NET}+G R_{r}+G R_{f}$ (30)
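The combined objective of Eqs. (26)-(30) can be sketched as follows. This is an illustration only: the perceptual term of Eq. (27) is stood in for by an L2 distance on raw pixels instead of VGG16 feature maps, and the gradient terms use simple finite differences.

```python
import numpy as np

def total_loss(b, g_a, f_a, phi=0.5):
    """Sketch of Eqs. (26)-(30).

    b    -- clear reference GLI
    g_a  -- enhanced output g(a)
    f_a  -- color-restored intermediate output f(a)
    phi  -- adjustment weight of Eq. (30) (value here is arbitrary)
    """
    # Eq. (26): MAE terms on both outputs
    loss_mae = np.abs(b - g_a).sum() + np.abs(b - f_a).sum()
    # Stand-in for Eq. (27): L2 distance on raw pixels, not VGG16 features
    n = b.size
    loss_net = (np.linalg.norm((g_a - b).ravel()) +
                np.linalg.norm((f_a - b).ravel())) / n
    # Eqs. (28)-(29): horizontal / vertical gradient losses
    gr_r = np.linalg.norm((np.diff(g_a, axis=1) - np.diff(b, axis=1)).ravel())
    gr_f = np.linalg.norm((np.diff(g_a, axis=0) - np.diff(b, axis=0)).ravel())
    # Eq. (30): total loss
    return loss_mae + phi * loss_net + gr_r + gr_f
```

As expected of a training objective, the loss is zero when both outputs equal the reference and strictly positive otherwise.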
To verify the effectiveness of the proposed GLI color restoration algorithm, the performance of the modified dynamic threshold algorithm with and without ψ was quantified and analyzed. Table 1 presents the results of color cast detection based on the equivalent circle, and the evaluation results of GLI color quality.
The results show that the modified dynamic threshold algorithm with ψ outperformed that without ψ in GLI color restoration, as evidenced by the relatively good restored color quality of all three types of GLIs (landscape architecture, landscape plants, and landscape water system). Despite a few color offsets in some images, the modified dynamic threshold algorithm with ψ performed excellently in the overall color restoration of GLIs. After enlarging the restored images, it can be found that details of sculptures and artificial landscapes were better restored, and the color of water surfaces involving reflection/deflection was expressed accurately without overexposure, once the modified dynamic threshold algorithm was coupled with ψ.
Table 1. Quantified evaluation of GLI color restoration

| Comparative images | Landscape architecture | | | Landscape plants | | | Landscape water system | | |
|---|---|---|---|---|---|---|---|---|---|
| | Original image | Without ψ | With ψ | Original image | Without ψ | With ψ | Original image | Without ψ | With ψ |
| Test results | 1.253 | 1.362 | 1.045 | 4.263 | 0.8526 | 1.074 | 8.256 | 1.627 | 1.362 |
| Evaluation results | 0.3628 | 0.5284 | 0.6281 | 0.4812 | 0.4628 | 0.5326 | 0.4158 | 0.4785 | 0.5529 |
The histogram equalization simulation was carried out on GLIs captured in a low illumination environment. Figure 4 displays the histogram changes of the images before and after equalization.
(1) Before equalization
(2) After equalization
Figure 4. Histograms before and after introducing MAE loss and perceptual loss
Based on the principle of the color restoration and enhancement model and Figure 4, the CNN-based image enhancement of GLIs can be regarded as an approximate calculation process from a continuous state to a discrete state. As shown in Figure 4, the quantization error was small after introducing the MAE loss and perceptual loss, so different input gray values were still mapped to distinguishable output gray levels. This effectively prevents two problems of traditional image enhancement methods: the merging of grayscales and the loss of color information in GLIs. It can be intuitively seen from Figure 4 that, before the MAE loss and perceptual loss were introduced, the pixels in the enhanced image were discretely distributed, and the histogram failed to retain the shape of the original image; after the two losses were introduced, the grayscale was uniformly distributed across the interval, and the histogram matched the shape of the original image.
The restoration of the color information of the output GLI is greatly affected by the gain and offset of different values. By changing the size of the affine transformation parameters, it is possible to control the degree of color information restoration. Under the same experimental environment, a series of simulations were conducted with our algorithm, singlescale Retinex, and multiscale Retinex. The color restoration results of these algorithms are compared in Figure 5.
Figure 5. Experimental results of color recovery of different algorithms
Figure 5 clearly shows that the GLI processed by our algorithm was much clearer, better in quality, and higher in brightness and contrast than those handled by single-scale Retinex or multi-scale Retinex. By contrast, the image processed by single-scale Retinex had a low contrast, and that processed by multi-scale Retinex was too white. The two contrastive algorithms failed to output realistic enhanced images that conform to human visual perception.
Tables 2-4 compare the quality of the GLIs enhanced by different algorithms. All three algorithms managed to enhance the color effect of the original GLIs. However, our algorithm was superior to the two traditional color enhancement algorithms in terms of discrete entropy, clarity, and contrast, and effectively improved the readability of low illumination GLIs.
Table 2. Discrete entropy metric of each algorithm

| Objects | Lawns | Trees | Pools | Roads | Rockeries |
|---|---|---|---|---|---|
| Original image | 5.326 | 5.625 | 5.124 | 5.826 | 5.392 |
| Single-scale Retinex | 5.842 | 7.362 | 6.495 | 7.152 | 7.025 |
| Multi-scale Retinex | 6.114 | 7.285 | 6.295 | 7.025 | 7.952 |
| Our algorithm | 6.174 | 7.025 | 6.385 | 6.119 | 6.258 |

| Objects | Flowers | Sculptures | Benches | Fences | Landscape stones |
|---|---|---|---|---|---|
| Original image | 5.482 | 5.112 | 5.386 | 5.924 | 5.628 |
| Single-scale Retinex | 7.114 | 7.258 | 7.062 | 6.258 | 7.385 |
| Multi-scale Retinex | 7.415 | 7.228 | 6.958 | 7.151 | 7.335 |
| Our algorithm | 6.745 | 6.185 | 7.284 | 5.296 | 7.118 |
Table 3. Clarity metric of each algorithm

| Objects | Lawns | Trees | Pools | Roads | Rockeries |
|---|---|---|---|---|---|
| Original image | 0.316 | 3.048 | 0.859 | 0.527 | 0.524 |
| Single-scale Retinex | 1.328 | 6.582 | 2.748 | 2.563 | 2.162 |
| Multi-scale Retinex | 1.428 | 7.259 | 2.748 | 2.625 | 2.147 |
| Our algorithm | 1.002 | 6.285 | 2.115 | 1.172 | 0.851 |

| Objects | Flowers | Sculptures | Benches | Fences | Landscape stones |
|---|---|---|---|---|---|
| Original image | 0.263 | 0.485 | 0.274 | 0.857 | 1.864 |
| Single-scale Retinex | 0.748 | 1.625 | 1.147 | 3.265 | 5.185 |
| Multi-scale Retinex | 0.859 | 1.285 | 1.759 | 2.185 | 7.629 |
| Our algorithm | 0.952 | 1.425 | 0.852 | 3.625 | 5.362 |
Table 4. Contrast metric of each algorithm

| Objects | Lawns | Trees | Pools | Roads | Rockeries |
|---|---|---|---|---|---|
| Original image | 0.041 | 0.362 | 0.148 | 0.015 | 0.026 |
| Single-scale Retinex | 0.057 | 0.085 | 0.485 | 0.152 | 0.263 |
| Multi-scale Retinex | 0.396 | 1.258 | 0.248 | 0.152 | 0.263 |
| Our algorithm | 0.544 | 0.984 | 0.557 | 0.421 | 0.442 |

| Objects | Flowers | Sculptures | Benches | Fences | Landscape stones |
|---|---|---|---|---|---|
| Original image | 0.014 | 0.025 | 0.041 | 0.074 | 0.085 |
| Single-scale Retinex | 0.525 | 0.142 | 0.824 | 0.148 | 0.362 |
| Multi-scale Retinex | 0.048 | 0.157 | 0.748 | 0.596 | 0.724 |
| Our algorithm | 0.413 | 0.154 | 0.799 | 0.642 | 0.821 |
This paper designs a novel method for the color enhancement of low illumination GLIs. To achieve the ideal effects of color recovery and clarity enhancement, the authors detailed how to restore the color of GLIs based on a modified dynamic threshold, and constructed a CNN for restoring and enhancing the color of low illumination GLIs, which overcomes the difficulty of manual feature design in landscape design renderings. Through experiments, the performance of the modified dynamic threshold algorithm with and without ψ was quantified and analyzed. According to the results of color cast detection based on the equivalent circle, and the evaluation results of GLI color quality, the modified dynamic threshold algorithm with ψ outperformed that without ψ in GLI color restoration. In addition, the histogram changes of the GLIs before and after introducing the MAE loss and perceptual loss were recorded. The results show that, after the two losses were introduced, the grayscale was uniformly distributed across the interval, and the histogram matched the shape of the original image. Finally, the color restoration results of different algorithms were compared. The comparison further confirms that our algorithm was superior to the two traditional color enhancement algorithms in terms of discrete entropy, clarity, and contrast, and effectively improved the readability of low illumination GLIs.
2021 Philosophy and Social Science planning project of Henan Province, Research on the strategy of screening, protection and Utilization of rural red cultural resources in Central Plains, Grant No.: 2021BYS051; 2021 Philosophy and Social Science project of Henan Province, Research on the protection of traditional Village landscape features in Henan province, Grant No.: 2021BYS048; Research on color identification system and planning path of traditional villages in central China under the background of rural revitalization strategy, Special Application for Key Research and Development and Promotion of Henan Province, Grant No.: 212400410381; Research on Strategies for Memory Protection and Inheritance of Industrial and Trade Traditional Villages in Henan from the Perspective of Village Culture, Grant No.: 2021ZZJH453; Research on Spatial Satisfaction Evaluation and Renewal Protection Strategy for Inheritance of Traditional Village Context in Southern Henan province, Grant No.: 2021ZDJh422; Research on promoting the characteristic development of Henan cultural industry with social innovation, Subject of Henan social science planning, Grant No.: 2018BYS022; Research on Spatial Feature Improvement design of Traditional Village Landscape in Southern Henan Under Protection Early Warning Strategy, Grant No.: 2020ZZJH519.
[1] Wangda, P., Hussin, Y.A., Bronsveld, M.C., Karna, Y.K. (2019). Species stratification and upscaling of forest carbon estimates to landscape scale using GeoEye-1 image and lidar data in sub-tropical forests of Nepal. International Journal of Remote Sensing, 40(20): 7941-7965. https://doi.org/10.1080/01431161.2019.1607981
[2] Li, Z., Han, X., Wang, L.Y., Zhu, T.Y., Yuan, F.T. (2020). Feature extraction and image retrieval of landscape images based on image processing. Traitement du Signal, 37(6): 1009-1018. https://doi.org/10.18280/ts.370613
[3] Gudmann, A., Csikós, N., Szilassi, P., Mucsi, L. (2020). Improvement in satellite image-based land cover classification with landscape metrics. Remote Sensing, 12(21): 3580. https://doi.org/10.3390/rs12213580
[4] Snavely, R.A., Uyeda, K.A., Stow, D.A., O’Leary, J.F., Lambert, J. (2019). Mapping vegetation community types in a highly disturbed landscape: integrating hierarchical object-based image analysis with lidar-derived canopy height data. International Journal of Remote Sensing, 40(11): 4384-4400. https://doi.org/10.1080/01431161.2018.1562588
[5] Endo, Y., Kanamori, Y., Kuriyama, S. (2019). Animating landscape: self-supervised learning of decoupled motion and appearance for single-image video synthesis. arXiv preprint arXiv:1910.07192.
[6] Trimble, J., Berezovsky, J. (2021). Barkhausen imaging: A magneto-optical approach to mapping the pinning landscape in soft ferromagnetic films. Journal of Magnetism and Magnetic Materials, 523: 167585. https://doi.org/10.1016/j.jmmm.2020.167585
[7] Park, C., Lee, I.K. (2020). Emotional landscape image generation using generative adversarial networks. In Proceedings of the Asian Conference on Computer Vision.
[8] Popelková, R., Mulková, M. (2016). Multitemporal aerial image analysis for the monitoring of the processes in the landscape affected by deep coal mining. European Journal of Remote Sensing, 49(1): 973-1009. https://doi.org/10.5721/EuJRS20164951
[9] Lu, X., Zhang, J., Hong, J., Wang, L. (2016). Analysis of wetland landscape evaluation and its driving factors in Yellow River Delta based on remote sensing image. Transactions of the Chinese Society of Agricultural Engineering, 32(1): 214-223. https://doi.org/10.11975/j.issn.10026819.2016.z1.030
[10] Lu, S., Zhang, Q., Liu, Y., Liu, L., Zhu, Q., Jing, K. (2020). Retrieval of multiple spatiotemporally correlated images on tourist attractions based on image processing. Traitement du Signal, 37(5): 847-854. https://doi.org/10.18280/ts.370518
[11] Kim, D., Noh, Y. (2021). An aerosol extinction coefficient retrieval method and characteristics analysis of landscape images. Sensors, 21(21): 7282. https://doi.org/10.3390/s21217282
[12] Yin, S. (2014). Explore the use of computer-aided design in the landscape renderings. Applied Mechanics and Materials, 687-691: 1166-1169. https://doi.org/10.4028/www.scientific.net/AMM.687-691.1166
[13] Li, D. (2020). Explore the application of computer modeling and rendering technology in rural landscape color design. Journal of Physics: Conference Series, 1578(1): 012024. https://doi.org/10.1088/1742-6596/1578/1/012024
[14] Kim, S.Y., Lee, K. (2006). Design and implementation of mobile 3D city landscape authoring/rendering system. In Innovations in 3D Geo Information Systems, pp. 439-446. https://doi.org/10.1007/978-3-540-36998-1_35
[15] Omodani, M., Ohta, M., Tanaka, T., Hoshino, Y. (1993). High-quality photographic color image reproduction using ion flow printing and its application to color facsimile. The Journal of Imaging Science and Technology, 37(1): 37-42.
[16] Zhao, Y. (2021). Fast image blending for high-quality panoramic images on mobile phones. Multimedia Tools and Applications, 80(1): 499-516. https://doi.org/10.1007/s11042-020-09717-5
[17] Wang, B., He, J., Yu, L., Xia, G.S., Yang, W. (2020). Event enhanced high-quality image recovery. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIII, pp. 155-171. https://doi.org/10.1007/978-3-030-58601-0_10
[18] Ruiz-Santaquiteria, J., Espinosa-Aranda, J.L., Deniz, O., Sanchez, C., Borrego-Ramos, M., Blanco, S. (2018). Low-cost oblique illumination: An image quality assessment. Journal of Biomedical Optics, 23(1): 016001. https://doi.org/10.1117/1.JBO.23.1.016001
[19] Shi, Z., Guo, B., Zhao, M., Zhang, C. (2018). Nighttime low illumination image enhancement with single image using bright/dark channel prior. EURASIP Journal on Image and Video Processing, 2018(1): 1-15. https://doi.org/10.1186/s13640-018-0251-4
[20] Song, M.Z., Qu, H.S., Zhang, G.X., Tao, S.P., Jin, G. (2018). Low-illumination image denoising method for wide-area search of nighttime sea surface. Optoelectronics Letters, 14(3): 226-231. https://doi.org/10.1007/s11801-018-7268-x
[21] Song, M.Z., Qu, H.S., Li, L.M., Zhang, G.X., Jin, G. (2017). Pooling strategy for quality evaluation of full-reference model low illumination image. Guangxue Jingmi Gongcheng/Optics and Precision Engineering, 25: 160-167. https://doi.org/10.3788/OPE.20172514.0160
[22] Zhang, S.T., Ning, D.Q., Wang, L. (2015). Real-time image intensification in tobacco sorting system under low illumination. Tobacco Science and Technology, 48(1): 96-100. https://doi.org/10.16135/j.issn10020861.20150117
[23] Yao, L., Kang, Z.M. (2020). Research on urban landscape image enhancement under the background of big data visualization. In 2020 International Conference on Big Data and Social Sciences (ICBDSS), pp. 29-32. https://doi.org/10.1109/ICBDSS51270.2020.00014
[24] Ma, S., Ma, H., Xu, Y., Li, S., Lv, C., Zhu, M. (2018). A low-light sensor image enhancement algorithm based on HSI color model. Sensors, 18(10): 3583. https://doi.org/10.3390/s18103583
[25] Yamashita, H., Sugimura, D., Hamamoto, T. (2017). Low-light color image enhancement via iterative noise reduction using RGB/NIR sensor. Journal of Electronic Imaging, 26(4): 043017. https://doi.org/10.1117/1.JEI.26.4.043017
[26] Yamashita, H., Sugimura, D., Hamamoto, T. (2015). Enhancing low-light color images using an RGB-NIR single sensor. In 2015 Visual Communications and Image Processing (VCIP), pp. 1-4. https://doi.org/10.1109/VCIP.2015.7457844
[27] Jung, Y.J. (2017). Enhancement of low light level images using color-plus-mono dual camera. Optics Express, 25(10): 12029-12051. https://doi.org/10.1364/OE.25.012029