No-Reference Quality Assessment of Blurred Images by Combining Hybrid Metrics

Basma Ahmed, Osama A. Omer, Amal Rashed, Mohamed Abdel-Nasser*

Faculty of Computers and Information, South Valley University, Qena 83523, Egypt

Electrical Engineering Department, Aswan University, Aswan 81542, Egypt

Corresponding Author Email: mohamed.abdelnasser@aswu.edu.eg

Pages: 2069-2080 | DOI: https://doi.org/10.18280/ts.410435

Received: 14 November 2023 | Revised: 10 February 2024 | Accepted: 29 March 2024 | Available online: 31 August 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

No-reference or blind image quality assessment (NR-IQA) pertains to the challenge of evaluating the visual quality of an image in the absence of a reference image. NR-IQA is necessary for many applications, such as medical imaging and surveillance. Consequently, there is a need to devise a novel metric independent of the pristine reference image. The performance of current NR-IQA metrics may be satisfactory for a specific type of blurring but inadequate for other types. This paper focuses on blurred images and presents a novel NR-IQA metric based on restoration schemes and hybrid metrics. Specifically, we utilize a blind restoration technique to address the issue of image blurring. This restoration technique includes three steps: 1) estimating a point spread function (PSF) from the input blurred image, 2) applying a Wiener filter to the blurred image to obtain a deblurred image, and 3) convolving the estimated PSF with the deblurred image to produce the re-blurred image, which is used as a reference image. Furthermore, we utilize the gradient magnitude similarity deviation (GMSD), structural similarity index measure (SSIM), and peak signal-to-noise ratio (PSNR) as potent full-reference metrics. These metrics are combined to form a viable strategy to enhance the system's performance. The metric under consideration can promptly evaluate an image's quality without necessitating prior learning or training. Compared to existing IQA models, the proposed metric requires no reference, prior learning, or training procedures, making it more convenient and time-efficient. The experimental findings obtained from the analysis of five IQA databases demonstrate that the proposed metric performs on par with the current leading NR-IQA metrics. The comparative results demonstrate that the proposed method outperforms existing NR-IQA methods such as SSEQ, ENIQA, BMPRI, and BLIINDS-II, with Spearman's rank-ordered correlation coefficient (SROCC) values higher than 0.87, 0.78, and 0.88 for Gaussian, motion, and out-of-focus blur, respectively.

Keywords: 

no-reference image quality assessment (NR-IQA), reblurring, gradient magnitude similarity deviation (GMSD), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), combined metrics, point spread function (PSF)

1. Introduction

Digital images carry a substantial amount of valuable information and have found utility in diverse everyday applications, including object recognition, image steganography, facial emotion recognition, and image retrieval. Throughout the various stages of capture, enhancement, and other image processing procedures, some decline in quality is inevitable, leading to the loss of valuable information within the image. Consequently, the assessment of image quality plays a crucial role in determining the usability of images from the user's side [1, 2].

Images can experience different types of distortion when they are transmitted and processed. Hence, it is essential to evaluate or address their quality prior to their utilization. Image quality assessment is utilized in numerous applications, such as acquisition, enhancement, compression, and restoration [3].

IQA techniques can be classified as full-reference (FR), reduced-reference (RR), and no-reference (NR) [4, 5]. NR IQA pertains to the automated evaluation of image quality through an algorithm designed to predict the quality of a distorted image based solely on the information contained within that image [6, 7]. FR IQA requires both a distorted image and a pristine reference image in order to evaluate the quality of the distorted image. In RR IQA, only partial information extracted from the reference image is available alongside the distorted image, rather than the actual reference image itself [8].

Currently, digital images have become an indispensable component of various domains, encompassing scientific endeavors as well as social networking platforms. In the realm of digital imagery, the occurrence of image blurring is a prevalent phenomenon. Blurring is a primary contributor to image degradation, resulting in a reduction in image quality. Therefore, this study focuses on blurry image assessment by introducing a new hybrid NR-IQA metric.

Digital images can exhibit three distinct types of blur effects, namely motion blur, out-of-focus blur, and Gaussian blur [9]. Blur arises from factors such as atmospheric disturbances and improper camera configuration. In addition to inducing blurring, the presence of noise significantly compromises the quality of the captured image. Several primary factors can contribute to a blurry image, including errors in the image capture process, lens defocusing, atmospheric problems, and low intensity during camera exposure. The human visual system is highly sensitive to this phenomenon; nevertheless, a comprehensive understanding of the underlying processing mechanism is still lacking [10, 11]. Hence, the development of metrics for evaluating image blurring is a challenging task.

Over recent years, there has been a focus on the development of blind image quality metrics (IQMs), including those based on spectral kurtosis [12, 13], the blind/referenceless image spatial quality evaluator (BRISQUE) [14], and the natural image quality evaluator (NIQE) [15]. Some IQMs have employed the statistical characteristics of the deblurred image, while other researchers have estimated image quality by incorporating the human visual system (HVS) [16-19].

It should be noted that each quality metric may produce satisfactory performance when applied to a specific blurring type, such as Gaussian blur, while potentially yielding unsatisfactory results when applied to other types of blurring, such as motion blur and out-of-focus blur. Recent studies [20-23] have demonstrated that the combination of different IQA metrics can help enhance the full-reference and no-reference image quality assessment results. However, the few existing combined NR-IQA metrics require a training phase that may vary from one dataset to another. Most existing combined NR-IQA metrics show limited performance, and there is still significant room for improvement.

In this paper, a novel NR-IQA metric is proposed to enhance the results of existing NR-IQA methods for blurred images. The proposed metric comprises three steps. First, a pseudo-reference image (PRI) is generated from the input blurry image using a re-blur algorithm. Second, quality scores of robust full-reference metrics are computed. Third, the quality scores for the input blurred image are integrated to produce the overall quality score. A non-linear mapping function employed in the studies [21, 23] is used in this study to align the quality scores of the proposed method.

The main contributions of this study are listed below:

  • A novel hybrid metric for assessing the quality of blurred images is proposed. The proposed method does not necessitate a training phase.
  • Combinations of various individual IQA metrics, including the structural similarity (SSIM) [24], Gradient Magnitude Similarity Deviation (GMSD) [25], and peak signal-to-noise ratio (PSNR) [26], and the use of two fusion techniques are studied and presented.
  • Comprehensive evaluations of the proposed method and comparisons with state-of-the-art methods are provided using five IQA databases: SCID [27], SIQAD [28], LIVE [29], TID2013 [30], and KADID-10k [31]. Additionally, three different distortion types (out-of-focus blur, motion blur, and Gaussian blur) are considered in the study.

The remainder of this article includes four sections: Section 2 discusses the related literature, Section 3 presents the proposed NR-IQA method, Section 4 provides the experimental results, and Section 5 presents the conclusions.

2. Related Work

NR-IQA has garnered considerable attention in the past few decades. However, due to the absence of a reference image, NR-IQA algorithms must make assumptions about the distortions present in each input image. For instance, Moorthy and Bovik [32] proposed an NR-IQA algorithm known as the DIIVINE index. The DIIVINE algorithm relies on a two-stage framework involving the identification of distortions and the evaluation of their quality. Although this method has achieved good performance on some IQA datasets, it yields limited results on unseen distortions. Rajevenceltha and Gaidhane [33] proposed an NR-IQA method based on texture and structural features extracted from input images. Texture features are extracted using a local binary pattern (LBP), while structural features are extracted using hyper-smoothing LBP (H-LBP) and the Laplacian of H-LBP (LH-LBP). These extracted features are then fed into a support vector regression (SVR) algorithm to estimate image quality. While the method demonstrates promising results, it requires an extensive training phase and time-consuming feature hand-crafting, which may vary from one dataset to another. In addition, Min et al. [34] proposed the blind PRI-based (BPRI) metric, which utilizes a pseudo-reference image. Although the BPRI method demonstrates good performance, it has quite high computational complexity.

Furthermore, the BRISQUE method proposed by Mittal et al. [14] utilizes scene statistics to train a SVR algorithm for predicting perceptual quality. Although BRISQUE shows performance improvements over state-of-the-art methods, it remains unclear how well the BRISQUE model performs on images with complex distortions.

In turn, several deep learning-based NR-IQA methods have been proposed in the last years. Zhang et al. [35] proposed a deep bilinear convolutional neural network (CNN) for NR-IQA. This model comprises two distinct CNN streams, each tailored to address specific distortion scenarios individually. However, one limitation of this method is its separate handling of synthetic and authentic distortions through fine-tuning the NR-IQA CNN model on either synthetic or authentic datasets. Consequently, the method's performance may vary depending on the diversity and complexity of the datasets used for fine-tuning, potentially limiting its generalizability across different image databases and real-world applications. Liu et al. [36] proposed the RankIQA method, a NR-IQA approach that learns from ranking. A Siamese network is trained on synthetic distorted images to rank them. However, it does not consider modeling the type of distortion.

Although various deep learning-based methods [37-43] have been proposed to improve the results of NR-IQA, such methods face challenges with small-sized datasets and resource-limited computing devices. In contrast, this study focuses on presenting an NR-IQA method that can work with small-sized datasets and resource-limited computing devices. The proposed method is based on restoration schemes and the combination of simple hybrid metrics.

Different attempts have been made in the literature to combine image quality metrics for both full-reference and no-reference image assessment. For instance, Bouida et al. [44] proposed a combined FR-IQA method by fusing structural quality, texture-based quality, and edge-based quality. Specifically, they combined SSIM, image texture quality (ITQ), and EdgeIQA metrics [45]. Ieremeiev et al. [20] proposed a FR-IQA method for remote sensing images by combining five IQMs using alpha-trimmed mean.

Few combined metrics have been presented in the literature for NR-IQA. For instance, Rubel et al. [21] introduced a no-reference neural network-based combined metric for remote sensing images. They employed a Lasso algorithm to select the best combinations of IQMs, and the resulting combined metric correlates highly with subjective quality scores. One limitation of the study [21] is that the effect of various distortion types on the performance of the combined metric has not been investigated.

Indeed, the development of stable and robust no-reference IQA metrics remains challenging and essential for numerous tasks. In an effort to enhance the results of no-reference IQA, a new combined NR-IQA metric is introduced for blurred images. Specifically, combinations of various efficient IQMs (SSIM, GMSD, PSNR) computed from PRIs, along with the use of two fusion techniques, are presented. One advantage of the proposed method is that it does not require a training phase.

3. Methodology

Figure 1 depicts the steps of the proposed NR-IQA method for blurred image quality assessment: 1) generating a pseudo-reference image (PRI) using a re-blur algorithm, 2) calculating the quality scores of hybrid yet robust full-reference metrics (here, SSIM, GMSD, and PSNR), and 3) computing the final quality score (qs) for the input blurred image by integrating the quality scores of the individual metrics (i.e., q1, q2, and q3). It should be noted that each selected metric has a certain advantage: perceptual differences are captured by SSIM, gradient information preservation is emphasized by GMSD, and fidelity is fundamentally measured by PSNR. The proposed hybrid NR-IQA metric maximizes the benefits of each individual metric by integrating them. Such a combination makes it possible to evaluate image quality more thoroughly, taking into account various factors that influence both subjective and objective evaluations.

Figure 1. The proposed NR-IQA method

In this section, we describe the proposed metric. First, we illustrate the re-blur process, which provides the PRI (twice-blurred image), and then revisit the full-reference SSIM, GMSD, and PSNR. In addition, we explain how the individual quality scores are combined to generate the final quality score used to assess the quality of blurred images.

3.1 Re-blur algorithm

A 're-blurred' image is created by purposely blurring the test image. The following equation describes the image degradation model:

$g=H * f+n$            (1)

where, g denotes the blurry image, n is the added noise, * represents the convolution operator, and H is the distortion factor, known as the point-spread function (PSF) [46]. In the spatial domain, the PSF characterizes how the optical system spreads (blurs) a point of light [3, 47].

For a two-dimensional input image f(x, y), the model becomes:

$g(x, y)=H(x, y) * f(x, y)+n(x, y)$             (2)

where, g(x, y) is the degraded image, * represents the convolution operator, H(x, y) indicates the distortion factor, known as PSF, and n(x, y) is the added noise [3, 46, 47]. Digital image recovery may be seen as a process in which we try to approximate f(x, y).
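To make the degradation model concrete, the following minimal Python sketch simulates Eq. (2) with a linear motion blur; the `motion_psf` helper and its parameters are illustrative assumptions rather than the exact kernel construction used in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def motion_psf(length, angle_deg, size=31):
    """Illustrative linear-motion PSF: a normalized line of `length` pixels
    at `angle_deg` degrees, rasterized into a size x size kernel."""
    psf = np.zeros((size, size))
    c = (size - 1) / 2.0
    t = np.deg2rad(angle_deg)
    for r in np.linspace(-(length - 1) / 2.0, (length - 1) / 2.0, 4 * length):
        x = int(round(c + r * np.cos(t)))
        y = int(round(c - r * np.sin(t)))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] = 1.0
    return psf / psf.sum()

def degrade(f, psf, noise_sigma=0.0):
    """Image degradation model of Eq. (2): g = H * f + n."""
    g = convolve(np.asarray(f, float), psf, mode='reflect')
    if noise_sigma > 0:
        g += np.random.normal(0.0, noise_sigma, g.shape)
    return g
```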

In this study, the parameters of the PSF, specifically the length and angle, are estimated through an initial, relatively accurate assessment of the angle using analysis in the Cepstrum domain, following the method presented by Kumar [48]. For a given angle, the blur length in the image is determined using the method presented by Chang et al. [3].
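As a rough illustration of the Cepstrum-domain analysis, the sketch below computes the 2-D real cepstrum of a blurred image. Locating the dominant negative peaks and converting their orientation and spacing into the blur angle and length follows Kumar [48] and Chang et al. [3] and is omitted here.

```python
import numpy as np

def real_cepstrum_2d(g, eps=1e-8):
    """2-D real cepstrum: inverse FFT of the log magnitude spectrum.
    For linear motion blur, the cepstrum exhibits pronounced negative peaks
    whose orientation gives the blur angle and whose distance from the
    origin relates to the blur length."""
    g = np.asarray(g, float)
    spectrum = np.abs(np.fft.fft2(g))
    return np.real(np.fft.ifft2(np.log(spectrum + eps)))
```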

The proposed algorithm involves the acquisition of a blurred image through a blurring kernel (i.e., PSF) convolved with the original image. The Wiener filter is applied to the blurred image in order to recover the image; its frequency response is given by the following equation:

$G(u, v)=\frac{H^*(u, v)}{|H(u, v)|^2+N S R}$          (3)

where, NSR is the noise-to-signal power ratio, and H refers to the blurring filter. When the estimated PSF resembles the real PSF (h), it can reproduce an identical blur in the re-blurred image, and the restoration filter will produce reduced ringing and noise.

The blurred image is subsequently processed through the Wiener filter, which aims to remove the blur and generate an approximation of the original image. Following this, the estimated PSF is convolved once more with the deblurred image to restore the blur level, resulting in the re-blurred image denoted as "g", or the PRI, as depicted in Figure 2.

Figure 2. The steps of the re-blur algorithm used to generate the PRI
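The following sketch puts Eq. (3) and the re-blur step together: a frequency-domain Wiener deconvolution followed by re-convolution with the estimated PSF to produce the PRI. The default NSR value and the zero-padding of the PSF spectrum are assumptions of this illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def wiener_deblur(g, psf, nsr=0.01):
    """Frequency-domain Wiener filter of Eq. (3):
    F_hat(u,v) = conj(H(u,v)) / (|H(u,v)|^2 + NSR) * G(u,v)."""
    g = np.asarray(g, float)
    H = np.fft.fft2(psf, s=g.shape)  # zero-padded PSF spectrum (assumption)
    G = np.fft.fft2(g)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F_hat))

def pseudo_reference(g, psf, nsr=0.01):
    """Deblur with the estimated PSF, then re-blur to obtain the PRI."""
    f_hat = wiener_deblur(g, psf, nsr)
    return fftconvolve(f_hat, psf, mode='same')  # the re-blurred image (PRI)
```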

An example illustrating the generation of the PRI is shown in Figure 3, where the I01 image from the KADID-10k dataset [31] is used as the input image (Figure 3(a)). Figure 3(b) shows the I01 image blurred with a motion PSF at 46 degrees. Figures 3(c)-(f) show the deblurred images obtained with PSF angles of 25, 36, 46, and 57 degrees, respectively. Figure 3(e) demonstrates that the deblurred image closely resembles the original image, with tolerable ringing and noise levels. The image deblurred with a blur kernel resembling the true PSF is the one that exhibits a comparable level of blurring when subjected to re-blurring.

Figure 3. Example of the image deblurring process used to generate the PRI

3.2 Combined NR-IQA image quality metrics

As depicted in Figure 1, we employ SSIM, GMSD, and PSNR in this study to compute the quality scores q1, q2, and q3, and then combine them into one quality score value qs. Many metrics vary from 0 to 1, but not all; scores whose scales do not share the same bounds are normalized first. This study uses the mean fusion function of the three metrics to obtain the final quality score qs.

It should be noted that each of the three metrics used in this study has its own advantages: SSIM is designed to mimic human visual perception. It considers structure, contrast, and luminance, making it a better indicator of perceived image quality. GMSD is particularly sensitive to changes in the gradient magnitude of the image. It is good at capturing blurriness and other distortions that affect image edges. Since PSNR is based on a simple mathematical formula, it can be calculated quickly and with little processing complexity. In some circumstances, this simplicity may be helpful.

Below, we introduce the mathematical formulation of the SSIM, GMSD, and PSNR metrics and explain the combination of their quality scores to generate the final quality score (qs) to evaluate the quality of blurred images.

3.2.1 Structure similarity index method (SSIM)

The structural similarity index (SSIM) is a perception-based model that has been widely studied and represents a shift in IQA from the pixel-based phase to the structure-based phase [24, 49]. The SSIM quality assessment index depends on calculating three terms, namely the structural, contrast, and luminance terms [50]. The method models the perceptual degradation of images caused by changes in structural information, and it also accounts for other significant perceptual factors, including contrast masking and luminance masking. SSIM estimates the perceived quality of images and videos by measuring the similarity between the original and the recovered images. The index is defined through these three terms:

${SSIM}(x, y)=[l(x, y)]^\alpha \cdot[c(x, y)]^\beta \cdot[s(x, y)]^\gamma$            (4)

In this context, l represents luminance, which is utilized for comparing the brightness levels between two images. c denotes contrast, serving to distinguish the ranges between the brightest and darkest regions of the two images. s stands for structure, employed to compare the local luminance patterns between the images, revealing their similarities and dissimilarities. The positive constants α, β, and γ are also introduced [13]. The individual representations of luminance, contrast, and structure for an image are expressed as follows:

$l(x, y)=\frac{\left(2 \mu_x \mu_y+C_1\right)}{\left(\mu_x^2+\mu_y^2+C_1\right)}$       (5)

$c(x, y)=\frac{\left(2 \sigma_x \sigma_y+C_2\right)}{\left(\sigma_x^2+\sigma_y^2+C_2\right)}$           (6)

$s(x, y)=\frac{\left(\sigma_{x y}+C_3\right)}{\left(\sigma_x \sigma_y+C_3\right)}$           (7)

Here, $\mu_x$ and $\mu_y$ represent the local means, $\sigma_x$ and $\sigma_y$ denote the standard deviations, and $\sigma_{x y}$ represents the cross-covariance of images x and y, respectively. If α=β=γ=1 and $C_3=C_2/2$, combining Eq. (5), Eq. (6), and Eq. (7) simplifies the index to:

${SSIM}(x, y)=\frac{\left(2 \mu_x \mu_y+C_1\right)\left(2 \sigma_{x y}+C_2\right)}{\left(\mu_x^2+\mu_y^2+C_1\right)\left(\sigma_x^2+\sigma_y^2+C_2\right)}$          (8)

From Eq. (8), SSIM is on a normalized scale (values between 0 and 1).
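For illustration, a single-window version of Eq. (8) can be written as below. Note that practical SSIM implementations average the index over local (typically Gaussian-weighted) windows rather than computing one global value; the constants k1 = 0.01, k2 = 0.03, and the dynamic range L = 255 are the common defaults and are assumed here.

```python
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM following Eq. (8); x and y are grayscale arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    sxy = ((x - mx) * (y - my)).mean()  # cross-covariance
    return ((2 * mx * my + c1) * (2 * sxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```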

3.2.2 Gradient magnitude similarity deviation (GMSD)

GMSD, a method influenced by SSIM, utilizes the concept of structure similarity to evaluate digital image visual quality by measuring the similarity of their gradient magnitudes [25]. The gradient magnitudes of the reference image r and the distorted image d, denoted mr and md, are determined as follows:

$m_r=\sqrt{\left(r * h_x\right)^2+\left(r * h_y\right)^2}$           (9)

$m_d=\sqrt{\left(d * h_x\right)^2+\left(d * h_y\right)^2}$        (10)

where, the symbol "*" denotes the convolution operation and hx and hy are the Prewitt filters along the horizontal and vertical directions, respectively. GMS is estimated as follows:

${GMS}(r, d)=\frac{2 m_r m_d+c}{m_r^2+m_d^2+c}$         (11)

where, c is a positive constant that provides numerical stability. The lighter the gray level in the GMS map, the higher the similarity and the higher the predicted local quality. The GMSD algorithm uses the standard deviation of the GMS map as the IQA index:

${GMSD}(r, d)={STD}(G M S(r, d))$          (12)

In this context, STD(●) denotes the computation of the standard deviation for the input variable. The GMSD value serves as an indicator of the extent of distortion present in an image. A higher GMSD score corresponds to lower perceived image quality. In the absence of distortion, the GMSD value is 0, showcasing its effectiveness in detecting structural distortions with a minimal computational cost.
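A compact sketch of Eqs. (9)-(12) is given below. The 3x3 Prewitt kernels are standard; the stability constant c = 170 (for pixel values in the 0-255 range) follows the usual GMSD setting and should be treated as an assumption here.

```python
import numpy as np
from scipy.ndimage import convolve

PREWITT_X = np.array([[1.0, 0.0, -1.0]] * 3) / 3.0  # horizontal Prewitt filter
PREWITT_Y = PREWITT_X.T                             # vertical Prewitt filter

def gmsd(r, d, c=170.0):
    """GMSD: standard deviation of the gradient magnitude similarity map."""
    r, d = np.asarray(r, float), np.asarray(d, float)
    mr = np.hypot(convolve(r, PREWITT_X), convolve(r, PREWITT_Y))  # Eq. (9)
    md = np.hypot(convolve(d, PREWITT_X), convolve(d, PREWITT_Y))  # Eq. (10)
    gms = (2.0 * mr * md + c) / (mr ** 2 + md ** 2 + c)            # Eq. (11)
    return gms.std()                                               # Eq. (12)
```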

3.2.3 Peak signal-to-noise ratio (PSNR)

The equation representing PSNR can be written as [26]:

$P S N R=10 \log _{10}\left[\frac{255^2}{M S E}\right]$          (13)

The value of MSE is estimated from the difference between the blurred and re-blurred images as:

$M S E=\frac{1}{M N} \sum_{x=1}^M \sum_{y=1}^N\left[T(x, y)-T^{\prime}(x, y)\right]^2$           (14)

where, $T(x, y)$ and $T^{\prime}(x, y)$ denote the pixel value at position (x, y) of the blurred and reblurred images, respectively.
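Eqs. (13) and (14) translate directly into a few lines; the peak value of 255 assumes 8-bit images.

```python
import numpy as np

def psnr(t, t_prime, peak=255.0):
    """PSNR between the blurred and re-blurred images (Eqs. (13)-(14))."""
    t, t_prime = np.asarray(t, float), np.asarray(t_prime, float)
    mse = np.mean((t - t_prime) ** 2)        # Eq. (14)
    return 10.0 * np.log10(peak ** 2 / mse)  # Eq. (13)
```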

3.3 Combining the quality scores

After computing the quality scores of SSIM, GMSD, and PSNR (i.e., q1, q2, and q3), we calculate the final quality score, qs, of the input blurred image using the mean fusion function:

$q s=\frac{1}{N S} \sum_{i=1}^{N S} q_i$           (15)

where, qi is the quality score of ith quality assessment method (SSIM, GMSD, or PSNR), and NS is the number of quality assessment methods. In this study, we experimentally set NS to 3.
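A minimal sketch of the fusion step is shown below. Since the three scores live on different scales, the normalization choices (inverting GMSD so that higher means better, and rescaling PSNR by an assumed ceiling) are illustrative assumptions rather than the paper's exact normalization.

```python
import numpy as np

def fuse_scores(q_ssim, q_gmsd, q_psnr, psnr_ceiling=100.0):
    """Mean fusion of Eq. (15) with NS = 3."""
    q1 = q_ssim                           # already in [0, 1]
    q2 = 1.0 - min(q_gmsd, 1.0)           # lower GMSD -> higher quality (assumption)
    q3 = min(q_psnr / psnr_ceiling, 1.0)  # rough rescale to [0, 1] (assumption)
    return float(np.mean([q1, q2, q3]))
```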

Following Sheikh et al. [51], we use a nonlinear mapping before the calculation of the evaluation metrics in order to align the quality scores. The following five-parameter logistic function is used for the mapping:

$q s=\beta_1\left(\frac{1}{2}-\frac{1}{1+\exp \left(\beta_2\left(q-\beta_3\right)\right)}\right)+\beta_4 q+\beta_5$,           (16)

where, q and qs stand for the original and mapped quality scores, respectively; $\left\{\beta_j \mid j=1,2, \ldots, 5\right\}$ are five parameters identified through curve fitting. As in prior research, the mapped qs values are used when computing the evaluation metrics.

Notably, the MATLAB function 'nlinfit' is used in this study to estimate the coefficients of the nonlinear mapping function shown in (16).
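In Python, an equivalent fit can be sketched with scipy's curve_fit as a counterpart to nlinfit; the initial parameter guesses p0 below are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic5(q, b1, b2, b3, b4, b5):
    """Five-parameter logistic mapping of Eq. (16)."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (q - b3)))) + b4 * q + b5

def fit_mapping(objective, subjective):
    """Fit the betas so mapped objective scores best match subjective scores."""
    objective = np.asarray(objective, float)
    subjective = np.asarray(subjective, float)
    p0 = [np.ptp(subjective), 0.1, np.mean(objective), 0.1, np.mean(subjective)]
    betas, _ = curve_fit(logistic5, objective, subjective, p0=p0, maxfev=20000)
    return betas
```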

3.4 Evaluation metrics

In this study, three evaluation metrics are used to compare the performance of the different indices: Pearson Linear Correlation Coefficient (PLCC), Spearman's Rank Ordered Correlation Coefficient (SROCC), and Root-Mean-Square Error (RMSE). Both PLCC and SROCC are statistical methods used to measure the correlation between two variables. In the context of IQA, PLCC and SROCC are widely used to evaluate the performance of image quality metrics by comparing their predicted scores with human-rated scores. Higher PLCC and SROCC values stand for better agreement between the predicted quality scores and human scores.

The PLCC index can be expressed as follows:

$\operatorname{PLCC}=\frac{1}{n-1} \sum_{j=1}^n\left(\frac{x_j-\bar{x}}{\sigma_x}\right)\left(\frac{y_j-\bar{y}}{\sigma_y}\right)$            (17)

where, $\left\{x_1, x_2, \ldots, x_n\right\}$ are subjective scores, $\left\{y_1, y_2, \ldots, y_n\right\}$ are objective scores, $\sigma_x$ and $\sigma_y$ are their standard deviations, and $\bar{x}$ and $\bar{y}$ are their average scores. PLCC measures how well the objective scores are associated with the subjective scores.

The SROCC index can be formulated as follows:

$\operatorname{SROCC}=1-\frac{6}{n\left(n^2-1\right)} \sum_{j=1}^n\left(r_{x_j}-r_{y_j}\right)^2$,            (18)

where, $r_{x_j}$ and $r_{y_j}$ are the rank positions of $x_j$ and $y_j$ in arrays {x} and {y}, respectively. SROCC measures the relative monotonicity between the objective and subjective scores.

RMSE can be expressed as follows:

$\operatorname{RMSE}=\left[\frac{1}{n} \sum_{j=1}^{n}\left(x_j-y_j\right)^2\right]^{1 / 2}$,           (19)

RMSE is utilized to determine the absolute error between the objective and subjective scores; a good algorithm exhibits a low RMSE value. RMSE is the square root of the mean of the squared differences between the objective predicted scores and the subjective quality scores.

Through the utilization of these three statistical measures, it becomes feasible to effectively examine the consistency between the subjective quality scores and the objectively predicted scores, signifying the performance of IQA methods. Since the ranking and performance assessment of NR-IQA algorithms depend on the association between the ground-truth and predicted quality scores, we quantitatively assess the proposed model by examining the proposed IQA metrics using RMSE, PLCC, and SROCC.
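The three evaluation measures can be computed in a few lines, for example with scipy; the mapped quality scores of Eq. (16) would be passed in as the objective scores.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(objective, subjective):
    """PLCC (Eq. (17)), SROCC (Eq. (18)), and RMSE (Eq. (19))."""
    objective = np.asarray(objective, float)
    subjective = np.asarray(subjective, float)
    plcc = pearsonr(objective, subjective)[0]
    srocc = spearmanr(objective, subjective)[0]
    rmse = np.sqrt(np.mean((objective - subjective) ** 2))
    return plcc, srocc, rmse
```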

4. Results and Discussion

4.1 Databases

Five IQA databases are utilized as testing platforms: SCID [27], SIQAD [28], LIVE [29], TID2013 [30], and KADID-10k [31]. All datasets include multiple subsets (of various types of distortion); detailed information is provided in Table 1. In this paper, we utilized blur distortion for the experiments. Specifically, the SCID dataset contains 200 Gaussian blur images, 200 motion-blurred images, and 40 original screen images. The SIQAD dataset contains 140 motion-blurred images, 140 Gaussian blur images, and 20 original screen images. The LIVE dataset contains 145 Gaussian blur images and 29 original images. The TID2013 dataset contains 125 Gaussian blur images and 25 original images. The KADID-10k dataset contains 405 out-of-focus blur images, 405 Gaussian blur images, 405 motion-blurred images, and 81 reference images.

Table 1. IQA benchmark databases utilized in this study

Databases | Original Images | Motion Blurred Images | Gaussian Blur Images | Out-of-Focus Blur Images
SCID [27] | 40 | 200 | 200 | ---
SIQAD [28] | 20 | 140 | 140 | ---
LIVE [29] | 29 | --- | 145 | ---
TID2013 [30] | 25 | --- | 125 | ---
KADID-10k [31] | 81 | 405 | 405 | 405

4.2 Deblurring results

This section presents deblurring outcomes obtained with the proposed scheme, focusing on images affected by motion blur. The experiments conducted for the proposed algorithm involve testing on a variety of images. These include commonly referenced "standard" test images such as cameraman, peppers, and Lena, all in uncompressed TIFF format with a size of 512×512 pixels. Additionally, some medical images (COVID-19_CT_image, breast MRI, head CT image, and eye fundus image) are shown in Figure 4. Some results of our proposed method using combined metrics are shown in Figure 5.

Figure 4. Example of test images

Figure 5. Sample images from the KADID-10k database and corresponding results of the proposed method

Table 2. Deblurring results of test images with SSIM and GMSD metrics

Image | Length Using Cepstrum Analysis | Original Angle | Estimated Angle | SSIM Blurred/Reblurred | SSIM Original/Blurred | SSIM Original/Deblurred | GMSD Blurred/Reblurred | GMSD Original/Blurred | GMSD Original/Deblurred
Camera Man | 17 | 33 | 33.2 | 0.8456 | 0.5114 | 0.5571 | 0.000044 | 0.2294 | 0.0615
Goldhill | 23 | 27 | 26.9 | 0.8716 | 0.4177 | 0.4733 | 0.000014 | 0.2302 | 0.0457
Lena | 25 | 13 | 13 | 0.9130 | 0.4747 | 0.5303 | 0.000011 | 0.2444 | 0.0534
Peppers | 20 | 42 | 41.9 | 0.8442 | 0.4004 | 0.5921 | 0.00013 | 0.1697 | 0.0503
Eye Fundus Image | 31 | 41 | 40.3 | 0.9525 | 0.8887 | 0.8964 | 0.0021 | 0.1314 | 0.0771
Tomosynthesis | 23 | 26 | 26.1 | 0.9743 | 0.8658 | 0.8677 | 0.1902 | 0.2206 | 0.4218
COVID-19_CT_image | 11 | 32 | 32.2 | 0.7826 | 0.4298 | 0.4910 | 0.00049 | 0.1732 | 0.0291
MRI_animation.ogv | 13 | 45 | 45.8 | 0.7050 | 0.5105 | 0.5781 | 0.0036 | 0.1865 | 0.0472
AbdomenCT | 17 | 26 | 26.1 | 0.9135 | 0.9812 | 0.6568 | 0.000021 | 0.1845 | 0.0616
Brain MRI segmentation | 19 | 44 | 44.1 | 0.6289 | 0.7745 | 0.5410 | 0.0017 | 0.1501 | 0.0459
BreastMRI | 9 | 28 | 29 | 0.7012 | 0.9873 | 0.6125 | 0.0032 | 0.1850 | 0.0112
ChestCT | 7 | 46 | 46.3 | 0.9998 | 0.9977 | 0.5854 | 0.000012 | 0.0524 | 0.0170
Hand | 9 | 37 | 37.6 | 0.7225 | 0.9759 | 0.4819 | 0.00074 | 0.0531 | 0.0318
HeadCT | 23 | 56 | 56.1 | 0.7657 | 0.9160 | 0.3513 | 0.0002 | 0.1075 | 0.0651
x_ray_covid | 28 | 48 | 48.1 | 0.8297 | 0.7948 | 0.1989 | 0.00019 | 0.1432 | 0.0520

Table 3. Deblurring results of test images with PSNR metric

Image | Length Using Cepstrum Analysis | Original Angle | Estimated Angle | PSNR Blurred/Reblurred | PSNR Original/Blurred | PSNR Original/Deblurred
Camera Man | 17 | 33 | 33.2 | 65.2 | 19.9 | 30.3
Goldhill | 23 | 27 | 26.9 | 70 | 21.1 | 30.3
Lena | 25 | 13 | 13 | 69.9 | 18.5 | 30.6
Peppers | 20 | 42 | 41.9 | 63.7 | 23.6 | 32.3
Eye Fundus Image | 31 | 41 | 40.3 | 50.1 | 29.1 | 32.7
Tomosynthesis | 23 | 26 | 26.1 | 66.1 | 32.4 | 44.7
COVID-19_CT_Image | 11 | 32 | 32.2 | 56.9 | 20.8 | 31.5
MRI_animation.ogv | 13 | 45 | 45.8 | 46.7 | 21.3 | 33.3
AbdomenCT | 17 | 26 | 26.1 | 71.4 | 23.8 | 32.7
Brain MRI segmentation | 19 | 44 | 44.1 | 51.3 | 27.7 | 34.1
BreastMRI | 9 | 28 | 29 | 46.3 | 20.1 | 33
ChestCT | 7 | 46 | 46.3 | 50.7 | 32.3 | 39
Hand | 9 | 37 | 37.6 | 58.9 | 36.8 | 36.8
HeadCT | 23 | 56 | 56.1 | 63.9 | 27.2 | 34.5
x_ray_covid | 28 | 48 | 48.1 | 63.3 | 26.9 | 32.8

Table 4. SROCC, PLCC, and RMSE of the SSIM, GMSD, PSNR, SC, NAE, and NCC for SIQAD motion blur distortion datasets

Criteria | PSNR | GMSD | SSIM | SC | NAE | NCC
SROCC | 0.25206 | 0.3325 | 0.5721 | 0.06180 | 0.07201 | 0.1929
PLCC | 0.14738 | 0.2914 | 0.5926 | 0.08557 | 0.13692 | 0.1856
RMSE | 12.8598 | 12.4375 | 10.4739 | 12.9548 | 12.8794 | 12.974

Tables 2 and 3 summarize the SSIM, GMSD, and PSNR deblurring results. Motion-blurred images were utilized in this case, and the PSF angle parameter theta was determined using Cepstrum analysis. SSIM, GMSD, and PSNR were calculated for three pairs of images: original and blurred, original and deblurred, and blurred and re-blurred. As demonstrated in Tables 2 and 3, the re-blurred and original blurred images are compared using PSNR, GMSD, and SSIM. Therefore, re-blurring can determine the blur PSFs in the case of motion-blurred images. High SSIM and PSNR values and low GMSD values indicate a high-quality image.

In Tables 2 and 3, PSNR, GMSD, and SSIM were scrutinized. The re-blurred images are obtained by re-convolving the deblurred images with the estimated PSF. When the estimated PSF closely resembles the original PSF, the resulting blurring in the re-blurred image is equivalent to that of the original blurred image. It is apparent that in the context of motion deblurring, the utilization of SSIM, GMSD, and PSNR has resulted in deblurred images with enhanced visual quality.

Table 4 provides an overview of the results for motion-blurred images, including SSIM, GMSD, PSNR, and other metrics (SC, NAE, and NCC) [52] values. Based on the obtained SROCC, PLCC, and RMSE values, it is evident that the metrics SSIM, GMSD, and PSNR outperform the other metrics, such as SC, NAE, and NCC.

4.3 Ablation study

The selection of primary metrics for combining two or more metrics was predicated on the correlation observed within their respective subsets. Specifically, the four combinations of the three metrics (all pairs plus the triple) were chosen, and all combinations of two and three metrics were evaluated across all datasets.

In our experiments, we also tested other methods for combining the quality scores of the individual quality assessment methods, such as the median fusion function. Based on the findings presented in Table 5, the mean fusion function yielded the most favorable outcome.

Table 5. Performance of the proposed fusion function using the mean and median of the SSIM, GMSD, and PSNR metrics on KADID-10k, SIQAD, and SCID for motion blur distortion

Criteria | Database | Mean | Median
SROCC | KADID-10k | 0.7812 | 0.7243
SROCC | SIQAD | 0.5741 | 0.4761
SROCC | SCID | 0.5810 | 0.5492
PLCC | KADID-10k | 0.5848 | 0.5977
PLCC | SIQAD | 0.7913 | 0.7213
PLCC | SCID | 0.4072 | 0.3041

Table 6 shows the SSIM+GMSD, SSIM+PSNR, GMSD+PSNR, and SSIM+GMSD+PSNR combinations for the LIVE, SIQAD, and TID2013 datasets. The SSIM+GMSD combination performed best on SIQAD in terms of RMSE, scoring 8.6605. The SSIM+PSNR combination achieved the best PLCC on SIQAD, reaching 0.8955. The GMSD+PSNR combination achieved the best SROCC on TID2013 and the best PLCC and RMSE on LIVE, with values of 0.8951, 0.9748, and 6.2601, respectively. The SSIM+GMSD+PSNR combination achieved the best SROCC on LIVE and SIQAD, with values of 0.9864 and 0.6205, respectively; in addition, it achieved the best PLCC and RMSE on TID2013, yielding 0.8935 and 0.5015, respectively.

Table 7 shows the SSIM+GMSD, SSIM+PSNR, GMSD+PSNR, and SSIM+GMSD+PSNR combinations for the SIQAD and SCID datasets. The SSIM+GMSD combination outperformed the alternatives on SIQAD in terms of PLCC, attaining 0.7976. The SSIM+PSNR combination achieved the best RMSE on SIQAD (8.7675) and the best PLCC on SCID (0.4122). Compared to SSIM+GMSD, SSIM+PSNR, and SSIM+GMSD+PSNR, the performance on SIQAD and SCID was lowest when GMSD+PSNR was utilized. The SSIM+GMSD+PSNR combination achieved the best SROCC on SIQAD (0.5741) and the best SROCC and RMSE on SCID (0.5810 and 8.0471, respectively).

Table 8 shows the SSIM+GMSD, SSIM+PSNR, GMSD+PSNR, and SSIM+GMSD+PSNR combinations for the KADID-10k dataset. The GMSD+PSNR combination obtained the best SROCC for Gaussian blur distortion, reaching 0.874. The SSIM+GMSD+PSNR combination obtained the best SROCC for motion blur and out-of-focus blur distortions, achieving 0.781 and 0.884, respectively.

Table 6. Performance of the proposed SSIM+GMSD, SSIM+PSNR, GMSD+PSNR, and SSIM+GMSD+PSNR combinations on the LIVE, SIQAD, and TID2013 datasets for Gaussian blur

Criteria | Database | SSIM+GMSD | SSIM+PSNR | GMSD+PSNR | SSIM+GMSD+PSNR
SROCC | LIVE | 0.9841 | 0.9827 | 0.9798 | 0.9864
SROCC | SIQAD | 0.6181 | 0.5824 | 0.6201 | 0.6205
SROCC | TID2013 | 0.8873 | 0.8710 | 0.8951 | 0.8911
PLCC | LIVE | 0.9669 | 0.9704 | 0.9748 | 0.9712
PLCC | SIQAD | 0.8312 | 0.8955 | 0.8871 | 0.8916
PLCC | TID2013 | 0.8103 | 0.8546 | 0.8759 | 0.8935
RMSE | LIVE | 7.3170 | 7.1521 | 6.2601 | 6.7128
RMSE | SIQAD | 8.6605 | 9.1265 | 9.8504 | 9.0472
RMSE | TID2013 | 0.5381 | 0.6238 | 0.6426 | 0.5015

Table 7. Performance of the proposed SSIM+GMSD, SSIM+PSNR, GMSD+PSNR, and SSIM+GMSD+PSNR combinations on the SIQAD and SCID motion blur distortion datasets

Criteria | Database | SSIM+GMSD | SSIM+PSNR | GMSD+PSNR | SSIM+GMSD+PSNR
SROCC | SIQAD | 0.5607 | 0.5712 | 0.5689 | 0.5741
SROCC | SCID | 0.5795 | 0.5736 | 0.5747 | 0.5810
PLCC | SIQAD | 0.7976 | 0.7962 | 0.7881 | 0.7913
PLCC | SCID | 0.4034 | 0.4122 | 0.4011 | 0.4072
RMSE | SIQAD | 9.0521 | 8.7675 | 9.0549 | 9.0358
RMSE | SCID | 8.1123 | 8.0874 | 8.1426 | 8.0471

Table 8. SROCC performance of the proposed SSIM+GMSD, SSIM+PSNR, GMSD+PSNR, and SSIM+GMSD+PSNR combinations on the KADID-10k dataset for motion blur, out-of-focus blur, and Gaussian blur distortions

Criteria | Database | Dist. Type | SSIM+GMSD | SSIM+PSNR | GMSD+PSNR | SSIM+GMSD+PSNR
SROCC | KADID-10k | Gaussian Blur | 0.851 | 0.866 | 0.874 | 0.873
SROCC | KADID-10k | Motion Blur | 0.761 | 0.701 | 0.775 | 0.781
SROCC | KADID-10k | Out-of-focus Blur | 0.820 | 0.814 | 0.742 | 0.884

4.4 Comparison to the state-of-the-art

We compared the performance of the proposed SSIM+GMSD, SSIM+PSNR, GMSD+PSNR, and SSIM+GMSD+PSNR combinations with NR-IQA models such as BRISQUE [14], BLIINDS-II [53], IL-NIQE [54], NIQE [15], BMPRI [55], ENIQA [56], and SSEQ [57]. As already mentioned, five benchmark IQA databases are used in this study: SCID [27], SIQAD [28], LIVE [29], TID2013 [30], and KADID-10k [31].

For Gaussian blurring (Table 9), on the LIVE dataset, the proposed SSIM+GMSD+PSNR achieves the highest SROCC, the proposed GMSD+PSNR the highest PLCC, and NIQE the best (lowest) RMSE. On the SIQAD dataset, the best SROCC, PLCC, and RMSE among the proposed combinations are obtained by SSIM+GMSD+PSNR, SSIM+PSNR, and SSIM+GMSD, respectively. On the TID2013 dataset, the proposed GMSD+PSNR achieves the highest SROCC, while the proposed SSIM+GMSD+PSNR achieves the best PLCC and RMSE.

These results for the proposed method are still better than those obtained with some alternative metrics. As can be observed, on the LIVE dataset, the SROCC and PLCC values of BLIINDS-II and the RMSE value of BRISQUE are noticeably worse than those of the other metrics. On the SIQAD dataset, the SROCC, PLCC, and RMSE values of BLIINDS-II are significantly worse than those of the other metrics, and on the TID2013 dataset, the SROCC and RMSE values of NIQE are significantly worse than those of the other metrics.

The results for motion blur are presented in Table 10. On SIQAD, the highest SROCC is obtained by the proposed SSIM+GMSD+PSNR, the highest PLCC by SSIM+GMSD, and the best RMSE by the proposed SSIM+PSNR. On SCID, the proposed SSIM+GMSD+PSNR attains the highest SROCC and the best RMSE, while SSIM+PSNR attains the highest PLCC. As can be seen, on the SIQAD dataset, the SROCC of BLIINDS-II and the PLCC and RMSE of NIQE are significantly worse than those of the other metrics; on the SCID dataset, the SROCC and PLCC of BRISQUE and the RMSE of BLIINDS-II are significantly worse than those of the other metrics.

Table 9. SROCC, PLCC, RMSE comparison in three databases on Gaussian Blur distortion type

Criteria | Database | BLIINDS-II [53] | BRISQUE [14] | NIQE [15] | ILNIQE [54] | Proposed SSIM+GMSD | Proposed SSIM+PSNR | Proposed GMSD+PSNR | Proposed SSIM+GMSD+PSNR
SROCC | LIVE | 0.9150 | 0.9513 | 0.9326 | 0.9154 | 0.9841 | 0.9827 | 0.9798 | 0.9864
SROCC | SIQAD | 0.4404 | 0.6318 | 0.5266 | 0.4556 | 0.6181 | 0.5824 | 0.6201 | 0.6205
SROCC | TID2013 | 0.8367 | 0.8137 | 0.7986 | 0.8148 | 0.8873 | 0.8710 | 0.8951 | 0.8911
PLCC | LIVE | 0.9232 | 0.9501 | 0.9446 | 0.9327 | 0.9669 | 0.9704 | 0.9748 | 0.9712
PLCC | SIQAD | 0.4585 | 0.6597 | 0.6066 | 0.5505 | 0.8312 | 0.8955 | 0.8871 | 0.8916
PLCC | TID2013 | 0.8492 | 0.8476 | 0.8190 | 0.8475 | 0.8103 | 0.8546 | 0.8759 | 0.8935
RMSE | LIVE | 6.5417 | 7.5814 | 6.0625 | 6.6621 | 7.3170 | 7.1521 | 6.2601 | 6.7128
RMSE | SIQAD | 13.487 | 11.405 | 12.065 | 12.669 | 8.6605 | 9.1265 | 9.8504 | 9.0472
RMSE | TID2013 | 0.6589 | 0.6622 | 0.7160 | 0.6623 | 0.5381 | 0.6238 | 0.6426 | 0.5015

Table 10. SROCC, PLCC, and RMSE comparison in two databases on the motion blur distortion type

Criteria | Database | BLIINDS-II [53] | BRISQUE [14] | NIQE [15] | ILNIQE [54] | Proposed SSIM+GMSD | Proposed SSIM+PSNR | Proposed GMSD+PSNR | Proposed SSIM+GMSD+PSNR
SROCC | SIQAD | 0.2512 | 0.4401 | 0.3514 | 0.4480 | 0.5607 | 0.5712 | 0.5689 | 0.5741
SROCC | SCID | 0.2378 | 0.2050 | 0.3190 | 0.3354 | 0.5795 | 0.5736 | 0.5747 | 0.5810
PLCC | SIQAD | 0.3425 | 0.5318 | 0.1842 | 0.4681 | 0.7976 | 0.7962 | 0.7881 | 0.7913
PLCC | SCID | 0.3048 | 0.2592 | 0.2790 | 0.3212 | 0.4034 | 0.4122 | 0.4011 | 0.4072
RMSE | SIQAD | 12.3921 | 11.0113 | 12.6856 | 11.2463 | 9.0521 | 8.7675 | 9.0549 | 9.0358
RMSE | SCID | 10.9752 | 10.5570 | 10.4966 | 10.5149 | 8.1123 | 8.0874 | 8.1426 | 8.0471

Table 11. SROCC in the KADID-10k database on three distortion types: Gaussian blur, Out-of-focus blur and motion blur

Criteria | Database | Distortion Type | BLIINDS-II [53] | BMPRI [55] | ENIQA [56] | SSEQ [57] | Proposed SSIM+GMSD | Proposed SSIM+PSNR | Proposed GMSD+PSNR | Proposed SSIM+GMSD+PSNR
SROCC | KADID-10k | Gaussian blur | 0.789 | 0.839 | 0.785 | 0.714 | 0.851 | 0.866 | 0.874 | 0.873
SROCC | KADID-10k | Motion blur | 0.416 | 0.390 | 0.574 | 0.368 | 0.761 | 0.701 | 0.775 | 0.781
SROCC | KADID-10k | Out-of-focus blur | 0.755 | 0.815 | 0.797 | 0.739 | 0.820 | 0.814 | 0.742 | 0.884

4.5 Performance over different distortion types

We test the performance of state-of-the-art NR-IQA methods over three distortion types: lens blur (i.e., out-of-focus blur), Gaussian blur, and motion blur. We compare the proposed SSIM+GMSD, SSIM+PSNR, GMSD+PSNR, and SSIM+GMSD+PSNR combinations with four NR-IQA metrics (BLIINDS-II [53], BMPRI [55], ENIQA [56], and SSEQ [57]) on each distortion type of the KADID-10k database.

In particular, we report SROCC values measured across the different distortion types of the KADID-10k database [31]. This database includes images with 25 different types of distortion, including Gaussian blur, out-of-focus blur, and motion blur. The outcomes are depicted in Table 11. As can be seen, for Gaussian blur the highest correlation is obtained by the proposed GMSD+PSNR and the lowest by SSEQ, while for both motion blur and out-of-focus blur the highest correlation is obtained by the proposed SSIM+GMSD+PSNR and the lowest by SSEQ.

The experiments confirm that the proposed NR-IQA metric, based on SSIM, GMSD, and PSNR together with a blind restoration scheme for blurred images, benefits from combining various metrics, since the specificity of multiply distorted images requires such a combination. Furthermore, the application of the proposed re-blurring model demonstrates a substantial enhancement in performance across the majority of the datasets examined.

It is worth noting that the improved performance of SSIM+GMSD+PSNR combination can be attributed to its ability to capture complementary aspects of image quality. Indeed, the combination of SSIM, GMSD, and PSNR leverages perceptual differences, gradient information preservation, and fidelity measurement, respectively. These metrics address distinct aspects of image quality, providing a more comprehensive evaluation by considering factors that influence both subjective and objective assessments. The synergistic effects of these complementary metrics lead to a more thorough and accurate representation of image quality across diverse scenarios, enhancing the overall effectiveness of the proposed hybrid NR-IQA method.

4.6 Computational complexity

Table 12 presents the execution time of the proposed method as well as other NR-IQA metrics. All experiments were carried out on a computer with a 2.4 GHz Intel Core i5-2430M CPU and 8 GB RAM. As shown, the execution time is 9.7 s, with 6.2 s spent on estimating the PSF. The average execution time of DIIVINE is much longer than that of the proposed method. However, NR-IQA metrics like NIQE and BRISQUE have shorter execution times than the proposed method, though their performance is lower. It should be noted that we are using a non-optimized version of our code, and optimizing our implementation will significantly reduce the execution time of our method, especially during the estimation of the PSF.

Table 12. Comparing the computational complexity of the proposed method with other methods in terms of the average execution time in seconds

Method | Average Time (s)
Proposed | 9.7
DIIVINE [32] | 15.1
NIQE [15] | 3.3
BRISQUE [14] | 1.4

4.7 Limitations of the proposed method

This paper presents a comprehensive analysis of experimental results to illustrate the benefits of the proposed approach. The methodology demonstrates favorable performance in NR-IQA, substantiating its efficacy in evaluating the quality of blurred images. However, the proposed approach is validated using five publicly available IQA datasets with multiply distorted images, which may not be representative of all possible image distortions. The proposed NR-IQA method does not exhibit substantial enhancements across all datasets with respect to all indices. Nevertheless, considering variations in image complexity, including factors such as image size, contrast, and diversity across different datasets, the experimental findings indicate that the suggested approach can be successfully applied to various image categories. Finally, the execution time of the proposed method is approximately 9.7 s, with most of the time spent estimating the PSF; optimizing the implementation of the proposed method will decrease this computational time.

5. Conclusions

Digital images are vital components of various domains, including scientific and industrial applications. Therefore, there is a need for accurate image quality assessment methods to determine the usability of images for each specific application. In this paper, we proposed a novel NR-IQA metric based on SSIM, GMSD, and PSNR, focusing on blurred-image quality assessment. The proposed method exploits the advantages of SSIM, GMSD, and PSNR, such as mathematical simplicity, and extends them from the FR to the NR setting. The distorted image is first deblurred and then re-blurred with the estimated PSF, and the resulting re-blurred image serves as the reference. The combination of elementary metrics proves to be a highly effective approach for enhancing performance. The experimental results suggest that the proposed method demonstrates promising performance and exhibits high credibility in relation to the HVS. In contrast to existing IQA models, the method proposed in this study does not necessitate the use of a reference image, which enhances the efficiency and convenience of its application.

The proposed NR-IQA method, designed for blurred images, is invaluable in diverse domains, such as medical imaging for ensuring diagnostic accuracy and in surveillance applications for enhancing the reliability of image analysis, contributing to improved outcomes in healthcare and security domains.

In future studies, we will investigate more efficient methods to achieve IQA. For NR-IQA, it may be interesting to explore other types of distortion. However, it should be noted that the development of any objective measure closely related to human perception of multiple abnormalities would remain limited by the availability of appropriate databases. Additionally, we will apply metrics based on trained CNNs using images affected by multiple distortions.

References

[1] Yang, X., Wang, T., Ji, G. (2022). Image quality assessment via multiple features. Multimedia Tools and Applications, 81(4): 5459-5483. https://doi.org/10.1007/s11042-021-11788-x

[2] Chen, Z., Xu, J., Lin, C., Zhou, W. (2020). Stereoscopic omnidirectional image quality assessment based on predictive coding theory. IEEE Journal of Selected Topics in Signal Processing, 14(1): 103-117. https://doi.org/10.1109/JSTSP.2020.2968182

[3] Chang, C.F., Wu, J.L., Tsai, T.Y. (2017). A single image deblurring algorithm for nonuniform motion blur using uniform defocus map estimation. Mathematical Problems in Engineering, 2017. https://doi.org/10.1155/2017/6089650

[4] Hu, B., Li, L., Wu, J., Qian, J. (2020). Subjective and objective quality assessment for image restoration: A critical survey. Signal Processing: Image Communication, 85: 115839. https://doi.org/10.1016/j.image.2020.115839

[5] Ahmed, I.T., Der Chen, S., Jamil, N., Hammad, B.T. (2020). Contrast image quality assessment algorithm based on probability density functions features. In International Conference of Reliable Information and Communication Technology. Cham: Springer International Publishing, pp. 1030-1040. https://doi.org/10.1007/978-3-030-70713-2_92

[6] Golestaneh, S.A., Kitani, K. (2020). No-reference image quality assessment via feature fusion and multi-task learning. arXiv Preprint arXiv: 2006.03783. https://doi.org/10.48550/arXiv.2006.03783

[7] Leonardi, M., Napoletano, P., Schettini, R., Rozza, A. (2021). No reference, opinion unaware image quality assessment by anomaly detection. Sensors, 21(3): 994. https://doi.org/10.3390/s21030994

[8] Ortiz-Jaramillo, B., Kumcu, A., Platisa, L., Philips, W. (2018). Content-aware contrast ratio measure for images. Signal Processing: Image Communication, 62: 51-63. https://doi.org/10.1016/j.image.2017.12.007

[9] Zhai, G., Min, X. (2020). Perceptual image quality assessment: A survey. Science China Information Sciences, 63: 1-52. https://doi.org/10.1007/s11432-019-2757-1

[10] Ljubenović, M., Figueiredo, M.A. (2019). Plug-and-play approach to class-adapted blind image deblurring. International Journal on Document Analysis and Recognition (IJDAR), 22(2): 79-97. https://doi.org/10.1007/s10032-019-00318-z

[11] Oszust, M. (2019). Local feature descriptor and derivative filters for blind image quality assessment. IEEE Signal Processing Letters, 26(2): 322-326. https://doi.org/10.1109/LSP.2019.2891416

[12] Yao, H., Ma, B., Zou, M., Xu, D., Yao, J. (2021). No-reference noisy image quality assessment incorporating features of entropy, gradient, and kurtosis. Frontiers of Information Technology & Electronic Engineering, 22(12): 1565-1582. https://doi.org/10.1631/FITEE.2000716

[13] Cui, Y. (2020). No-reference image quality assessment based on dual-domain feature fusion. Entropy, 22(3): 344. https://doi.org/10.3390/e22030344

[14] Mittal, A., Moorthy, A.K., Bovik, A.C. (2012). No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 21(12): 4695-4708. https://doi.org/10.1109/TIP.2012.2214050

[15] Mittal, A., Soundararajan, R., Bovik, A.C. (2012). Making a “completely blind” image quality analyzer. IEEE Signal Processing Letters, 20(3): 209-212. https://doi.org/10.1109/LSP.2012.2227726

[16] Han, Y., Kan, J. (2019). Blind image deblurring based on local edges selection. Applied Sciences, 9(16): 3274. https://doi.org/10.3390/app9163274

[17] Tu, Z., Yu, X., Wang, Y., Birkbeck, N., Adsumilli, B., Bovik, A.C. (2021). RAPIQUE: Rapid and accurate video quality prediction of user generated content. IEEE Open Journal of Signal Processing, 2: 425-440. https://doi.org/10.1109/OJSP.2021.3090333

[18] Li, C., Guan, T., Zheng, Y., Zhong, X., Wu, X., Bovik, A. (2021). Blind image quality assessment in the contourlet domain. Signal Processing: Image Communication, 91: 116064. https://doi.org/10.1016/j.image.2020.116064

[19] Nasr, N., Moussaid, N., Gouasnouane, O. (2021). A Nash-game approach to blind image deblurring. In 2021 Third International Conference on Transportation and Smart Technologies (TST), Tangier, Morocco, pp. 36-41. https://doi.org/10.1109/TST52996.2021.00013

[20] Ieremeiev, O., Lukin, V., Ponomarenko, N., Egiazarian, K. (2018). Robust linearized combined metrics of image visual quality. Electronic Imaging, 30: 1-6. https://doi.org/10.2352/ISSN.2470-1173.2018.13.IPAS-260

[21] Rubel, A., Ieremeiev, O., Lukin, V., Fastowicz, J., Okarma, K. (2022). Combined no-reference image quality metrics for visual quality assessment optimized for remote sensing images. Applied Sciences, 12(4): 1986. https://doi.org/10.3390/app12041986

[22] Ieremeiev, O.I., Lukin, V.V., Ponomarenko, N.N., Egiazarian, K.O., Astola, J. (2016). Combined full-reference image visual quality metrics. Electronic Imaging, 28: 1-10. https://doi.org/10.2352/ISSN.2470-1173.2016.15.IPAS-180

[23] Ieremeiev, O., Lukin, V., Ponomarenko, N., Egiazarian, K. (2019). Combined no-reference IQA metric and its performance analysis. In Image Processing: Algorithms and Systems Conference. https://doi.org/10.2352/ISSN.2470-1173.2019.11.IPAS-260

[24] Bakurov, I., Buzzelli, M., Schettini, R., Castelli, M., Vanneschi, L. (2022). Structural similarity index (SSIM) revisited: A data-driven approach. Expert Systems with Applications, 189: 116087. https://doi.org/10.1016/j.eswa.2021.116087

[25] Xue, W., Zhang, L., Mou, X., Bovik, A.C. (2013). Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Transactions on Image Processing, 23(2): 684-695. https://doi.org/10.1109/TIP.2013.2293423

[26] Sara, U., Akter, M., Uddin, M.S. (2019). Image quality assessment through FSIM, SSIM, MSE and PSNR-A comparative study. Journal of Computer and Communications, 7(3): 8-18. https://doi.org/10.4236/jcc.2019.73002

[27] Ni, Z., Ma, L., Zeng, H., Fu, Y., Xing, L., Ma, K.K. (2017). SCID: A database for screen content images quality assessment. In 2017 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), Xiamen, China, pp. 774-779. https://doi.org/10.1109/ISPACS.2017.8266580

[28] Yang, H., Fang, Y., Lin, W. (2015). Perceptual quality assessment of screen content images. IEEE Transactions on Image Processing, 24(11): 4408-4421. https://doi.org/10.1109/TIP.2015.2465145

[29] Sheikh, H. (2005). LIVE image quality assessment database release 2. http://live.ece.utexas.edu/research/quality.

[30] Ponomarenko, N., Jin, L., Ieremeiev, O., Lukin, V., Egiazarian, K., Astola, J., Vozel, B., Chehdi, K., Carli, M., Battisti, F., Kuo, C.C.J. (2015). Image database TID2013: Peculiarities, results and perspectives. Signal Processing: Image Communication, 30: 57-77. https://doi.org/10.1016/j.image.2014.10.009

[31] Lin, H., Hosu, V., Saupe, D. (2019). KADID-10k: A large-scale artificially distorted IQA database. In 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, pp. 1-3. https://doi.org/10.1109/QoMEX.2019.8743252

[32] Moorthy, A.K., Bovik, A.C. (2011). Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Transactions on Image Processing, 20(12): 3350-3364. https://doi.org/10.1109/TIP.2011.2147325

[33] Rajevenceltha, J., Gaidhane, V.H. (2022). An efficient approach for no-reference image quality assessment based on statistical texture and structural features. Engineering Science and Technology, an International Journal, 30: 101039. https://doi.org/10.1016/j.jestch.2021.07.002

[34] Min, X., Gu, K., Zhai, G., Liu, J., Yang, X., Chen, C.W. (2017). Blind quality assessment based on pseudo-reference image. IEEE Transactions on Multimedia, 20(8): 2049-2062. https://doi.org/10.1109/TMM.2017.2788206

[35] Zhang, W., Ma, K., Yan, J., Deng, D., Wang, Z. (2018). Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Transactions on Circuits and Systems for Video Technology, 30(1): 36-47.‏ https://doi.org/10.1109/TCSVT.2018.2886771

[36] Liu, X., van de Weijer, J., Bagdanov, A.D. (2017). Rankiqa: Learning from rankings for no-reference image quality assessment. arXiv:1707.08347. https://doi.org/10.48550/arXiv.1707.08347

[37] Ravela, R., Shirvaikar, M., Grecos, C. (2019). No-reference image quality assessment based on deep convolutional neural networks. In Real-Time Image Processing and Deep Learning 2019, SPIE, 10996: 1099604. https://doi.org/10.1117/12.2518438

[38] Bianco, S., Celona, L., Napoletano, P., Schettini, R. (2018). On the use of deep learning for blind image quality assessment. Signal, Image and Video Processing, 12: 355-362. https://doi.org/10.1007/s11760-017-1166-8

[39] Gao, F., Yu, J., Zhu, S., Huang, Q., Tian, Q. (2018). Blind image quality prediction by exploiting multi-level deep representations. Pattern Recognition, 81: 432-442. https://doi.org/10.1016/j.patcog.2018.04.016

[40] Varga, D. (2021). No-reference image quality assessment with multi-scale orderless pooling of deep features. Journal of Imaging, 7(7): 112. https://doi.org/10.3390/jimaging7070112

[41] Sun, S., Yu, T., Xu, J., Zhou, W., Chen, Z. (2022). GraphIQA: Learning distortion graph representations for blind image quality assessment. IEEE Transactions on Multimedia, 25: 2912-2925. https://doi.org/10.1109/TMM.2022.3152942

[42] Conde, M.V., Burchi, M., Timofte, R. (2022). Conformer and blind noisy students for improved image quality assessment.  arXiv:2204.12819. https://doi.org/10.48550/arXiv.2204.12819

[43] Ayyoubzadeh, S.M., Royat, A. (2021). (ASNA) an attention-based siamese-difference neural network with surrogate ranking loss function for perceptual image quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, pp. 388-397. https://doi.org/10.1109/CVPRW53098.2021.00049

[44] Bouida, A., Khelifi, M., Beladgham, M., Hamlili, F.Z. (2021). Monte carlo optimization of a combined image quality assessment for compressed images evaluation. Traitement du Signal, 38(2): 281-289. https://doi.org/10.18280/ts.380204

[45] Bouida, A., Beladgham, M., Bassou, A., Benyahia, I., Ahmed-Taleb, A., Haouam, I., Kamline, M. (2020). Evaluation of textural degradation in compressed medical and biometric images by analyzing image texture features and edges. Traitement du Signal, 37(5): 753-762. https://doi.org/10.18280/ts.370507

[46] Leclaire, A., Moisan, L. (2015). No-reference image quality assessment and blind deblurring with sharpness metrics exploiting fourier phase information. Journal of Mathematical Imaging and Vision, 52: 145-172. https://doi.org/10.1007/s10851-015-0560-5

[47] Muthana, R., Alshareefi, A.N. (2020). Techniques in deblurring image. In Journal of Physics: Conference Series. IOP Publishing, 1530(1): 012115. https://doi.org/10.1088/1742-6596/1530/1/012115

[48] Kumar, A. (2017). Deblurring of motion blurred images using histogram of oriented gradients and geometric moments. Signal Processing: Image Communication, 55: 55-65. https://doi.org/10.1016/j.image.2017.03.016

[49] Chen, C., Zhao, H., Yang, H., Yu, T., Peng, C., Qin, H. (2021). Full-reference screen content image quality assessment by fusing multilevel structure similarity. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 17(3): 1-21. https://doi.org/10.1145/3447393

[50] Frackiewicz, M., Szolc, G., Palus, H. (2021). An improved SPSIM index for image quality assessment. Symmetry, 13(3): 518. https://doi.org/10.3390/sym13030518

[51] Sheikh, H.R., Sabir, M.F., Bovik, A.C. (2006). A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on Image Processing, 15(11): 3440-3451. https://doi.org/10.1109/TIP.2006.881959

[52] Memon, F., Unar, M.A., Memon, S. (2015). Image quality assessment for performance evaluation of focus measure operators. Mehran University Research Journal of Engineering & Technology, 34(4): 379-386. https://search.informit.org/doi/10.3316/informit.157515897052285.

[53] Saad, M.A., Bovik, A.C., Charrier, C. (2012). Blind image quality assessment: A natural scene statistics approach in the DCT domain. IEEE Transactions on Image Processing, 21(8): 3339-3352. https://doi.org/10.1109/TIP.2012.2191563

[54] Zhang, L., Zhang, L., Bovik, A.C. (2015). A feature-enriched completely blind image quality evaluator. IEEE Transactions on Image Processing, 24(8): 2579-2591. https://doi.org/10.1109/TIP.2015.2426416

[55] Min, X., Zhai, G., Gu, K., Liu, Y., Yang, X. (2018). Blind image quality estimation via distortion aggravation. IEEE Transactions on Broadcasting, 64(2): 508-517. https://doi.org/10.1109/TBC.2018.2816783

[56] Chen, X., Zhang, Q., Lin, M., Yang, G., He, C. (2019). No-reference color image quality assessment: From entropy to perceptual quality. EURASIP Journal on Image and Video Processing, 2019: 1-14. https://doi.org/10.1186/s13640-019-0479-7

[57] Liu, L., Liu, B., Huang, H., Bovik, A.C. (2014). No-reference image quality assessment based on spatial and spectral entropies. Signal Processing: Image Communication, 29(8): 856-863. https://doi.org/10.1016/j.image.2014.06.006