© 2022 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).
In this paper, we report a model for approximating images by finding the numerical rank in the wavelet domain through singular value decomposition of the approximation coefficients. First, the digital image is transformed into the frequency domain. Then the high-frequency subbands are quantized to zero, as is standard in wavelet-based image compression. Simultaneously, the low-frequency subband is compressed using truncated singular value decomposition (TSVD) through the numerical rank. Finally, the approximation matrix is reconstructed via the inverse discrete wavelet transform with low computational complexity. This mathematical model is well suited to engineering problems arising in digital image processing, such as image transmission (reducing the required bandwidth of a communication channel) and storage (space saving). Simulation results on gray and color images show a gain in: (i) compression ratio with acceptable visual quality as per the human vision system; (ii) the balance of performance measures over conventional SVD methods.
Keywords: image compression, TSVD, numerical rank, discrete wavelet transform, PSNR, CR
With the rapid development of the digital world and the extensive use of multimedia technology and the internet, huge digital images, represented by matrices $\left(A \in \mathbb{R}^{m \times n}\right)$ carrying all kinds of information, are produced and transmitted over networks [1]. The volume of digital multimedia data (images, videos) is constantly increasing, and these image files can be very large and require considerable memory. For example, a grayscale (8-bit) image of 512 $\times$ 512 pixels has 262,144 elements to store, and a typical RGB (24-bit) image of 1024 $\times$ 768 has nearly a million pixels. Therefore, image data compression plays a crucial role in the era of the digital world.
The problem of image compression is to achieve a high compression ratio (CR) in the digital representation of an input image $\left(A \in \mathbb{R}^{m \times n}\right)$ with minimum perceived loss of image quality. Ultimately, image quality is usually judged or measured by the human receiver on the basis of the human vision system [2]. There are two approaches to compression: lossless and lossy. Lossless compression is generally used for medical database management and telediagnosis applications [3], because these applications cannot tolerate any difference between the original and reconstructed data [4]. An image can be lossy-compressed by removing irrelevant information even if the original image has no redundancy, and various lossy image compression algorithms have been proposed [5]. One popular method is singular value decomposition (SVD), which yields a sequence of good approximation images but with low compression ratios. On the other hand, JPEG2000 is based on the discrete wavelet transform (DWT) and provides a better compression ratio [6]. Stoica et al. [7] introduced a method that integrates perceptual knowledge to improve JPEG2000 image compression quality.
Over the past few years, numerous data compression schemes and their applications in digital image processing have been proposed. Several efficient lossy image compression techniques [8, 9] motivated us to contribute to this field by combining SVD and DWT from a different perspective.
The SVD of a matrix, introduced by Eugenio Beltrami and Camille Jordan, helps to approximate a matrix by one of lower rank [10]. It is a lossy image compression technique that achieves compression by truncating the smaller singular values to zero [11]. Tian et al. [12] discussed the usability of SVD in image compression. Rufai et al. [13] described image compression carried out by Huffman coding with SVD, which gives better image quality at low $CR$. Khalid et al. [14] introduced two new approaches for image compression. Further, Khalid et al. [15] extended these compression studies with an application of linear algebra to image compression using SVD, and recently Khalid [16] proposed an image compression method based on the block SVD power method (BSPM). Liu et al. [17] applied a sequence-image-based fractal compression method to compress three-dimensional MRI images. Xu et al. [18] proposed a novel image compression method based on linear regression in the domain of pixel prediction. Bascones et al. [19] presented designs aimed at FPGA implementation, focusing on low-complexity predictive lossy compression. Bouida et al. [20] examined and evaluated image compression degradation using a new image quality assessment concept based on texture and edge analysis.
Wavelet theory has played a prominent role in image processing applications, especially image compression. It is characterized by a high $CR$ while essentially maintaining the characteristics of moments after compression [21]. Mulcahy [22] developed a compression method using the Haar wavelet transform. Grgic et al. [23] examined a set of wavelet functions for the implementation of image compression. As described by Usevitch [24], wavelet-based coding was adopted as the underlying method of the JPEG 2000 standard. Extending the investigation of improved compression techniques, Yang et al. [25] proposed a novel hybrid technique of SVD and vector quantization for image coding. Rufai et al. [6] also described lossy image compression based on SVD and wavelet difference reduction. Krishnaraj et al. [26] introduced a wavelet-transform-based deep learning image compression model, especially for communication in the IoUT.
Based on a study of the prevailing image compression techniques, an alternative method with relatively low computational complexity is proposed to meet the requirement of obtaining a superior image while consuming less memory.
In order to create a mathematical model on which to build the wavelet transform, it is important to work in a vector space that lends itself to applications in digital imaging and signal processing. We can view a digital image $\left(A \in \mathbb{R}^{m \times n}\right)$ as a function of two variables whose value is the gray-level intensity, and an audio signal as a function of time whose values are the frequencies of the signal. Since the rows and columns of $A$ are of finite dimension and 1D (audio) signals taper off, we require the functions $f(t)$ in the space to decay sufficiently fast as $t \rightarrow \pm \infty$. The rate of decay must be fast enough to ensure that the energy of the signal is finite. Finally, it is desirable from a mathematical standpoint to use a space where the inner product of a function with itself is related to the size (norm) of the function [27]. This is the Hilbert space $L^2(\mathbb{R})$, defined as
$L^2(\mathbb{R})=\left\{f: \mathbb{R} \rightarrow \mathbb{C} \,\middle|\, \int_{\mathbb{R}}|f(t)|^2 \, dt<\infty\right\}$ (1)
with the inner product of $f(t)$ and $g(t)$ defined as $\langle f(t), g(t)\rangle=\int_{\mathbb{R}} f(t) \overline{g(t)} \, dt$.
2.1 Wavelet transform
The familiar Fourier expansion functions are sinusoids of varying frequency and unbounded duration [28]; the Fourier transform is a useful tool for analyzing the frequency content of a signal, but too much Fourier data is needed to reconstruct a signal locally. In such cases wavelet analysis is often very effective, because it provides a simple approach for dealing with the local aspects of a signal. The wavelet transform's expansion functions are therefore little waves of finite duration and varying frequency. One of the basic and prominent wavelets is the Haar wavelet, which gives a good representation of an image with few coefficients [29].
2.2 The Haar function
Haar functions have been used since 1910, when they were introduced by the Hungarian mathematician Alfréd Haar [30].
The Haar scaling function is defined as
$\phi(t)= \begin{cases}1, & 0 \leq t<1 \\ 0, & \text { otherwise. }\end{cases}$ (2)
Definition: (The Haar Space $V_j$ ) Let $\phi(t)$ be the Haar function in Eq. (2) and $j \in \mathbb{Z}$. We define
$V_j=\operatorname{span}\left\{\phi\left(2^j t-l\right)\right\}_{l \in \mathbb{Z}} \cap L^2(\mathbb{R})$ (3)
2.3 Haar wavelet function
The Haar wavelet function $\psi(t)$ is given by
$\psi(t)=\phi(2 t)-\phi(2 t-1)$;
$\psi(t)= \begin{cases}1, & 0 \leq t<\frac{1}{2} \\ -1, & \frac{1}{2} \leq t<1 \\ 0, & \text { otherwise. }\end{cases}$ (4)
Definition: (The Haar wavelet Space $\mathrm{H}_{\mathrm{j}}$) Let $\psi(t)$ be the Haar wavelet function in Eq. (4), $j \in \mathbb{Z}$. We define
$H_j=\operatorname{span}\left\{\psi\left(2^j t-l\right)\right\}_{l \in \mathbb{Z}} \cap L^2(\mathbb{R})$ (5)
Therefore, the sets $\left\{\phi_{j, l}(t)=2^{\frac{j}{2}} \phi\left(2^j t-l\right): l \in \mathbb{Z}\right\}$ and $\left\{\psi_{j, l}(t)=2^{\frac{j}{2}} \psi\left(2^j t-l\right): l \in \mathbb{Z}\right\}$ are orthonormal bases for $V_j$ and $H_j$ [25]. The scale $j$ determines dilation, or the visibility in frequency, $l$ represents translation, and $2^{\frac{j}{2}}$ controls the amplitude of the signal. The wavelet subspaces $H_j$ fill the gaps between successive scales:
$V_{j+1}=V_j \oplus H_j$ (6)
We can start with an approximation at some scale $V_0$ and then use wavelets to fill in the missing details on finer and finer scales. The union over all resolution levels includes all square-integrable functions:
$L^2(\mathbb{R})=V_0 \oplus \bigoplus_{j=0}^{\infty} H_j$ (7)
One of the great discoveries of wavelet analysis was that perfect reconstruction filter banks can be formed from the wavelet coefficient sequences of a 1D or 2D signal.
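As a quick sanity check on these definitions, the sketch below (Python with NumPy rather than the MATLAB used later; the names `phi`, `psi` and `inner` are ours) evaluates the Haar functions of Eqs. (2) and (4) and verifies numerically that the dilated and translated wavelets $2^{j/2}\psi(2^j t - l)$ are orthonormal:

```python
import numpy as np

def phi(t):
    """Haar scaling function of Eq. (2): 1 on [0,1), 0 elsewhere."""
    t = np.asarray(t, dtype=float)
    return np.where((t >= 0) & (t < 1), 1.0, 0.0)

def psi(t):
    """Haar wavelet of Eq. (4): phi(2t) - phi(2t-1), i.e. +1 on [0,1/2), -1 on [1/2,1)."""
    return phi(2 * t) - phi(2 * t - 1)

def inner(f, g, a=-2.0, b=3.0, n=200000):
    """Approximate the L^2 inner product <f, g> on [a, b] by a Riemann sum."""
    t = np.linspace(a, b, n, endpoint=False)
    dt = (b - a) / n
    return np.sum(f(t) * g(t)) * dt

# psi_{j,l}(t) = 2^(j/2) * psi(2^j t - l)
pj = lambda j, l: (lambda t: 2 ** (j / 2) * psi(2 ** j * t - l))

print(abs(inner(pj(0, 0), pj(0, 0)) - 1) < 1e-3)  # True: unit norm
print(abs(inner(pj(0, 0), pj(0, 1))) < 1e-3)      # True: orthogonal translates
print(abs(inner(pj(1, 0), pj(0, 0))) < 1e-3)      # True: orthogonal across scales
```

The numerical tolerance only absorbs the Riemann-sum discretization error; the exact inner products are 1, 0 and 0.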
2.4 Discrete wavelet transform
Suppose $n$ is an even positive integer. We define the discrete Haar wavelet transform (DHWT) matrix as
$H_n=\left[\frac{W}{G}\right]=\frac{1}{\sqrt{2}}\left[\begin{array}{ccccccc}1 & 1 & 0 & 0 & & 0 & 0 \\ 0 & 0 & 1 & 1 & & 0 & 0 \\ \vdots & & & & \ddots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 & 1 \\ \hline 1 & -1 & 0 & 0 & & 0 & 0 \\ 0 & 0 & 1 & -1 & & 0 & 0 \\ \vdots & & & & \ddots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 & -1\end{array}\right]$ (8)
At each decomposition level this generates the $\frac{n}{2} \times n$ block $W$, called the averages block, and the $\frac{n}{2} \times n$ block $G$, called the detail block; other families of matrices can be obtained easily from their filter coefficients.
The filter
$h=\left(h_0, h_1\right)=\left(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\right)$ (9)
is called the Haar (low-pass) filter, and
$g=\left(g_0, g_1\right)=\left(\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}}\right)$, (10)
is called the Haar wavelet (high-pass) filter. Similarly, the discrete higher-order Daubechies wavelet transformation matrices can be obtained [27].
Here, $L=2l$ is the filter length, where $l$ is a natural number. For $l=1$, we recover the Haar matrix.
The matrix $H_n$ is orthogonal and still essentially computes averages and differences. The benefit, of course, is that we have an easy formula for the inverse:
$H_n^{-1}=H_n^T$ (12)
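A minimal sketch (the helper name `haar_matrix` is ours) that builds $H_n$ from the filters of Eqs. (9) and (10) and confirms the orthogonality property of Eq. (12):

```python
import numpy as np

def haar_matrix(n):
    """Build the n x n DHWT matrix of Eq. (8) from the Haar filters h and g."""
    assert n % 2 == 0, "the DHWT needs an even dimension"
    W = np.zeros((n // 2, n))   # averages block (low-pass filter h)
    G = np.zeros((n // 2, n))   # details block (high-pass filter g)
    for i in range(n // 2):
        W[i, 2 * i:2 * i + 2] = [1 / np.sqrt(2),  1 / np.sqrt(2)]
        G[i, 2 * i:2 * i + 2] = [1 / np.sqrt(2), -1 / np.sqrt(2)]
    return np.vstack([W, G])

H = haar_matrix(8)
print(np.allclose(H @ H.T, np.eye(8)))     # True: H is orthogonal
print(np.allclose(np.linalg.inv(H), H.T))  # True: the inverse is just the transpose
```

Because the inverse is simply the transpose, reconstruction costs no more than the forward transform.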
A two-dimensional DHWT of a discrete image $A$ can be performed whenever $A$ has an even number of rows ($m$) and an even number of columns ($n$). The 1-level wavelet transform of an image $A$ is defined as
$\tilde{A}=H_m A H_n^T,$ (13)

$\tilde{A}=\left[\frac{W}{G}\right] A\left[\frac{W}{G}\right]^T$ (14)

$\tilde{A}=\left[\begin{array}{ll}LL & HL \\ LH & HH\end{array}\right].$ (15)
At each level of decomposition, the wavelet coefficients are decimated by a factor of two, which improves the compression ratio [31]. The Haar wavelet decomposition produces the approximation coefficients (LL) and the horizontal (HL), vertical (LH) and diagonal (HH) detail coefficients, as in Eq. (15). The block matrix LL is then compressed by the SVD method, while the detail blocks LH, HL and HH are truncated to zero [32].
Remark: When an image $A$ has an odd number of rows $m$ or columns $n$, it can be extended by appending zeros so that it has an even number of rows and columns.
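The 1-level 2D transform of Eq. (13) and its four subbands can be sketched as follows (NumPy; `haar_matrix` is our name for the construction of Eq. (8)); since $H$ is orthogonal, the transform is perfectly invertible:

```python
import numpy as np

def haar_matrix(n):
    """n x n DHWT matrix of Eq. (8): low-pass rows W on top, high-pass rows G below."""
    H = np.zeros((n, n))
    for i in range(n // 2):
        H[i,           2 * i:2 * i + 2] = [1,  1]   # averages (W)
        H[i + n // 2,  2 * i:2 * i + 2] = [1, -1]   # details (G)
    return H / np.sqrt(2)

m = n = 4
A = np.arange(16, dtype=float).reshape(m, n)      # toy 4 x 4 "image"
At = haar_matrix(m) @ A @ haar_matrix(n).T        # Eq. (13)

# The four subbands of Eq. (15)
LL, HL = At[:m // 2, :n // 2], At[:m // 2, n // 2:]
LH, HH = At[m // 2:, :n // 2], At[m // 2:, n // 2:]
print(LL)  # coarse approximation: each entry is 2x the average of a 2x2 block

# Inverse transform: A = H_m^T * At * H_n, by Eq. (12)
print(np.allclose(haar_matrix(m).T @ At @ haar_matrix(n), A))  # True
```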
Let $A \in \mathbb{R}^{m \times n}$. The null spaces and column spaces of $A$ and $A^T$ provide sufficient detail to understand $A$. A basis and the dimension of each subspace can be obtained from the SVD. The SVD of $A$ is given as follows [33].
$A= \begin{cases}P\begin{pmatrix}\Sigma \\ 0\end{pmatrix} Q^T, & \Sigma=\operatorname{diag}\left(\sigma_1, \cdots, \sigma_n\right),\ \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n \geq 0, & \text {if } m \geq n, \\ P\begin{pmatrix}\Sigma & 0\end{pmatrix} Q^T, & \Sigma=\operatorname{diag}\left(\sigma_1, \cdots, \sigma_m\right),\ \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_m \geq 0, & \text {if } m \leq n.\end{cases}$ (16)
Here $P=\left[p_1, p_2, \cdots, p_m\right] \in \mathbb{R}^{m \times m}$ and $Q=\left[q_1, q_2, \cdots, q_n\right] \in \mathbb{R}^{n \times n}$ are orthogonal. The columns of $P$ and $Q$ are called the left and right singular vectors, respectively. The non-negative real numbers $\left\{\sigma_i\right\}_{i=1}^{\min (m, n)}$ are called singular values. The singular values are unique, but the singular vectors are not. The number of nonzero singular values is the rank of $A$, denoted rank(A) [11].
$A=\sum_{i=1}^{\operatorname{rank}(A)} \sigma_i p_i q_i^T$ (17)
Truncated singular value decomposition, the approximation of a matrix by one of lower rank (setting the smaller singular values to zero), has a prominent role in lossy image compression because it exploits the psychovisual redundancy of the human eye [6].
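The rank-one expansion of Eq. (17) and the effect of truncation can be checked numerically; the sketch below uses NumPy's SVD on a small random matrix (the test matrix is illustrative only):

```python
import numpy as np

# NumPy returns A = P * diag(s) * Qt with singular values in non-increasing order.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
P, s, Qt = np.linalg.svd(A, full_matrices=False)

# Full rank-one expansion, Eq. (17), reproduces A exactly.
expansion = sum(s[i] * np.outer(P[:, i], Qt[i]) for i in range(len(s)))
print(np.allclose(expansion, A))  # True

# Truncating after k terms gives the best rank-k approximation;
# its 2-norm error is the first discarded singular value, sigma_{k+1}.
k = 2
Ak = P[:, :k] @ np.diag(s[:k]) @ Qt[:k]
print(np.isclose(np.linalg.norm(A - Ak, 2), s[k]))  # True
```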
3.1 Optimality of SVD
We consider the SVD of a matrix $A$ of size $m \times n$ as in Eq. (16). For any $k$ with $1 \leq k<r=\operatorname{rank}(A)$, define
$A_k=\sum_{i=1}^k \sigma_i p_i q_i^T$ (18)
The Eckart–Young theorem says that $A_k$ is the best approximation to $A$ among all rank-$k$ matrices with respect to the Frobenius norm [34]. This result has been used to compress an image (matrix) $A$ of size $m \times n$, which initially has $m \times n$ entries to store on disk, measured in kilobytes (KB). If we use $A_k$ instead of $A$, we have an approximation of $A$ that can be stored with $k(m+n+1)$ values in the spatial domain, i.e., the entries of the vectors $p_i$, $q_i$ and the singular values $\sigma_i$, $i=1,2, \cdots, k$. For image compression, choosing the value of $k$ is a challenging task, and this is a limitation of SVD. Khalid et al. [14] considered a new upper bound for $k$ in terms of the size of the original image instead of its rank,
i.e., $k \leq \frac{m n}{m+n+1}$ (19)
According to Eq. (19), there are many choices of $k$ between 1 and $\frac{m n}{m+n+1}$. The concept of the numerical rank of a matrix helps us fix a suitable value for $k$.
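A small illustration of the storage count behind Eq. (19), for a hypothetical 512 × 512 image: the rank-$k$ factors occupy $k(m+n+1)$ numbers, which undercuts the $mn$ entries of $A$ exactly when $k$ stays below the bound:

```python
# Storage needed by the rank-k SVD approximation versus the raw image.
m, n = 512, 512
direct = m * n               # entries of A itself
bound = m * n / (m + n + 1)  # Eq. (19): largest useful k
print(int(bound))            # 255

for k in (8, 64, 255, 256):
    svd_storage = k * (m + n + 1)
    # Compression pays off only while svd_storage < direct,
    # i.e. for k up to floor(m*n / (m+n+1)) = 255 here.
    print(k, svd_storage, svd_storage < direct)
```

For this image size, $k = 255$ still saves space while $k = 256$ already exceeds the raw storage, which is exactly what the bound predicts.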
3.2 Estimating numerical rank
In matrix computations, the numerical rank of $A \in \mathbb{R}^{m \times n}$ can be defined as the number of singular values greater than a specified tolerance $\varepsilon>0$; it is denoted by $r_{\varepsilon}$ [35].
The singular values of the matrix $A$ in Eq. (16) and the numerical rank $r_{\varepsilon}$ satisfy
$\sigma_{r_{\varepsilon}+1} \leq \varepsilon<\sigma_{r_{\varepsilon}}$ (20)
It is important to note that the notion of numerical rank $r_{\varepsilon}$ is useful only when there is a well-defined gap between $\sigma_{r_{\varepsilon}}$ and $\sigma_{r_{\varepsilon}+1}$ [36]. Rapidly decaying singular values of $A$ yield a low numerical rank, which is exactly what the SVD compression method needs [8]. Clearly, for a small value of $\varepsilon>0$, we get $A_{r_{\varepsilon}}$ with rank $r_{\varepsilon}$ such that $\left\|A-A_{r_{\varepsilon}}\right\|_2=\sigma_{r_{\varepsilon}+1} \leq \varepsilon$.
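The sketch below (the helper name `numerical_rank` is ours) constructs a matrix with a clear spectral gap and checks that truncating at $r_{\varepsilon}$ leaves a 2-norm error of $\sigma_{r_{\varepsilon}+1} \leq \varepsilon$, as in Eq. (20):

```python
import numpy as np

def numerical_rank(s, eps):
    """Number of singular values greater than eps; s is in non-increasing order."""
    return int(np.sum(s > eps))

# Build a matrix with prescribed singular values via random orthogonal factors,
# so there is a well-defined gap between sigma_4 = 1.0 and sigma_5 = 0.05.
rng = np.random.default_rng(1)
s_true = np.array([10.0, 5.0, 2.0, 1.0, 0.05, 0.01, 0.005, 0.001])
P0, _ = np.linalg.qr(rng.standard_normal((8, 8)))
Q0, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A = P0 @ np.diag(s_true) @ Q0.T

P, s, Qt = np.linalg.svd(A)
eps = 0.5                      # tolerance sitting inside the gap
r = numerical_rank(s, eps)
print(r)                       # 4

Ar = P[:, :r] @ np.diag(s[:r]) @ Qt[:r]
print(np.linalg.norm(A - Ar, 2) <= eps)  # True: error sigma_{r+1} <= eps
```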
3.3 Methodology to image compression
SVD methods depend heavily on a small number of dominant singular values to reconstruct the image, because the large number of singular values close to zero represents redundant information in the digital image. The few dominant singular values are closely associated with the numerical rank $\left(r_{\varepsilon}\right)$ for a specified tolerance $\varepsilon>0$. This fact is utilized to find $r_{\varepsilon}$ in the wavelet domain, because the dominant singular values of the original image are found in the SVD of its smooth/average coefficients. These facts are presented in the form of an algorithm:
________________________________________________
ALGORITHM:
________________________________________________
Input: Digital image A and tolerance $\varepsilon\left(0<\varepsilon<\sigma_1\right)$
Output: $B \approx A$ with $\operatorname{rank}(B)=r_{\varepsilon}$.
1. Apply the 1-level discrete Haar wavelet transform to A to obtain $\tilde{A}=H_m A H_n^T$.
2. Quantize $\tilde{A}$ by replacing each of $HL$, $LH$ and $HH$ with the $\frac{m}{2} \times \frac{n}{2}$ zero matrix, i.e.,
$\tilde{A}_q=\left[\begin{array}{cc}LL & 0 \\ 0 & 0\end{array}\right]$.
3. Compute $r_{\varepsilon}$ from the SVD of $LL$. Here $LL$ is the coarser representation of $\tilde{A}$.
4. Obtain the best rank-$r_{\varepsilon}$ matrix approximating $LL$, i.e., $\tilde{LL}=\sum_{i=1}^{r_{\varepsilon}} \sigma_i p_i q_i^T$, as in Eq. (18).
5. Construct B by applying the 1-level inverse Haar wavelet transform with $\tilde{LL}$ as the smooth coefficients and all detail coefficients set to zero.
________________________________________________
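The five steps above can be sketched end to end as follows (Python/NumPy rather than the MATLAB used in the experiments; all function names are ours, and the input is assumed to have even dimensions, otherwise it would first be zero-padded as in the Remark):

```python
import numpy as np

def haar_matrix(n):
    """n x n DHWT matrix of Eq. (8)."""
    H = np.zeros((n, n))
    for i in range(n // 2):
        H[i,          2 * i:2 * i + 2] = [1,  1]
        H[i + n // 2, 2 * i:2 * i + 2] = [1, -1]
    return H / np.sqrt(2)

def compress(A, eps):
    m, n = A.shape                        # assumed even
    Hm, Hn = haar_matrix(m), haar_matrix(n)
    At = Hm @ A @ Hn.T                    # step 1: forward DWT
    LL = At[:m // 2, :n // 2]             # step 2: keep LL, details -> 0
    P, s, Qt = np.linalg.svd(LL)
    r = max(1, int(np.sum(s > eps)))      # step 3: numerical rank r_eps
    LLr = P[:, :r] @ np.diag(s[:r]) @ Qt[:r]  # step 4: best rank-r approx
    Aq = np.zeros_like(At)
    Aq[:m // 2, :n // 2] = LLr
    B = Hm.T @ Aq @ Hn                    # step 5: inverse DWT
    return B, r

rng = np.random.default_rng(0)
A = rng.random((8, 8)) + 1.0              # toy positive "image"
B, r = compress(A, eps=0.5)
print(B.shape, 1 <= r <= 4)               # (8, 8) True
```

Only the $r_{\varepsilon}(\frac{m}{2}+\frac{n}{2}+1)$ values of the truncated factors of LL need to be stored or transmitted; B is what the receiver reconstructs.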
Picture quality is an important aspect of image compression systems, describing the type and amount of degradation in the reconstructed image. Among the many objective numerical measures of image quality, those based on comparable distortion measures are defined as follows:
4.1 Meansquare error (MSE) and peak signaltonoise ratio (PSNR)
Let $A \in \mathbb{R}^{m \times n}$ be a given digital image and $B \in \mathbb{R}^{m \times n}$ its approximation. The $MSE$ and $PSNR$ between A and B are defined as [37]
$MSE(A, B)=\frac{\sum_{i=1}^m \sum_{j=1}^n\left(a_{i j}-b_{i j}\right)^2}{m n}$ (21)

$PSNR(A, B)=10 \log _{10}\left(\frac{\left(L^{\prime}-1\right)^2}{MSE(A, B)}\right),$ (22)

$PSNR(A, B)=10 \log _{10}\left(\frac{m n\left(L^{\prime}-1\right)^2}{\|A-B\|_F^2}\right)$ (23)
where $L^{\prime}$ is the number of gray levels. The unit of $PSNR$ is the decibel (dB). $MSE$ is inversely proportional to $PSNR$; smaller values of $MSE$ mean a better approximation.
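Eqs. (21)-(23) can be sketched directly (the function names are ours; $L' = 256$ for an 8-bit image, so the peak value is $L' - 1 = 255$):

```python
import numpy as np

def mse(A, B):
    """Eq. (21): mean of the squared pixel differences."""
    return np.mean((A - B) ** 2)

def psnr(A, B, levels=256):
    """Eq. (22): PSNR in dB for an image with `levels` gray levels."""
    return 10 * np.log10((levels - 1) ** 2 / mse(A, B))

A = np.array([[10.0, 20.0], [30.0, 40.0]])
B = A + 1.0                          # uniform error of 1 gray level
print(mse(A, B))                     # 1.0
print(round(psnr(A, B), 2))          # 48.13

# Eq. (23) gives the same PSNR via the Frobenius norm,
# since MSE = ||A - B||_F^2 / (m*n).
m, n = A.shape
alt = 10 * np.log10(m * n * 255 ** 2 / np.linalg.norm(A - B, 'fro') ** 2)
print(np.isclose(alt, psnr(A, B)))   # True
```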
4.2 Compression ratio
The compression ratio is defined as the ratio of the size of the original image to the size of the compressed image.
$CR(A, B)=\frac{\text{Memory required to store the entries of the original image } A}{\text{Memory required to store the entries of the compressed image } B}$ (24)
The strength of an image compression scheme is measured by its CR.
We implemented the proposed algorithm in MATLAB R2009a on a machine with an i5-2450M CPU @ 2.50 GHz, 4 GB RAM and a 64-bit operating system.
An innovative study of image compression with respect to the numerical rank $\left(r_{\varepsilon}\right)$ computed in the wavelet domain for a given tolerance $\varepsilon$ is presented. For comparison purposes, we designed the tolerance values so that the proposed algorithm provides a platform for comparison with other related works. It was implemented on test images, and the resulting observations are recorded in the tables and figures.
Table 1 shows the comparison between the proposed technique and the spatial-domain (SVD-alone) method, tested on the grayscale pigeon image. To further assess the performance of the method, it was tested on the fruits image, as shown in Table 2.
Table 3 reveals that the proposed algorithm achieves reasonable MSE $\left(C_2 \& C_7\right)$ and PSNR $\left(C_3 \& C_8\right)$ with high CR $\left(C_4 \& C_9\right)$. The behavior of the compression ratio for the Lena image with respect to the tolerance is shown in Figure 1.
The performance of the proposed routine compared to the BSPM scheme for a color image (Desert.jpg, 1024 × 768) is shown in Table 4.
Remark: In Table 4, the $PSNR$ value of 92.97 reported for $MSE=0.36$ (see column 2) appears to be incorrect; the correct value is 52.56 [16].
We designed different tolerances for the three channels $\left(\varepsilon_R, \varepsilon_G\right.$ and $\left.\varepsilon_B\right)$ to match $r_{\varepsilon}$ for comparison with the existing methods of Khalid et al. [15] and Khalid [16]. The observations in Table 4 show that the proposed algorithm achieves a higher compression ratio while reasonably balancing PSNR. Also, the PSNR values increase as the numerical rank $\left(r_{\varepsilon}\right)$ increases (see $C_7 \& C_8$ in Table 4). A human visual comparison between the sample image(s) and compressed image(s) is shown in Figure 2.
A visual analysis of the compressed images obtained from the proposed compression methodology is illustrated in Figure 2. The original (a, b, c and d) and compressed (e, f, g and h) images are shown, illustrating the compressed images with smaller amounts of data. Different visual characteristics were tested with different images. The simulation results clearly demonstrate that the proposed method achieves better compression with acceptable visual quality (as per the HVS) compared to conventional SVD methods [9].
Table 1. PSNR and CR values for SVD and the proposed method on the pigeon image (768×512). Left block: SVD compression; right block: proposed method.

| k | MSE | PSNR | CR | ε | r_ε | MSE | PSNR | CR |
|---|---|---|---|---|---|---|---|---|
| 08 | 0.0118 | 67.4267 | 38.3700 | 20.1 | 08 | 0.0119 | 67.3785 | 76.6802 |
| 16 | 0.0065 | 70.0304 | 19.1850 | 11.1 | 16 | 0.0067 | 69.8859 | 38.3401 |
| 32 | 0.0036 | 72.5418 | 9.5925 | 5.7 | 32 | 0.0040 | 72.1188 | 19.1700 |
| 64 | 0.0019 | 75.2900 | 4.7963 | 3.0 | 64 | 0.0026 | 74.0490 | 9.5850 |
| 128 | 0.0008 | 78.9971 | 2.3981 | 1.4 | 128 | 0.0018 | 75.4607 | 4.7925 |
Table 2. Comparison of SVD techniques with the proposed method using the fruits image (512×512). Left block: SVD compression; right block: proposed method.

| k | MSE | PSNR | CR | ε_R | ε_G | ε_B | r_ε | MSE | PSNR | CR |
|---|---|---|---|---|---|---|---|---|---|---|
| 08 | 0.0185 | 65.4489 | 1.7838 | 14.1 | 13.5 | 13.5 | 08 | 0.0077 | 69.2828 | 3.9741 |
| 16 | 0.0150 | 66.3756 | 1.5503 | 7.0 | 7.1 | 7.4 | 16 | 0.0044 | 71.7090 | 3.4402 |
| 32 | 0.0114 | 67.5643 | 1.3509 | 4.0 | 4.1 | 4.5 | 32 | 0.0029 | 74.2223 | 3.0733 |
| 64 | 0.0088 | 68.6738 | 1.2000 | 1.9 | 2.0 | 2.1 | 64 | 0.0014 | 76.6132 | 2.8456 |
| 128 | 0.0017 | 75.7361 | 1.0974 | 0.71 | 0.79 | 0.76 | 128 | 0.0010 | 78.1028 | 2.7604 |
Table 3. Obtained numerical results of image quality parameters tested on the Lena image (512×512) and comparison with the work of Tian et al. [12]. Left block: SVD compression; right block: proposed method.

| k | MSE | PSNR | CR | ε | r_ε | MSE | PSNR | CR |
|---|---|---|---|---|---|---|---|---|
| 08 | 0.0070 | 69.6843 | 31.48186 | 14.1 | 08 | 0.0071 | 69.6333 | 63.8752 |
| 16 | 0.0038 | 72.3836 | 15.98439 | 8.1 | 16 | 0.0039 | 72.2171 | 31.9376 |
| 32 | 0.0018 | 75.6716 | 7.992195 | 3.9 | 32 | 0.0020 | 75.0684 | 15.9688 |
| 64 | 0.0006 | 80.0204 | 3.996098 | 1.9 | 64 | 0.0011 | 77.7585 | 7.8744 |
| 128 | 0.0001 | 86.2687 | 1.998043 | 0.61 | 128 | 0.0007 | 79.4549 | 3.9922 |
Table 4. Comparison of obtained outcomes against the block SVD power method (BSPM) for various tolerances [15, 16]. Left block: BSPM; right block: proposed method.

| k | PSNR | CR | ε_R | ε_G | ε_B | r_ε | PSNR | CR |
|---|---|---|---|---|---|---|---|---|
| 50 | 48.44 | 6.98 | 6.38 | 2.61 | 1.12 | 50 | 75.5155 | 10.8256 |
| 100 | 51.20 | 6.94 | 3.65 | 1.57 | 0.69 | 100 | 77.2613 | 9.6046 |
| 122 | 92.97 (52.56) | 6.92 | 3.02 | 1.31 | 0.6 | 122 | 77.7329 | 9.2808 |
| 150 | 53.60 | 6.92 | 2.36 | 1.07 | 0.5 | 150 | 78.1662 | 9.1879 |
| 200 | 54.43 | 6.92 | 1.51 | 0.71 | 0.36 | 200 | 78.6212 | 9.0471 |
| 250 | 58.88 | 6.89 | 0.93 | 0.44 | 0.24 | 250 | 78.8227 | 8.9978 |
| 300 | 63.26 | 6.88 | 0.47 | 0.24 | 0.15 | 300 | 78.8946 | 8.5507 |
| 350 | 67.19 | 6.87 | 0.17 | 0.095 | 0.08 | 350 | 78.9100 | 8.5243 |
Figure 1. Variation of compression ratio ( $C R$ ) with different numerical rank ( $r_{\varepsilon}$ ) for Lena image
Figure 2. Human visual comparison: a, b, c, d are sample images and e, f, g, h are the corresponding compressed images, with suitable $\varepsilon$ and the acquired memory space in KB (kilobytes)
Let A be a given digital image of size $m \times n$. For SVD, the complexity of computing the compressed image with numerical rank $r_{\varepsilon}$ is found to be $O(m \times n \times \min \{m, n\})$ [38], whereas the proposed algorithm applies the SVD to a matrix of size $\frac{m}{2} \times \frac{n}{2}$, halving both dimensions. For this reduction, the proposed method pays $O(m \times n)$ operations [12] to obtain the wavelet transform of A (the Haar transform takes fewer operations than other wavelets). Whenever the image database is large, the proposed algorithm plays a crucial role in reducing complexity (hence CPU time) and memory compared to conventional SVD.
The CPU time taken to construct the compressed image at various numerical ranks ($r_{\varepsilon}$) for the conventional SVD and the proposed method on the Lena image is shown in Figure 3. In this figure, the x and y axes represent the numerical rank and the corresponding processing time in seconds, respectively. The figure demonstrates the reduction in computation discussed above, characterized in terms of CPU time, relative to the SVD-alone method of Tian et al. [12].
Figure 3. CPU time versus numerical rank
The proposed algorithm highlights the significant role of the numerical rank ($r_{\varepsilon}$) for image compression in the wavelet domain. It has two advantages over conventional SVD and its variants: (i) the comparisons clearly illustrate that the proposed model achieves a high compression ratio with tolerable visual quality as per the human vision system; (ii) it has lower computational complexity because of the fast wavelet transform. The test images considered in the results and discussion act as testimonials demonstrating the features of the proposed algorithm over conventional SVD methods. The Haar wavelet transform reduces the computational processing time and speeds up compression and decompression compared to higher-order wavelet families. Applying the proposed algorithm to video compression, where the computational burden is high, is a natural extension. Future research may continue to develop more effective transform-based algorithms and explore better performance in data compression.
[1] Jain, A.K. (1989). Fundamentals of Digital Image Processing. Englewood Cliffs, NJ: Prentice Hall.
[2] Jayant, N., Johnston, J., Safranek, R. (1993). Signal compression based on models of human perception. Proceedings of the IEEE, 81(10): 1385-1422. https://doi.org/10.1109/5.241504
[3] Munteanu, A., Cornelis, J., Cristea, P. (1999). Wavelet-based lossless compression of coronary angiographic images. IEEE Transactions on Medical Imaging, 18(3): 272-281. https://doi.org/10.1109/42.764904
[4] Sayood, K. (2006). Introduction to Data Compression. Elsevier, Third Edition.
[5] Salomon, D. (2007). Data Compression. Springer, Fourth Edition.
[6] Rufai, A.M., Anbarjafari, G., Demirel, H. (2014). Lossy image compression using singular value decomposition and wavelet difference reduction. Digital Signal Processing, 24: 117-123. https://doi.org/10.1016/j.dsp.2013.09.008
[7] Stoica, A., Larabi, M.C., Fernandez-Maloigne, C. (2004). Amélioration de la qualité visuelle d'images couleur dans le cadre du standard de compression JPEG2000 - Visual quality enhancement for color images in the framework of the JPEG2000 compression standard. Traitement du Signal, 21(6): 661-677.
[8] Ashino, R., Morimoto, A., Nagase, M., Vaillancourt, R. (2005). Comparing multiresolution SVD with other methods for image compression. Advances in Analysis, pp. 457-470. https://doi.org/10.1142/9789812701732_0042
[9] Kumar, R., Patbhaje, U., Kumar, A. (2019). An efficient technique for image compression and quality retrieval using matrix completion. Journal of King Saud University - Computer and Information Sciences. https://doi.org/10.1016/j.jksuci.2019.08.002
[10] Stewart, G.W. (1993). On the early history of the singular value decomposition. SIAM Review, 35(4): 551-566. https://doi.org/10.1137/1035134
[11] Golub, G.H., Van Loan, C.F. (2015). Matrix Computations. Hindustan Book Agency.
[12] Tian, M., Luo, S.W., Liao, L.Z. (2005). An investigation into using singular value decomposition as a method of image compression. Proc. of the Fourth Int. Conf. on Machine Learning and Cybernetics, 8: 5200-5204. https://doi.org/10.1109/ICMLC.2005.1527861
[13] Rufai, A.M., Anbarjafari, G., Demirel, H. (2013). Lossy medical image compression using Huffman coding and singular value decomposition. IEEE Signal Processing and Communications Applications Conference, pp. 1-4. https://doi.org/10.1109/SIU.2013.6531592
[14] Khalid, E.A., Youness, C. (2015). Two new methods for image compression. International Journal of Imaging and Robotics, 15(4): 1-11.
[15] Khalid, E.A., Ouhda, M., Aksasse, B., Ouanan, M. (2018). An application of linear algebra to image compression. Homological and Combinatorial Methods in Algebra, Springer Proceedings in Mathematics and Statistics, 228: 41-54. https://doi.org/10.1007/978-3-319-74195-6_4
[16] Khalid, E.A. (2019). Image compression based on block SVD power method. Journal of Intelligent Systems, 1-15. https://doi.org/10.1515/jisys-2018-0034
[17] Liu, S., Bai, W., Zeng, N., Wang, S. (2019). A fast fractal based compression for MRI images. IEEE Access, 7: 62412-62420. https://doi.org/10.1109/ACCESS.2019.2916934
[18] Xu, S., Chang, C.C., Liu, Y. (2020). A novel image compression technology based on vector quantisation and linear regression prediction. Connection Science, 33(6): 1-18. https://doi.org/10.1080/09540091.2020.1806206
[19] Bascones, D., Gonzalez, C., Mozos, S. (2020). An extremely pipelined FPGA implementation of a lossy hyperspectral image compression algorithm. IEEE Transactions on Geoscience and Remote Sensing, 58(10): 7435-7447. https://doi.org/10.1109/TGRS.2020.2982586
[20] Bouida, A., Beladgham, M., Bassou, A., Benyahia, I., Ahmed-Taleb, A., Haouam, I., Kamline, M. (2020). Evaluation of textural degradation in compressed image texture feature and edges. Traitement du Signal, 37(5): 753-762. https://doi.org/10.18280/ts.370507
[21] Hu, S.G., Wu, X.F., Xi, Z.F. (2012). Application of 2D wavelet transform in image compression based on Matlab. In Computer, Informatics, Cybernetics and Applications, Springer, Dordrecht, 491-499. https://doi.org/10.1007/978-94-007-1839-5_52
[22] Mulcahy, C. (1997). Image compression using the Haar wavelet transform. Spelman Science and Mathematics Journal, 1(1): 22-31.
[23] Grgic, S., Grgic, M., Zovko-Cihlar, B. (2001). Performance analysis of image compression using wavelets. IEEE Transactions on Industrial Electronics, 48(3): 682-695. https://doi.org/10.1109/41.925596
[24] Usevitch, B.E. (2001). A tutorial on modern lossy wavelet image compression: Foundations of JPEG 2000. IEEE Signal Processing Magazine, 18(5): 21-35. https://doi.org/10.1109/79.952803
[25] Yang, J.F., Lu, C.L. (1995). Combined techniques of singular value decomposition and vector quantization for image coding. IEEE Transactions on Image Processing, 4(8): 1141-1146. https://doi.org/10.1109/83.403419
[26] Krishnaraj, N., Elhoseny, M., Thenmozhi, M., Selim, M.M., Shankar, K. (2020). Deep learning model for real-time image compression in internet of underwater things (IoUT). Journal of Real-Time Image Processing, 17(6): 2097-2111. https://doi.org/10.1007/s11554-019-00879-6
[27] Ruch, D.K., Fleet, P.J.V. (2009). Wavelet Theory: An Elementary Approach with Applications. John Wiley & Sons, Inc.
[28] Gonzalez, R.C., Woods, R.E., Eddins, S.L. (2016). Digital Image Processing Using MATLAB. McGraw Hill Education.
[29] Dan, L., Li, J., Gu, X., Liao, J., Zhan, S. (2006). The application of image compression based on discrete wavelet transform. In Wavelet Active Media Technology and Information Processing, 2: 291-298. https://doi.org/10.1142/9789812772763_0045
[30] Stanković, R.S., Falkowski, B.J. (2003). The Haar wavelet transform: Its status and achievements. Computers and Electrical Engineering, 29(1): 25-44. https://doi.org/10.1016/S0045-7906(01)00011-8
[31] Kumar, R.N., Jagadale, B.N., Bhat, J.S. (2019). A lossless image compression algorithm using wavelets and fractional Fourier transform. SN Applied Sciences, 1(3): 1266. https://doi.org/10.1007/s42452-019-0276-z
[32] Fleet, P.J.V. (2011). Discrete Wavelet Transformations: An Elementary Approach with Applications. John Wiley and Sons.
[33] Ipsen, I.C.F. (2009). Numerical Matrix Analysis: Linear Systems and Least Squares. SIAM.
[34] Trefethen, L.N., Bau III, D. (1997). Numerical Linear Algebra (Vol. 50). SIAM.
[35] Foster, L.V., Davis, T.A. (2013). Algorithm 933: Reliable calculation of numerical rank, null space bases, pseudoinverse solutions, and basic solutions using SuiteSparseQR. ACM Transactions on Mathematical Software, 40(1): 1-24. https://doi.org/10.1145/2513109.2513116
[36] Ubaru, S., Saad, Y. (2016). Fast methods for estimating the numerical rank of large matrices. In International Conference on Machine Learning, 48: 468-477.
[37] Mrak, M., Grgic, S., Grgic, M. (2003). Picture quality measures in image compression systems. In The IEEE Region 8 EUROCON 2003: Computer as a Tool, 1: 233-236. https://doi.org/10.1109/EURCON.2003.1248017
[38] Liu, W., Lin, W. (2011). Additive white Gaussian noise level estimation in SVD domain for images. IEEE Transactions on Image Processing, 22(3): 872-883. https://doi.org/10.1109/TIP.2012.2219544