Evaluation of Textural Degradation in Compressed Medical and Biometric Images by Analyzing Image Texture Features and Edges

Ahmed Bouida*, Mohammed Beladgham, Abdesselam Bassou, Ismahane Benyahia, Abdelmalek Ahmed-Taleb, Imene Haouam, Miloud Kamline

Information Processing and Telecommunication Laboratory (LTIT), University TAHRI Mohammed Bechar, Bechar 08000, Algeria

Laboratory of IEMN DOAE, UMR CNRS 852, University of Valenciennes, Valenciennes Cedex 59313, France

Corresponding Author Email: bouida.ahmed@univ-bechar.dz

Pages: 753-762 | DOI: https://doi.org/10.18280/ts.370507

Received: 28 July 2020 | Revised: 12 September 2020 | Accepted: 23 September 2020 | Available online: 25 November 2020

© 2020 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Image compression has become essential during transmission and storage in various data applications, especially in medical and biometric systems. To assess the effectiveness of the compression process on images and evaluate the degradation it causes, image quality assessment has become an important tool in image services. We note that objective image quality criteria depend strongly on the image type and its texture composition. The current trend is to find metrics that better qualify the errors in compressed images and correlate well with the human visual system. This paper presents an investigation that examines and evaluates image compression degradation using a recent image quality assessment concept based on texture and edge analysis. To carry out this evaluation, we compress medical and biometric images using second-generation wavelet compression algorithms and study the degradation of texture information in these images.

Keywords: 

image quality assessment, image texture analysis, image edge detection, wavelet-based compression, medical and biometric images

1. Introduction

In the field of medical or biometric imaging, the exchange and storage of data have become a crucial concern. Consequently, compressing these images to reduce their size has become a necessary tool [1]. The principal goal of compression is to represent an image with a small number of bits, with the best fidelity for the available communication or storage capacity. To achieve fairly high compression ratios, lossy compression techniques must be used; these inherently alter the information in the compressed images and can create considerable degradation when the reconstructed images are exploited [2]. The foremost concern is therefore to prevent this degradation from affecting the final use of these images, such as identification and diagnosis in medical images [3] and classification, identification, or authentication in biometric images [4].

The most used lossy compression techniques follow a three-step scheme [1, 2]: a transform phase followed by quantization and entropy coding phases. In the first phase, image pre-processing is used to manipulate and decorrelate the raw image data through a transform method. In the second step, a quantization scheme is employed to discard unnecessary information, reducing the size of the images without noticeably affecting their quality for the target system. Finally, a coding process, generally entropy-based, provides a binary representation of these images for storage or transmission. In the JPEG standard (ISO/IEC 10918-1), an acronym for Joint Photographic Experts Group, the basic transform used is the Discrete Cosine Transform (DCT), whereas the JPEG2000 standard (ISO/IEC 15444-1) uses the Discrete Wavelet Transform (DWT). To optimize compression algorithms, many researchers propose other combinations of algorithms aiming to reduce compression-induced degradation while reaching high compression ratios; most of these techniques use wavelet transforms of one kind or another [5, 6].

To decompose an image with the DWT, two complementary waveform functions are used [7]. The first represents the low frequencies corresponding to the approximation part of the image, and the second represents the high frequencies corresponding to the detail parts. Technically, by applying the corresponding low-pass and high-pass filter bank over rows and columns, an image can be decomposed into four sub-bands: the Low-Low band (or approximation band), the High-Low band (or horizontal detail band), the Low-High band (or vertical detail band), and the High-High band (or diagonal detail band). To realize a further decomposition level, only the approximation image is decomposed again, generating second-level approximation and detail sub-bands.
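
As a minimal illustration, the following sketch (assuming the PyWavelets package and a grayscale image stored as a NumPy array; 'bior4.4' is PyWavelets' name for the CDF 9/7 biorthogonal wavelet used later in this paper) performs such a two-level decomposition in which only the approximation band is re-decomposed:

```python
import numpy as np
import pywt

# Hypothetical 8-bit grayscale test image (any 2-D NumPy array works).
image = np.random.randint(0, 256, (256, 256)).astype(np.float64)

# First-level DWT: approximation and horizontal/vertical/diagonal detail sub-bands.
cA1, (cH1, cV1, cD1) = pywt.dwt2(image, 'bior4.4')

# Second level: only the approximation band is decomposed again.
cA2, (cH2, cV2, cD2) = pywt.dwt2(cA1, 'bior4.4')

print(image.shape, cA1.shape, cA2.shape)  # each level roughly halves each dimension
```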

Unlike the DWT decomposition, which allows multi-level decomposition only of the approximation band without touching the detail bands, the Wavelet Packet Transform (PWT) makes it possible to analyze and decompose all the sub-bands, approximation and details, to carry out a true multi-resolution analysis [8]. This version of wavelet decomposition has proven effective in compressing signals and images [9]. On the other hand, separable dyadic analysis with classical wavelets has the disadvantage of requiring three families of wavelets, with an expansion factor between two successive scales generally equal to 4. This has motivated researchers to create non-separable wavelet transforms with a new analysis using only one family of wavelets and an expansion factor equal to 2 between two successive resolutions. This analysis is called the Quincunx Wavelet Transform (QWT) [10], which gives good results when applied to image compression [11].
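
For the packet decomposition, a similar hedged sketch with PyWavelets is possible (the quincunx transform, being non-separable, is not available in PyWavelets, so only the separable wavelet packet tree is illustrated here):

```python
import numpy as np
import pywt

# Hypothetical grayscale test image.
image = np.random.randint(0, 256, (256, 256)).astype(np.float64)

# Wavelet packet tree: every sub-band (approximation and details) is re-decomposed.
wp = pywt.WaveletPacket2D(data=image, wavelet='bior4.4',
                          mode='periodization', maxlevel=2)

# At level 2 a separable packet decomposition yields 4**2 = 16 sub-bands.
nodes = wp.get_level(2)
print(len(nodes), [node.path for node in nodes[:4]])  # e.g. 'aa', 'ah', 'av', 'ad'
```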

Generally, to assess compression algorithms and evaluate the quality of the obtained images, two approaches are used [12]. Subjective methods involve human observers who assign a score to the degraded image, following the recommendations of ISO 3664:2000 and ITU-R BT.500-13. Objective methods allow scientists to use numerical image quality assessment (IQA) metrics to qualify various image processing applications such as image compression. With respect to the reference to the original image, full-reference methods (FR-IQA) use the whole original image as a reference, reduced-reference methods (RR-IQA) use partial reference parameters extracted from the original image, and no-reference methods (NR-IQA) do not use any reference from the original image [12].

The classical quality techniques that use a full reference image aim to calculate the differential errors induced by compression on the spatial representation of the image [13], on its structural composition [14], or on its spatio-frequency representation [15]. Other concepts focus the quality evaluation on the textural aspect of images [16] or study geometrical structures, including edges or boundaries, to evaluate the degradation level during the compression process [17].

Based on all these considerations, our goal in this work is to study and evaluate the degradation in compressed medical and biometric images, in order to qualify the impact of some compression algorithms applied to this kind of image, using an analysis along three ways of quality assessment: a classical structural quality evaluation, a textural quality analysis, and finally an evaluation of edge degradation quality.

To this end, our paper first presents the related works that investigate compressed image quality, especially those that study and evaluate medical and biometric images. Our chosen methodology, which describes image quality evaluation from the textural and edge points of view, is presented next. Finally, an experimental study is presented with a discussion of the textural degradation analysis results obtained on compressed medical and biometric images.

2. Related Works

Some interesting research works investigate the quality evaluation of compressed images in general. Sakuldee and Udomhunsakul [18] and Khosravy et al. [19] summarize and review the objective performance of compressed image quality assessments. Vora et al. [20] proposed a structural content Laplacian mean square error developed from a few fundamental objective quality measurements, and Hu et al. [21] study the quality of compressed images using a perceptually weighted distortion in terms of the MSE. A content-weighted mean-squared error for the quality assessment of compressed images was proposed by Gu et al. [22]. Another work addresses fine-grained quality assessment of compressed images [23], and Krivenko et al. [24] proposed an approach to predict visual quality metrics for lossy compressed images.

For medical images, Kocsis et al. [25] proposed a set of quantitative measurements related to medical image quality parameters, Perez-Diaz [26] presents a study using some known quality metrics to evaluate the quality of medical images, and Chow and Paramesran [27] published a review and a classification of medical image quality assessment. An improved SSIM quality index for compressed medical images was developed by Kumar et al. [28]. Gaudeau et al. [29] applied compressed image quality assessment to interactive upper limb radiology images, and Liu et al. [30] studied perceptual quality assessment of medical images.

For biometric images, most of the works found in the literature evaluate the quality of images only under acquisition conditions and study their impact on biometric recognition [31-34]. Regarding compressed biometric image quality, we can find evaluations of some compression algorithms applied to fingerprint and face recognition systems [35] or to fingerprint systems only [36]. Thanki et al. [37] studied the compressive sensing concept applied to biometric systems and, in another work [38], exploited compressive sensing in a hybrid compression method for various biometric and biomedical data.

3. Texture Image Analysis

To evaluate the effect of compression on medical and biometric images, we focus our analysis on the degradation of the textural structure. Texture is an elementary characteristic that defines the structural content of all or a specified area of an image. It conveys information about the spatial structure of color or intensity in an image [39]. Generally, texture cannot be described at a single point; it is defined by the spatial distribution of intensity values over neighborhoods. Texture perception is therefore affected by various attributes such as density, depth, size, orientation, and others. We also note that the resolution and coding at which an image is represented define the perception scale of the texture [39].

To evaluate the structural quality of compressed images, the most used full-reference objective metric based on pixel-wise error is the Peak Signal-to-Noise Ratio (PSNR) [13], which represents the error noise affecting the fidelity of the image signal power. As shown in Eqs. (1) and (2), this parameter is calculated, through the MSE, between the original image $X_0$ and the compressed image $X_c$:

$P S N R=10 \times \log _{10}\left(\frac{d^{2}}{M S E}\right)$      (1)

where d is the maximum possible intensity value of the image (255 for 8-bit images) and the Mean Squared Error (MSE) is calculated by:

$M S E=\frac{1}{N \times M} \sum_{i, j=1}^{N, M}\left(X_{0}(i, j)-X_{c}(i, j)\right)^{2}$       (2)

The objective of the PSNR parameter is to evaluate the induced error at the pixel-energy level: it relates the squared intensity differences between the distorted and reference images (MSE) to the maximum energy peak of the image. This parameter informs about the ratio of the peak image signal to the error noise.
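
A direct NumPy implementation of Eqs. (1) and (2) is given by the following minimal sketch, assuming 8-bit grayscale images (d = 255):

```python
import numpy as np

def mse(x0, xc):
    """Mean Squared Error between original x0 and compressed xc, Eq. (2)."""
    x0 = np.asarray(x0, dtype=np.float64)
    xc = np.asarray(xc, dtype=np.float64)
    return np.mean((x0 - xc) ** 2)

def psnr(x0, xc, d=255.0):
    """Peak Signal-to-Noise Ratio in dB, Eq. (1); d is the peak intensity."""
    err = mse(x0, xc)
    if err == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(d ** 2 / err)
```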

The other important parameter is the Multiscale Structural Similarity (MS-SSIM) proposed by Wang et al. [40]. This metric is an improved form of the Structural Similarity Index (SSIM) [41], which qualifies the original and compressed images by comparing brightness, contrast, and structure. It is derived from the principle that the information relevant to the human eye is extracted from the structure of the scene, and it is considered a perceptual HVS-based index. The MS-SSIM provides more flexibility than the single-scale SSIM approach and better performance in image quality assessment [38].

In the MS-SSIM, the SSIM-based operation is iteratively repeated over M scales, applying a low-pass filter and downsampling by a factor of 2 at each scale. This metric is expressed by the following formula [40]:

$M S-\operatorname{SSIM}\left(X_{0}, X_{c}\right)=\frac{1}{M} \sum_{k=1}^{M} \operatorname{SSIM}\left(X_{0}^{(k)}, X_{c}^{(k)}\right)$      (3)

The SSIM parameter, Eq. (4), is a single-scale similarity index calculated as a combined comparison of the brightness, contrast, and structure information between pairs of windows x and y of the compared images [41]:

$\operatorname{SSIM}(x, y)=l(x, y) \times c(x, y) \times s(x, y)$      (4)

With this metric, we calculate the similarity between the original and compressed images at the structural level using parameters related to the human vision model such as brightness, contrast, and structural composition. Several research papers also present and evaluate other proposed full-reference image quality assessment metrics such as VIF, FSIM, UQI, NQM, VSNR, GSM, IFC, and RFSIM [12, 19, 42].
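
A hedged sketch of the multi-scale evaluation of Eq. (3) is given below, assuming scikit-image and SciPy are available; note that the original MS-SSIM of Wang et al. combines luminance, contrast, and structure terms as a weighted product, whereas this sketch follows the simplified averaging of Eq. (3) using the single-scale SSIM of scikit-image:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.metrics import structural_similarity

def ms_ssim_mean(x0, xc, scales=5, data_range=255):
    """Simplified multi-scale SSIM following Eq. (3): the single-scale SSIM is
    averaged over several resolutions obtained by 2x2 low-pass filtering and
    downsampling by a factor of 2."""
    x0 = np.asarray(x0, dtype=np.float64)
    xc = np.asarray(xc, dtype=np.float64)
    values = []
    for _ in range(scales):
        values.append(structural_similarity(x0, xc, data_range=data_range))
        x0 = uniform_filter(x0, size=2)[::2, ::2]   # low-pass then downsample
        xc = uniform_filter(xc, size=2)[::2, ::2]
    return float(np.mean(values))
```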

To analyze the textural quality of an image, we must use one of the techniques for extracting the texture features present in that image. In the literature, there are many texture feature analysis techniques. In summary, texture features can be classified into five basic categories, as shown in Figure 1 [39, 43]:

(1) Statistical methods that calculate distinct texture features. The basic concept of these methods is to evaluate the spatial distribution of gray values by calculating local features at each point of the image and extracting a collection of statistics from the local feature distributions.

(2) Structural methods that describe the texture through well-defined primitives (micro-texture) with a structure of spatial relations between these primitives (macro-texture). The texture primitives are often viewed as regions with uniform gray levels, pixels, gray level peaks, lines, repetitions of edges in different orientations, etc.

(3) Model-based methods that attempt to represent the image texture using the stochastic and generative image model. These methods include, among others, fractal models, autoregressive models, random field models, epitomic model and complex lattice model.

(4) Transform-based methods that use signal processing analyses. Usually, these features are extracted from transformed images in the frequency domain. Most of the signal-processing-based characteristics are extracted by applying filter banks to the image and calculating the energy of the filter responses.

(5) Edge and Boundary based methods that analyze contours and edge descriptors in images based on geometric concepts and thus determine the equivalent textural properties. The dominant approach in the analysis of texture edges is to construct a description of the local neighborhood around each pixel.

In our study, we perform the textural analysis with the Gray Level Co-occurrence Matrix (GLCM) method. This method uses a statistical analysis based on selected angle and distance parameters [44]. It allows the extraction of statistical information from the image concerning the distribution of pairs of pixels [45]. The principle of statistical representation by co-occurrence matrices was proposed by Julesz in 1975 [46]. Subsequently, Haralick defined textural indicators or descriptors describing these matrices [47]. This approach has been greatly appreciated because of its ease of implementation and its performance, which makes it a reference approach [48-51].
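
The GLCM features used in this analysis can be computed with a short sketch like the one below, assuming scikit-image (version 0.19 or later for the graycomatrix/graycoprops names); entropy is not provided by graycoprops, so it is computed directly from the normalized co-occurrence matrix:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image, distance=1, angle=0.0, levels=256):
    """Contrast, correlation, energy, homogeneity and entropy from a normalized
    GLCM computed for one (distance, angle) pair."""
    image = np.asarray(image, dtype=np.uint8)
    glcm = graycomatrix(image, distances=[distance], angles=[angle],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # not provided by graycoprops
    return {
        'contrast':    graycoprops(glcm, 'contrast')[0, 0],
        'correlation': graycoprops(glcm, 'correlation')[0, 0],
        'energy':      graycoprops(glcm, 'energy')[0, 0],
        'homogeneity': graycoprops(glcm, 'homogeneity')[0, 0],
        'entropy':     entropy,
    }
```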

Figure 1. Classification of texture analysis techniques

A textural image quality measure based on this GLCM analysis was proposed in our previous work [52]. It allows us to calculate the textural image quality using the elementary GLCM texture features, namely Contrast, Correlation, Energy, Entropy, and Homogeneity, compared between the original and compressed images.

Each elementary feature quality $ITQ_{F}$, between the compressed image $X_c$ and the original image $X_0$, is calculated according to the following equation:

$I T Q_{F}=1-\frac{\left|F_{X_{0}}-F_{X_{C}}\right|}{F_{X_{0}}}$       (5)

where $F_{X_{0}}$ and $F_{X_{C}}$ are respectively the elementary textural features of the original and compressed images.

The final image texture quality ( ITQ) is obtained by the following weighted summation equation:

$I T Q=\sum_{i=1}^{5} \omega_{i} \times I T Q_{F(i)}$      (6)

where $\omega_{i}$ is the weight, with $\sum \omega_{i}=1$, and $ITQ_{F(i)}$ corresponds to the quality of the five elementary features (1-contrast, 2-correlation, 3-energy, 4-homogeneity, and 5-entropy).

Using this quality parameter helps us determine the textural degradation according to the GLCM texture features; more details are presented in the work [52].
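
A minimal sketch of Eqs. (5) and (6), reusing the glcm_features helper defined above, could look as follows; equal weights are assumed here purely for illustration, since only the constraint that the weights sum to 1 is imposed:

```python
def itq(original, compressed, weights=None):
    """Image Texture Quality, Eqs. (5)-(6): per-feature quality ITQ_F and its weighted sum."""
    feats = ['contrast', 'correlation', 'energy', 'homogeneity', 'entropy']
    if weights is None:
        weights = {f: 1.0 / len(feats) for f in feats}  # assumed equal weights
    f0 = glcm_features(original)      # helper from the previous sketch
    fc = glcm_features(compressed)
    itq_f = {f: 1.0 - abs(f0[f] - fc[f]) / f0[f] for f in feats}   # Eq. (5)
    return sum(weights[f] * itq_f[f] for f in feats), itq_f        # Eq. (6)
```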

In a complementary way, and to further assess the efficiency of the compression algorithms, we study the edge degradation in compressed images. Since images are composed of textured structures, discontinuities in pixel intensities form the borders of outline shapes. Edge detection is important because it represents the borders of the different textural structures and objects contained in images [53].

Technically, many edge detection algorithms have been proposed. Referring to some works [54-56], there are several ways to proceed with contour detection, but the most used are those based on the gradient or the Laplacian. Gradient methods are effective at detecting ramp edges where grayscale values change slowly. The Roberts operator makes it possible to calculate the two-dimensional gradient of an image quickly and easily. The Prewitt operator calculates the gradient of the image intensity at each point, giving the direction of the greatest possible increase in intensity and the rate of change in that direction. The classical Sobel algorithm has advantages such as a low computational cost and a high computational speed; thus, it has been widely applied in many fields. The Canny operator, which has several variants, is known as one of the best first-order gradient detection operators.

Using the Canny operator, the edges appear clearer and the difference between the edges and the image background is more evident. This operator uses adaptive thresholding with hysteresis, which eliminates streaking in edge contours. The threshold is evaluated according to the noise in the image with a noise estimation scheme [57]. Its basic concept rests on three steps: a filtering step to reduce image noise and eliminate isolated pixels, the application of a gradient to obtain the intensity of the contours, and finally the determination of the edge orientations.

When using this algorithm for edge detection, we obtain the binary edge images $E_{org}^{(th)}$ and $E_{comp}^{(th)}$ corresponding respectively to the original and compressed images at a specific threshold (th). These binary edge images take values in {0, 1}, where 0 denotes a non-edge point and 1 denotes an edge point.
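
Such binary edge maps can be obtained, for instance, with the Canny detector of scikit-image; the sigma and hysteresis thresholds below are illustrative defaults, not the optimal threshold scheme used later in this work:

```python
import numpy as np
from skimage.feature import canny

def edge_map(image, sigma=1.0, low=None, high=None):
    """Binary Canny edge map in {0, 1}: Gaussian smoothing, gradient computation,
    non-maximum suppression and hysteresis thresholding are handled by skimage."""
    edges = canny(np.asarray(image, dtype=np.float64),
                  sigma=sigma, low_threshold=low, high_threshold=high)
    return edges.astype(np.uint8)   # 1 = edge point, 0 = non-edge point
```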

To evaluate the edge image quality, we propose to calculate the common edges between $E_{org}^{(th)}$ and $E_{comp}^{(th)}$ at a selected threshold (th), Eq. (7):

$\Delta E^{(t h)}=E_{c o m p}^{(t h)} \cap E_{o r g}^{(t h)}$      (7)

To obtain our proposed edge image quality, we use the following normalized formula that calculates the sum of the common preserved edge points in the compressed image compared to the original image:

$E d g e I Q A^{(t h)}=\sum \Delta E^{(t h)} / \sum E_{o r g}^{(t h)}$      (8)

The efficiency of the edge detection depends essentially on the choice of the threshold parameter (th). In our work, we use the optimal threshold computed from the grayscale histogram of the image gradient, as proposed by Jiang et al. [58].
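
Given two binary edge maps at the same threshold, Eqs. (7) and (8) reduce to a few NumPy operations; the sketch below assumes the edge_map helper above stands in for the optimal-threshold detection of Jiang et al. [58]:

```python
import numpy as np

def edge_iqa(e_org, e_comp):
    """Edge quality, Eqs. (7)-(8): fraction of original edge points preserved
    in the compressed image (both inputs are binary edge images in {0, 1})."""
    e_org = np.asarray(e_org, dtype=bool)
    e_comp = np.asarray(e_comp, dtype=bool)
    common = np.logical_and(e_org, e_comp)      # Eq. (7): common edge points
    return common.sum() / max(e_org.sum(), 1)   # Eq. (8), guarding empty maps

# Example usage: edge_iqa(edge_map(original), edge_map(compressed))
```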

In summary, our goal is to evaluate the degradation in medical and biometric images caused by second-generation wavelet-based compression algorithms. For this, we use three evaluation approaches based on structural quality, textural quality, and edge quality, as shown in the flowchart in Figure 2.

Figure 2. The basic flowchart of our study scenario

4. Experimental Results

In this article, our experiments were designed to give a numerical evaluation of the compression effect on images. For this, we chose to use two kinds of images presenting a textural structure, medical images and biometric images, as shown in Figure 3:

Medical images: a grayscale head brain T2 axial slice acquired by magnetic resonance imaging (MRI) (Patient #1060), downloaded from https://wiki.idoimaging.com/index.php?title=Sample_Data and shown in Figure 3-a, and an MRI 3t_c spine image (File #04364899) acquired by a Magnetom Symphony TIM Siemens scanner, downloaded from http://www.siemens-healthineers.com, Figure 3-b.

Biometric images: a grayscale iris image downloaded from the IIT Delhi Iris database (http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Iris.htmi), Figure 3-c, and a fingerprint image from the FVC2002 DB1_B fingerprint database collected for the International Competition for Fingerprint Verification Algorithms (http://www.cbsr.ia.ac.cn/english/Palmprint%20Databases.asp), Figure 3-d.

Figure 3. The original images used in our paper

With these concepts, our experiment compares the textural efficiency of five wavelet algorithms based on the lifting scheme of the CDF 9/7 wavelet adopted by the JPEG2000 standard, coupled with the Set Partitioning in Hierarchical Trees encoder (SPIHT) or the Zigzag Set Partitioning in Hierarchical Trees encoder (SPIHT-Z), and compared to the classical Discrete Wavelet decomposition (DWT):

•Discrete wavelet transform with SPIHT (DWT-SPIHT) algorithm from Beladgham et al. [59].

•Lifting wavelet transform with SPIHT (LWT-SPIHT) algorithm from Bouida et al. [60].

•Lifting wavelet packet transform with SPIHT (PWT-SPIHT) algorithm from Benyahia et al. [9].

•Lifting Quincunx wavelet transform with SPIHT (QWT-SPIHT) algorithm from Beladgham et al. [11].

•Lifting Quincunx wavelet transform with SPIHT-Z encoder (QWT-SPIHTZ) algorithm from Bouida et al. [52] and Benyahia et al. [61].

A first analysis of the textural structure of the original images selected for our calculations, using the GLCM parameters, allows us to observe their textural levels; see Table 1. In this table, we present the elementary GLCM texture features, namely Contrast, Correlation, Energy, Homogeneity, and Entropy, calculated from the original images. When observing the variance of the intensity between a pixel and its neighbors over the entire image, we notice a high texture depth only in the fingerprint image, whereas in the other images the texture depth remains low and nearly constant. The correlation feature has a high value in all images, indicating a strong correlation of the texture in the horizontal or vertical direction. The energy values reflect the regularity and uniformity across all images, within a similar range, being slightly higher in the fingerprint image.

Table 1. The textural features of original images

 

              Brain   Spine   Finger  Iris
Contrast      0.08    0.18    0.81    0.20
Correlation   0.97    0.97    0.82    0.97
Energy        0.31    0.31    0.49    0.15
Homogeneity   0.96    0.93    0.86    0.92
Entropy       1.54    1.87    1.66    2.28

In the case of homogeneity, all images present a high closeness of the distribution of the co-occurrence elements to the GLCM diagonal. Finally, the entropy property reflects the image information and the complexity of the texture distribution, with the highest value for the iris image.

Figures 4 to 7 show the structural quality assessment, using PSNR [13], MS-SSIM [40], VIF [62], and FSIM [63], of the images compressed by the selected wavelet-based algorithms at bitrates between 0.125 and 2.0 bpp. With the PSNR in Figure 4, we can observe that, for the medical images and at low bitrates (<0.5 bpp), compression with QWT-SPIHTZ is slightly better than the other techniques and almost equivalent in quality to the PWT-SPIHT and QWT-SPIHT algorithms. These three techniques outperform the DWT-SPIHT and LWT-SPIHT ones. In terms of structural similarity, we make the same observation, except that beyond 1.0 bpp almost all the algorithms reach the same quality. Generally, the pixel-energy error shown by the PSNR is acceptable in the Brain, Spine, and Fingerprint images from 0.5 bpp, while in the Iris image it starts to be good from 1.0 bpp.

When analyzing the structural similarity in Figure 5, we notice that the MS-SSIM is acceptable for all images at bitrates >0.25 bpp, with better quality for the QWT-SPIHTZ, QWT-SPIHT, and PWT-SPIHT compressions.

When observing the VIF quality factor in Figure 6, we clearly see, in accordance with natural scene statistics analysis [62], the conservation of fidelity in the (medical and biometric) images compressed by QWT-SPIHTZ, QWT-SPIHT, and PWT-SPIHT, especially at very low bitrates (<1.0 bpp).

The same observation also holds in Figure 7 with the Feature Similarity Index (FSIM) for the images compressed by the QWT-SPIHTZ, QWT-SPIHT, and PWT-SPIHT algorithms, especially at bitrates ≤0.75 bpp, with a slight decrease for the finger image.

When considering the texture analysis, we observe, as shown in Figures 8 and 9, that the texture quality is preserved when the compression bitrate is >0.5 bpp with all compression algorithms, for both medical and biometric images, especially with the QWT-SPIHTZ, QWT-SPIHT, and PWT-SPIHT compression algorithms.

Figure 4. The PSNR quality of compressed images

Figure 5. The MS-SSIM quality of compressed images

Figure 6. The VIF quality of compressed images

Figure 7. The FSIM quality of compressed images

Before analyzing the edge quality of the compressed images, we can observe the degradation of the edges in these images at some compression bitrates and compare them to the original images. The Canny edges determined for only two samples are used in this comparison (the medical brain image and the biometric iris image), as shown in Table 2 and Figure 10.

To understand and confirm the edge degradation under compression, we use two factors for the edge quality evaluation: the edge quality factor (Fedge) [64] and the Edge Preservation Index (EPI) [65]. With these factors, Table 2 summarizes the values obtained during the edge analysis of the Brain and Iris images.

From this result, we can observe the good preservation of the edge structure in the compressed images, especially with PWT-SPIHT, QWT-SPIHT, and QWT-SPIHTZ. This gives a good appreciation of the quality of contour preservation in these kinds of images and indicates that very useful texture information will be retained during their use.

This remark is clearly observed in Figure 10. The presence of deformation in the contours of the compressed images, for the different algorithms and compression bitrates, is easily noticeable. These degradations are noticeable at low bitrates, especially for compression using DWT and LWT for all images, whereas with the quincunx wavelet these degradations are slight.

In this case, it can be stated that this geometric degradation can adversely affect the quality when exploiting these images during a medical analysis or biometric authentication. This is due to the distortion of the textural structure of the image, which can deteriorate its informational content.

Finally, as shown in Figures 11 and 12, we reach the same conclusion for the edge quality. We can clearly see the preservation of edges when using the PWT and QWT transforms, with a slight difference in the case of the Finger image. Acceptable edge quality values are observed from 0.5 bpp only with the QWT-SPIHTZ, QWT-SPIHT, and PWT-SPIHT compression algorithms. With DWT-SPIHT and LWT-SPIHT, the edge quality is very low for bitrates <1.5 bpp. This is probably caused by the separable dyadic decomposition used in these wavelets.

After this investigation, we conclude that only the QWT-SPIHTZ, QWT-SPIHT, and PWT-SPIHT compression algorithms offer good preservation of the structural, textural, and edge qualities in the chosen medical and biometric images.

Figure 8. The texture quality of compressed medical images

Figure 9. The texture quality of compressed biometric images

Figure 10. The edge degradation in compressed brain and iris image

Table 2. The edge quality factors of the Brain and Iris images under compression

Brain Image      0.125 bpp        0.25 bpp         0.50 bpp         1.0 bpp
                 Fedge    EPI     Fedge    EPI     Fedge    EPI     Fedge    EPI
DWT-SPIHT        0.020    0.234   0.047    0.251   0.060    0.483   0.073    0.857
LWT-SPIHT        0.019    0.211   0.050    0.279   0.060    0.451   0.073    0.742
PWT-SPIHT        0.060    0.348   0.065    0.632   0.072    0.786   0.076    0.852
QWT-SPIHT        0.067    0.347   0.066    0.561   0.077    0.781   0.076    0.851
QWT-SPIHTZ       0.072    0.642   0.074    0.745   0.076    0.809   0.077    0.854

Iris Image       0.125 bpp        0.25 bpp         0.50 bpp         1.0 bpp
                 Fedge    EPI     Fedge    EPI     Fedge    EPI     Fedge    EPI
DWT-SPIHT        0.039    0.290   0.049    0.472   0.086    0.707   0.101    0.857
LWT-SPIHT        0.020    0.237   0.44     0.388   0.076    0.586   0.096    0.821
PWT-SPIHT        0.068    0.531   0.079    0.749   0.096    0.835   0.103    0.874
QWT-SPIHT        0.066    0.578   0.088    0.699   0.104    0.818   0.105    0.866
QWT-SPIHTZ       0.093    0.714   0.099    0.790   0.103    0.841   0.105    0.872

Figure 11. The edge quality of compressed medical images

Figure 12. The edge quality of compressed biometric images

5. Conclusions

In our investigation, we presented a new way of analyzing degradation in compressed images, especially medical and biometric images. This analysis associates a classical structural quality assessment with a textural feature degradation quality and an edge-oriented quality. For the textural quality assessment, we focused our calculation only on the most important texture features determined by the GLCM method, namely Contrast, Correlation, Energy, Homogeneity, and Entropy. For the edge quality, we used Canny edge detection on the compressed images to determine the rate of retained edges compared to those of the original image.

The numerical experiments study some second-generation wavelet-based compression algorithms coupled with SPIHT and SPIHT-Z coding and applied to medical and biometric images. The results confirm that the quincunx-based algorithm associated with SPIHT-Z (QWT-SPIHTZ), like QWT-SPIHT and PWT-SPIHT, gives better results than the other algorithms used (LWT-SPIHT and DWT-SPIHT). It is worth mentioning that, even at very low bitrates, these algorithms provide a very good textural quality in medical and biometric images.

As a future perspective, we suggest developing a multi-threshold edge quality to allow a deeper evaluation of the textural structure, and considering a multi-resolution analysis of texture quality using wavelet transforms.

Acknowledgment

This work is partly supported by the Algerian Ministry of Higher Education and Research (PRFU) project N° A25N01UN080120180002. We would like to thank Dr. Fatiha Mouili for her careful reading of this manuscript and her valuable English writing suggestions.

  References

[1] Salomon, D. (2007). Data Compression: The Complete Reference (4th ed.). London: Springer.

[2] Hussain, A.J., Al-Fayadh, A., Radi, N. (2018). Image compression techniques: A survey in lossless and lossy algorithms. Neurocomputing, 300: 44-69. https://doi.org/10.1016/j.neucom.2018.02.094

[3] Chinnam, S., Sistla, V., Kolli, V. (2019). SVM-PUK kernel based MRI-brain tumor Identification using texture and Gabor wavelets. Traitement du Signal, 36(2): 185-191. https://doi.org/10.18280/ts.360209

[4] Fekri-Ershad, S. (2019). Gender classification in human face images for smart phone applications based on local texture information and evaluated Kullback-Leibler divergence. Traitement du Signal, 36(6): 507-514. https://doi.org/10.18280/ts.360605

[5] Setyaningsih, E., Harjoko, A. (2017). Survey of hybrid image compression techniques. International Journal of Electrical and Computer Engineering (IJECE), 7(4): 2206. https://doi.org/10.11591/ijece.v7i4.pp2206-2214

[6] Uthayakumar, J., Vengattaraman, T., Dhavachelvan, P. (2018). A survey on data compression techniques: From the perspective of data quality, coding schemes, data type and applications. Journal of King Saud University - Computer and Information Sciences, S1319157818301101. https://doi.org/10.1016/j.jksuci.2018.05.006

[7] Kou, W. (1995). Digital image compression: algorithms and standards (Vol. 333). Springer Science & Business Media. https://doi.org/10.1007/978-1-4757-2361-8

[8] Gao, R.X., Yan, R. (2011). Wavelet packet transform. Wavelets, 69-81. https://doi.org/10.1007/978-1-4419-1545-0_5

[9] Benyahia, I., Beladgham, M., Bassou, A. (2018). Evaluation of the medical image compression using wavelet packet transform and SPIHT coding. International Journal of Electrical and Computer Engineering (IJECE), 8(4): 2139-2147. https://doi.org/10.11591/ijece.v8i4.pp2139-2147

[10] Chen, Y., Adams, M.D., Lu, W.S. (2006). Design of optimal quincunx filter banks for image coding. EURASIP Journal on Advances in Signal Processing, 2007(1): 083858. https://doi.org/10.1155/2007/83858

[11] Beladgham, M., Bessaid, A., Taleb-Ahmed, A., Boucli Hacene, I. (2012). Medical image compression using quincunx wavelets and SPIHT coding. Journal of Electrical Engineering and Technology, 7(2): 264-272. https://doi.org/10.5370/JEET.2012.7.2.264

[12] Phadikar, B.S., Maity, G.K., Phadikar, A. (2018). Full reference image quality assessment: A survey. Industry Interactive Innovations in Science, Engineering and Technology, 11: 197-208. https://doi.org/10.1007/978-981-10-3953-9_19

[13] Wang, Z., Bovik, A.C. (2006). Modern Image Quality Assessment. San Rafael, Calif.: Morgan & Claypool Publishers.

[14] Rosen, A.F.G., Roalf, D.R., Ruparel, K., et al. (2018). Quantitative assessment of structural image quality. NeuroImage, 169: 407-418. https://doi.org/10.1016/j.neuroimage.2017.12.059

[15] Wagner, M., Lin, H., Li, S., Saupe, D. (2019). Algorithm Selection for Image Quality Assessment. arXiv:1908.06911 [cs]. 

[16] Ciocca, G., Corchs, S., Gasparini, F. (2015). Complexity perception of texture images. New Trends in Image Analysis and Processing - ICIAP 2015 Workshops, Genoa, Italy, pp. 119-126. https://doi.org/10.1007/978-3-319-23222-5_15

[17] Sadykova, D., James, A.P. (2017). Quality assessment metrics for edge detection and edge-aware filtering: A tutorial review. 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, pp. 2366-2369. https://doi.org/10.1109/ICACCI.2017.8126200

[18] Sakuldee, R., Udomhunsakul, S. (2007). Objective performance of compressed image quality assessments. International Journal of Computer and Information Engineering, 1(11): 3423-3432. https://doi.org/10.5281/zenodo.1074731

[19] Khosravy, M., Patel, N., Gupta, N., Sethi, I.K. (2019). Image quality assessment: a review to full reference indexes. Recent Trends in Communication, Computing, and Electronics, 524: 279-288. https://doi.org/10.1007/978-981-13-2685-1_27

[20] Vora, V.S., Suthar, A.C., Makwana, Y.N., Davda, S.J. (2010). Analysis of compressed image quality assessments. International Journal of Advanced Engineering and Application, 225-229.

[21] Hu, S., Jin, L., Wang, H., Zhang, Y., Kwong, S., Kuo, C.C.J. (2015). Compressed image quality metric based on perceptually weighted distortion. IEEE Transactions on Image Processing, 24(12): 5594-5608. https://doi.org/10.1109/TIP.2015.2481319

[22] Gu, K., Wang, S., Zhai, G., Ma, S., Yang, X., Zhang, W. (2016). Content-weighted mean-squared error for quality assessment of compressed images. Signal, Image and Video Processing, 10(5): 803-810. https://doi.org/10.1007/s11760-015-0818-9

[23] Zhang, X., Lin, W., Wang, S., Liu, J., Ma, S., Gao, W. (2019). Fine-grained quality assessment for compressed images. IEEE Transactions on Image Processing, 28(3): 1163-1175. https://doi.org/10.1109/TIP.2018.2874283

[24] Krivenko, S., Zriakhov, M., Kussul, N., Lukin, V. (2019). Prediction of visual quality for lossy compressed images. 2019 IEEE 15th International Conference on the Experience of Designing and Application of CAD Systems (CADSM), Polyana, Ukraine, pp. 1-5. https://doi.org/10.1109/CADSM.2019.8779266

[25] Kocsis, O., Costaridou, L., Mandellos, G., Lymberopoulos, D., Panayiotakis, G. (2003). Compression assessment based on medical image quality concepts using computer-generated test images. Computer Methods and Programs in Biomedicine, 71(2): 105-115. https://doi.org/10.1016/S0169-2607(02)00090-1

[26] Perez-Diaz, M. (2014). Techniques to evaluate the quality of medical images. AIP Conference Proceedings, 1626(1): 39-45. https://doi.org/10.1063/1.4901358

[27] Chow, L.S., Paramesran, R. (2016). Review of medical image quality assessment. Biomedical Signal Processing and Control, 27: 145-154. https://doi.org/10.1016/j.bspc.2016.02.006

[28] Kumar, B., Kumar, S.B., Kumar, C. (2013). Development of improved SSIM quality index for compressed medical images. 2013 IEEE Second International Conference on Image Information Processing (ICIIP-2013), Shimla, pp. 251-255. https://doi.org/10.1109/ICIIP.2013.6707593

[29] Gaudeau, Y., Lambert, J., Labonne, N., Moureaux, J. M. (2014). Compressed image quality assessment: Application to an interactive upper limb radiology atlas. 2014 IEEE International Conference on Image Processing (ICIP), Paris, pp. 501-505. https://doi.org/10.1109/ICIP.2014.7025100

[30] Liu, H., Wang, Z. (2019). Perceptual quality assessment of medical images. Encyclopedia of Biomedical Engineering, 588-596. https://doi.org/10.1016/B978-0-12-801238-3.64099-0

[31] Grother, P., Tabassi, E. (2007). Performance of Biometric Quality Measures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4): 531-543. https://doi.org/10.1109/TPAMI.2007.1019

[32] Jain, A.K., Flynn, P., Ross, A.A. (2007). Handbook of Biometrics. Berlin, Heidelberg: Springer-Verlag.

[33] Alonso-Fernandez, F., Fierrez, J., Ortega-Garcia, J. (2012). Quality measures in biometric systems. IEEE Security Privacy, 10(6): 52-62. https://doi.org/10.1109/MSP.2011.178

[34] Bharadwaj, S., Vatsa, M., Singh, R. (2014). Biometric quality: a review of fingerprint, iris, and face. EURASIP Journal on Image and Video Processing, 2014(1): 34. https://doi.org/10.1186/1687-5281-2014-34

[35] Funk, W., Arnold, M., Busch, C., Munde, A. (2005). Evaluation of image compression algorithms for fingerprint and face recognition systems. Proceedings from the Sixth Annual IEEE SMC Information Assurance Workshop, West Point, NY, USA, pp. 72-78. https://doi.org/10.1109/IAW.2005.1495936

[36] Pathak, P. (2010). Image compression algorithms for fingerprint system. International Journal of Computer Science Issues, 7(3): 45-50.

[37] Thanki, R.M., Borisagar, K.R. (2017). Compressive Sensing for biometric system. Intelligent Analysis of Multimedia Information. Chapter, IGI Global. https://doi.org/10.4018/978-1-5225-0498-6.ch014

[38] Thanki, R.M., Dwivedi, V., Borisagar, K.R. (2018). Hybrid compression method using compressive sensing (CS) theory for various biometric data and biomedical data. Proceedings of the International Conference on Intelligent Systems and Signal Processing, pp. 1-13. https://doi.org/10.1007/978-981-10-6977-2_1

[39] Bharati, M.H., Liu, J.J., MacGregor, J.F. (2004). Image texture analysis: methods and comparisons. Chemometrics and Intelligent Laboratory Systems, 72(1): 57-71. https://doi.org/10.1016/j.chemolab.2004.02.005

[40] Wang, Z., Simoncelli, E.P., Bovik, A.C. (2003). Multiscale structural similarity for image quality assessment. The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, Pacific Grove, CA, USA, pp. 1398-1402. https://doi.org/10.1109/ACSSC.2003.1292216

[41] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4): 600-612. https://doi.org/10.1109/TIP.2003.819861

[42] Zhang, L., Zhang, L., Mou, X., Zhang, D. (2012). A comprehensive evaluation of full reference image quality assessment algorithms. 2012 19th IEEE International Conference on Image Processing, Orlando, FL, pp. 1477-1480. https://doi.org/10.1109/ICIP.2012.6467150

[43] Chaki, J., Dey, N. (2020). Texture Feature Extraction Techniques for Image Recognition. Springer Singapore.

[44] Haralick, R.M., Shanmugam, K., Dinstein, I. (1973). Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC, 3(6): 610-621. https://doi.org/10.1109/TSMC.1973.4309314

[45] Benco, M., Hudec, R., Kamencay, P., Zachariasova, M., Matuska, S. (2014). An Advanced Approach to Extraction of colour texture features based on GLCM. International Journal of Advanced Robotic Systems, 11(7): 104. https://doi.org/10.5772/58692

[46] Julesz, B. (1975). Experiments in the visual perception of texture. Scientific American, 232(4): 34-43.

[47] Haralick, R.M. (1979). Statistical and structural approaches to texture. Proceedings of the IEEE, 67(5): 786-804. https://doi.org/10.1109/PROC.1979.11328

[48] Baraldi, A., Panniggiani, F. (1995). An investigation of the textural characteristics associated with gray level cooccurrence matrix statistical parameters. IEEE Transactions on Geoscience and Remote Sensing, 33(2): 293-304. https://doi.org/10.1109/TGRS.1995.8746010

[49] Mohanaiah, P., Sathyanarayana, P., GuruKumar, L. (2013). Image texture feature extraction using GLCM approach. International Journal of Scientific and Research Publications (IJSRP), 3(5): 1-5.

[50] Verma, M., Raman, B., Murala, S. (2015). Local extrema co-occurrence pattern for color and texture image retrieval. Neurocomputing, 165: 255-269. https://doi.org/10.1016/j.neucom.2015.03.015

[51] Hall-Beyer, M. (2017). GLCM Texture: A Tutorial v. 3.0 March 2017. University of Calgary. 

[52] Bouida, A., Beladgham, M., Bassou, A., Benyahia, I. (2020). Quality and texture analysis of biometric images compressed with second-generation wavelet transforms and SPIHT-Z encoder. Indonesian Journal of Electrical Engineering and Computer Science, 19(3): 1325-1339. https://doi.org/10.11591/ijeecs.v19.i3.pp1325-1339

[53] Marr, D., Hildreth, E., Brenner, S. (1980). Theory of edge detection. Proceedings of the Royal Society of London. Series B. Biological Sciences, 207(1167): 187-217. https://doi.org/10.1098/rspb.1980.0020 

[54] Senthilkumaran, N., Rajesh, R. (2009). Image segmentation - a survey of soft computing approaches. 2009 International Conference on Advances in Recent Technologies in Communication and Computing, Kottayam, Kerala, pp. 844-846. https://doi.org/10.1109/ARTCom.2009.219

[55] Chouhan, S.S., Kaul, A., Singh, U.P. (2018). Soft computing approaches for image segmentation: a survey. Multimedia Tools and Applications, 77(21): 28483-28537. https://doi.org/10.1007/s11042-018-6005-6

[56] Ruslau, M.F.V., Pratama, R.A., Meirista, E. (2020). Edge detection of digital image with different edge types. Journal of Physics: Conference Series, 1569: 042069. https://doi.org/10.1088/1742-6596/1569/4/042069

[57] Sulistyo, W.Y., Riadi, I., Yudhana, A. (2020). Comparative Analysis of Image Quality Values on Edge Detection Methods. Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi), 4(2): 345-351. https://doi.org/10.29207/resti.v4i2.1827

[58] Jiang, W., Li, F., Xu, D. (2019). An improved canny operator in the application of medical image segmentation. Proceedings of the 3rd International Conference on Computer Science and Application Engineering, Sanya, China, pp. 1-5. https://doi.org/10.1145/3331453.3360978

[59] Beladgham, M., Bensaid, A., Moulay-Lakhdar, A., Benaissa, M., Bassou, A. (2010). MRI image compression using biorthogonal CDF wavelet based on lifting scheme and SPIHT coding. Quatrième Conférence Internationale sur le Génie Electrique CIGE’10, 03-04 Novembre 2010, Université de Bechar, Algérie, pp. 225-232. 

[60] Bouida, A., Benyahia, I., Beladgham, M., Merit, K. (2018). The improving of biometric image quality compression using Wavelet and SPIHT coding. 2018 International Conference on Signal, Image, Vision and their Applications (SIVA), Guelma, Algeria, pp. 1-6. https://doi.org/10.1109/SIVA.2018.8661029

[61] Benyahia, I., Bassou, A., Allaoui, C.E.H., Beladgham, M. (2019). Modified spiht algorithm for quincunx wavelet image coding. Indonesian Journal of Electrical Engineering and Computer Science, 16(1): 230-242. https://doi.org/10.11591/ijeecs.v16.i1.pp230-242

[62] Sheikh, H.R., Bovik, A.C. (2006). Image information and visual quality. IEEE Transactions on Image Processing, 15(2): 430-444. https://doi.org/10.1109/TIP.2005.859378

[63] Zhang, L., Zhang, L., Mou, X., Zhang, D. (2011). FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 20(8): 2378-2386. https://doi.org/10.1109/TIP.2011.2109730

[64] Rajab, M. (2016). Performance evaluation of image edge detection techniques. International Journal of Computer Science and Security (IJCSS), 10(5): 170-185.

[65] Joseph, J., Jayaraman, S., Periyasamy, R., Renuka, S. (2017). An edge preservation index for evaluating nonlinear spatial restoration in MR images. Current Medical Imaging Reviews, 13(1): 58-65. https://doi.org/10.2174/1573405612666160609131149