Applied Mathematical Sciences, Vol. 4, 2010, no. 49, 2417-2429

Wavelet Image Coding Measurement Based on the Human Visual System

Mohammed Nahid
LIMIARF Laboratory, Faculty of Sciences, Med V University, Rabat, Morocco
mohammed.nahid@gmail.com

Abderrahim Bajit
LIMIARF Laboratory, Faculty of Sciences, Med V University, Rabat, Morocco
sinser2000@yahoo.com

Ahmed Tamtaoui
National Institute of Communication INPT, Rabat, Morocco
tamtaoui@inpt.ac.ma

Elhoussine Bouyakhf
LIMIARF Laboratory, Faculty of Sciences, Med V University, Rabat, Morocco
bouyakhf@fsr.ac.ma

Abstract

A reliable quality measure is a much-needed tool for determining the type and amount of distortion introduced by embedded zerotree wavelet image coding based on perceptual techniques. Attempts to improve quality measurement include the incorporation of simple models of the human visual system (HVS) and multi-dimensional tool design; it is therefore essential to adopt perceptually based metrics that objectively assess visual quality. This paper proposes an objective quality metric built on several psychovisual models. It yields an objective factor, called the Probability Score (PS), that correlates well with perceived visual error and demonstrates very good performance. We thereby avoid the traditional subjective criterion, the mean opinion score (MOS), which involves human observers and is inconvenient, time-consuming, and influenced by environmental conditions. The widely used pixelwise measure, the peak signal-to-noise ratio (PSNR), cannot capture artifacts such as blurriness or blockiness. The resulting model, called the Modified Wavelet Visible Difference Predictor (MWVDP), is based on the traditional psychometric function and on Daly's visible difference predictor (VDP), and integrates various psychovisual models.

Keywords: DWT, image compression, image quality, visual perception, luminance masking, contrast masking, objective quality measure PS

1 Introduction

Recent years have seen increasing effort devoted to embedded wavelet coding algorithms, which not only provide very good compression performance but also have the property that the bitstream can be truncated at any point and still be decoded into a reasonably good quality image. Embedded zerotree wavelet coding (EZW), introduced by J. M. Shapiro [11], and the algorithm of A. Said and W. A. Pearlman [5] based on set partitioning in hierarchical trees (SPIHT) have proven to be very effective image compression methods when judged by the PSNR measure. Minimizing such distortion measures, however, does not necessarily guarantee the preservation of good perceptual quality in the reconstructed images and may produce visually annoying artifacts despite a good PSNR. This is especially true in low bit rate image coding applications, where the goal is to remove as much perceptual redundancy as possible while introducing minimal perceptual distortion.

In digital image processing, great success has recently been obtained by a class of perceptually based embedded wavelet image coding algorithms, such as SPIHT and EBCOT [7]. Some schemes do not incorporate the optimal quantization model proposed by Watson. This model is based on psychovisual experiments with the 9/7 wavelets [6] and determines an optimal quantization matrix that yields perceptually lossless compression quality. Other schemes do not consider important HVS features such as luminance and contrast masking [1], [10], [12], [15] and the contrast sensitivity function CSF [9], whose particular role is to filter out all spatial frequencies that are imperceptible to the human visual cortex. Exploiting this fact, we can adapt the image contrast (contrast masking), remove a considerable number of invisible frequencies, and still quantize efficiently while perceptually improving the quality of the reconstructed image. In addition, these techniques exhibit very desirable characteristics, including very fast execution and perceptually optimized embedded bitstream transmission. Although SPIHT-based image coders are effective at minimizing MSE, they are not explicitly designed to minimize perceptual distortions matched to the capacities of the HVS.

The proposed MWVDP model is applied to an embedded zerotree wavelet image coder based on perceptual tools, called POEZIC [3], which incorporates various masking effects of human visual perception. Prior to encoding, it weights the wavelet coefficients according to their perceptual importance and reaches the targeted bit rate with improved perceptual quality compared to the SPIHT algorithm. The visual models adopted in our algorithm are the wavelet JND thresholds, luminance masking and contrast masking (section 2), the contrast sensitivity function CSF (section 3), and the quantization error sensitivity function (section 4). The objective quality metric is presented in section 5 and, finally, section 6 gives an extensive discussion of the MWVDP results.

2 Wavelet Perceptual Masking Model SetUp

In this work, three visual phenomena are modeled to compute the perceptual weighting model SetUp matrix: the JND thresholds [2], luminance masking [15] (also known as light adaptation), contrast masking [15], and the contrast sensitivity function CSF. This model correlates well with the cortical decomposition. The JND thresholds are computed from the base detection threshold of each subband. The mathematical model for the JND threshold is obtained from the psychophysical experiments adopted by Watson for the 9/7 biorthogonal wavelet basis [6], [2].

In image coding, the detection thresholds depend on the mean luminance of the local image region; therefore, a luminance masking correction factor must be derived and applied to the contrast sensitivity profile to account for this variation. In this work, the luminance masking adjustment is approximated by a power function [15]; we adopt the model used in JPEG2000 with an exponent of 0.649 [15]. Another factor that affects the detection threshold is contrast masking, also known as threshold elevation, which accounts for the fact that the visibility of one image component (the target) changes with the presence of other image components (the masker) [1], [10], [12], [15]. Contrast masking measures the variation of the detection threshold of a target signal as a function of the contrast of the masker. The resulting masking sensitivity profiles are referred to as target threshold versus masker contrast functions. In our case, the masker signal is represented by the wavelet coefficients of the input image to be coded, while the target signal is represented by the quantization distortion.

The final perceptual model is shown in figure 1, where the computation of the wavelet JND thresholds, luminance masking, contrast masking and CSF is illustrated on the Lena wavelet coefficients for a given observation distance of V = 4.

Figure 1: Wavelet perceptual masking algorithm SetUp (top), contrast masking (bottom-left), and luminance and contrast masking (bottom-right).

The wavelet JND threshold t_JND(λ, θ, i, j) for each wavelet coefficient v(λ, θ, i, j) at location (i, j) within the subband of decomposition level λ and orientation θ is computed as in equation (1):

    t_JND(λ, θ, i, j) = a_l(λ, θ, i, j) · a_c(λ, θ, i, j) · CSF(λ, θ, i, j)                              (1)

where

    a_l(λ, θ, i, j) = ( v(λ_max, LL, i', j') / v_mean )^(a_T)  : the luminance masking adjustment;

    a_c(λ, θ, i, j) = max{ 1, ( |v(λ, θ, i, j)| / ( JND(λ, θ) · a_l(λ, θ, i, j) ) )^ε }  : the contrast masking adjustment;

    v(λ_max, LL, i', j')  : the DWT coefficient, in the LL subband, that spatially corresponds to location (λ, θ, i, j);

    λ_max = 5, v_mean = 128, ε = 0.6, a_T = 0.649;  i' = i / 2^(λ_max − λ), j' = j / 2^(λ_max − λ).
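To make the SetUp weighting concrete, the following Python sketch evaluates equation (1) for one subband. It is a minimal illustration under stated assumptions rather than the authors' implementation: the base threshold JND(λ, θ) is assumed given per subband (in practice derived from Table 1 in section 4), the CSF term is assumed folded into that base threshold, the subband arrays are assumed dyadic, and the function name jnd_thresholds is ours.

```python
import numpy as np

# Model constants from section 2 (JPEG2000 / Watson values).
LMAX, V_MEAN, EPS, A_T = 5, 128.0, 0.6, 0.649

def jnd_thresholds(coeffs, ll_band, base_jnd, level):
    """Per-coefficient thresholds t_JND = a_l * a_c * base threshold, eq. (1).

    coeffs   : DWT coefficients v(level, orient, i, j) of one subband
    ll_band  : LL subband coefficients at level LMAX
    base_jnd : scalar base detection threshold JND(level, orient)
    """
    # Luminance masking a_l: adjust by the local mean luminance taken from
    # the spatially corresponding LL coefficient (power-function model).
    i = np.arange(coeffs.shape[0]) // 2 ** (LMAX - level)
    j = np.arange(coeffs.shape[1]) // 2 ** (LMAX - level)
    a_l = (np.abs(ll_band[np.ix_(i, j)]) / V_MEAN) ** A_T

    # Contrast masking a_c (threshold elevation), never below 1.
    a_c = np.maximum(1.0, (np.abs(coeffs) / (base_jnd * a_l)) ** EPS)

    return a_l * a_c * base_jnd
```

A quantization error on a coefficient is then treated as perceptible only when it exceeds the returned threshold.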

3 CSF Weighting Implementation Approaches

To optimize the weighting of the wavelet coefficients and improve the visual quality of the reconstructed image, we take advantage of the contrast sensitivity function CSF [13]. The CSF describes in quantitative terms how well the human visual system HVS perceives a signal at a given spatial frequency. It relates contrast perception to spatial frequency, usually measured in cycles per optical degree, which gives the CSF a shape that is independent of the viewing distance, as shown in figure 2.

Figure 2: CSF function models: Mannos (left), Daly (right).

Common to all compression techniques is the fact that they focus on improved coding efficiency, which is not necessarily equivalent to improved visual quality. The CSF transforms the wavelet-decomposed image into a perceptually relevant representation by removing all frequencies that are invisible to the human visual cortex. The viewing conditions are assumed to be fixed. This may not be realistic, as an observer can look at the image from any distance; nevertheless, fixing the resolution r and the viewing distance v is necessary to apply a frequency weighting. It has been shown that, with a slight modification of the CSF shape and the assumption of worst-case viewing conditions, a CSF weighting that works properly for all viewing distances and typical display resolutions is obtained with the Daly model. In compression applications, the CSF can be exploited to modify the wavelet coefficients before and after quantization, which directly shapes the spectrum of the quantization noise. This strategy is opposed to the direct algorithm that classically codes the detectable frequencies plus some redundant ones, which requires additional bits.

Conventional CSF implementations in wavelet-based codecs are based on a single invariant weighting factor per subband [13]. The first approach, called invariant single factor (ISF) weighting, assigns a single frequency weighting factor to each wavelet subband; it is simple and still provides efficient perceptual weighting (a sketch is given below). The second approach is a DWT-domain filtering that matches the shape of the CSF exactly; it keeps the possibility of an orientation-dependent weighting inside the subband and adapts to the local signal properties. The third approach is a mixed strategy that combines the fixed and filtering algorithms: the former is suitable for the low frequencies, the latter for the higher frequencies.
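As a small illustration of the ISF approach just described, the sketch below multiplies each detail subband by one CSF weight before encoding and divides it out at the decoder. The weight values and the isf_weight helper are hypothetical placeholders; real weights would be sampled from a CSF model such as Daly's for the assumed viewing conditions.

```python
import numpy as np

# Hypothetical per-subband CSF weights, keyed by (level, orientation).
# Real values would be read from a CSF model (e.g. Daly) at each subband's
# center frequency for the assumed viewing distance and display resolution.
CSF_WEIGHTS = {
    (1, "H"): 0.40, (1, "V"): 0.40, (1, "D"): 0.25,
    (2, "H"): 0.75, (2, "V"): 0.75, (2, "D"): 0.55,
    (3, "H"): 1.00, (3, "V"): 1.00, (3, "D"): 0.90,
}

def isf_weight(subbands, inverse=False):
    """Multiply (or divide, if inverse) each subband by its single CSF factor."""
    out = {}
    for key, band in subbands.items():
        w = CSF_WEIGHTS.get(key, 1.0)            # LL and unknown bands unweighted
        out[key] = band / w if inverse else band * w
    return out
```

The filtering and mixed approaches differ only in replacing this single factor by a DWT-domain filter for the finer subbands.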

4 Quantization Error Sensitivity Function

Perceptual quantization is the operation used to quantize the wavelet coefficients in order to reduce the entropy, i.e., the bit budget required to transmit the compressed image. Compression is achieved by quantization and entropy coding of the DWT coefficients. Typically, a uniform quantizer is used, implemented by dividing by a factor Q and rounding to the nearest integer; the factor Q may differ between bands. Quantization of a single DWT coefficient in a band generates an artifact in the reconstructed image. A given quantization factor in one band results in coefficient errors in that band that are approximately uniformly distributed over the quantization interval. The error image is then the sum of a lattice of basis functions with amplitudes proportional to the corresponding coefficient errors [2]. Thus, to predict the visibility of the error due to a particular quantization step, we must measure the visibility thresholds for individual basis functions and for error ensembles.

The wavelet coefficients at different subbands and locations carry information of variable perceptual importance to the HVS. In order to develop a good wavelet-based image coding algorithm that accounts for HVS features, we need to measure the visual importance of the wavelet coefficients. Psychovisual experiments were conducted to measure the visual sensitivity in wavelet decompositions: noise was added to the wavelet coefficients of a blank image with uniform mid-gray level and, after the inverse wavelet transform, the noise threshold in the spatial domain was tested.

A model that provided a reasonable fit to the experimental data is given in table 1.

Table 1: Error contrast sensitivity in the DWT domain for V = 4.

Level      A        H        D        V
  1      0.0859   0.0976   0.0904   0.0976
  2      0.1304   0.1311   0.1010   0.1311
  3      0.1613   0.1416   0.0895   0.1416
  4      0.1600   0.1188   0.0597   0.1188
  5      0.1226   0.0734   0.0280   0.0734

Figure 3: Watson error sensitivity function WES for V = 4.

As shown in figure 3 (for a viewing distance of 4), the error sensitivity increases rapidly with wavelet spatial frequency and with orientation, from the low-pass band to the horizontal/vertical orientations and then to the diagonal orientation. Its reverse form yields the wavelet JND threshold matrix (see the sketch below).
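The sketch below is one illustrative reading of that last statement: the WES sensitivities of Table 1 are inverted into a base JND matrix, which can then act as band-dependent uniform quantization steps. The reciprocal mapping and any display scaling are assumptions made here for illustration; the authors' thresholds follow Watson et al. [2].

```python
import numpy as np

# Table 1: error contrast sensitivity (WES) for V = 4,
# rows = decomposition levels 1..5, columns = orientations A, H, D, V.
WES = np.array([
    [0.0859, 0.0976, 0.0904, 0.0976],
    [0.1304, 0.1311, 0.1010, 0.1311],
    [0.1613, 0.1416, 0.0895, 0.1416],
    [0.1600, 0.1188, 0.0597, 0.1188],
    [0.1226, 0.0734, 0.0280, 0.0734],
])

# "Reverse form": higher sensitivity to an error means a lower detection
# threshold, so the base JND matrix is taken here as the reciprocal of WES
# (any amplitude scaling of the display model is assumed absorbed).
BASE_JND = 1.0 / WES

def quantize(band, level, orient_idx):
    """Uniform quantization of one subband with a band-dependent step Q."""
    q = BASE_JND[level - 1, orient_idx]      # step derived from the JND matrix
    return np.round(band / q)

def dequantize(indices, level, orient_idx):
    q = BASE_JND[level - 1, orient_idx]
    return indices * q
```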

5 Objective Quality Metrics

The wavelet transform is one of the most powerful techniques for image compression because of its similarity to the multiple-channel models of the HVS. The DWT decomposes the image into a limited number of spatial frequency channels (5 decomposition levels and 3 orientations: horizontal, vertical and diagonal), which approximates the cortical decomposition well. Despite this limitation, the quality measure remains the goal of the wavelet visible difference predictor WVDP [4], which is to visually optimize the quality performance of image coding schemes. Based on these multiple-channel models, we have developed a new objective metric built on the wavelet decomposition and on the Watson model, called the wavelet error sensitivity (WES) model, introduced in section 4. This has led us to a wavelet-based image quality metric, namely the psychometric function PF shown in figure 4, and the modified wavelet visible difference predictor MWVDP illustrated in figure 5. These metrics are used in this paper to predict visible differences between the original and degraded images with respect to the detectability of errors, for a given perceptual threshold of the WES model, within each wavelet channel. For each wavelet coefficient in every channel an error detection probability is computed from the psychometric function [13]; a Minkowski summation of all error probabilities from low to high frequencies then gives a visible difference map VDM, which yields the probability score PS measure.

Figure 4: Wavelet image quality measure: psychometric function PF.

Figure 5: Modified wavelet visible difference predictor MWVDP model.

Figure 5 shows the component parts of the MWVDP model, which is based on the VDP [14] image quality assessor. It performs, in order, wavelet transformation, computation of the wavelet detection thresholds, luminance and contrast masking, mutual masking, error calculation, probability computation based on the psychometric function, and Minkowski summation, and finally yields the probability score PS. The original and the noisy images are first transformed to the DWT domain using a 5-level wavelet decomposition based on the 9/7 biorthogonal wavelet basis. The differences between the original and noisy images (the wavelet errors) are tested against the wavelet JND thresholds to check whether the errors are above or below these thresholds, in other words whether the errors are visible or not.

A psychometric function (2) and its improved version, the MWVDP equation (3), are then applied to estimate the error detection probability for each wavelet coefficient, converting the differences, expressed as a ratio of the wavelet JND thresholds, into subband detection probabilities. This factor expresses the probability of detecting a distortion in a subband at a given location of the DWT field.

    P^PF(λ, θ, i, j) = 1 − exp( −| D(λ, θ, i, j) / JND(λ, θ, i, j) |^β )                                     (2)

    P^MWVDP(λ, θ, i, j) = 1 − exp( −| D(λ, θ, i, j) / ( TUN(λ, θ, i, j) · JND^MWVDP(λ, θ, i, j) ) |^β )       (3)

where D(λ, θ, i, j) denotes the quantization distortion at location (λ, θ, i, j), JND(λ, θ, i, j) denotes the wavelet JND threshold, TUN(λ, θ, i, j) denotes the mutual contrast masking, computed as the minimum of the contrast masking of the original and degraded images, and JND^MWVDP(λ, θ, i, j), computed in equation (4), denotes the MWVDP mutual masking threshold, obtained as the minimum of the DWT threshold elevation of the original and degraded images; it is based on an alteration factor that weights the wavelet coefficients C(λ, θ, i, j):

    JND^MWVDP(λ, θ, i, j) = max( JND, Alteration_λ · C(λ, θ, i, j) )          if C(λ, θ, i, j) > 0
                            max( JND, Alteration_λ · 2 · |C(λ, θ, i, j)| )    otherwise                       (4)

The parameter β of each metric is chosen to maximize the correspondence between the DWT error probability P(λ, θ, i, j) and the probability summation [9], [4]. Finally, we calculate the probability score PS by pooling all probabilities over all wavelet subbands [13], as in equation (5):

    PS = exp( Σ_(λ,θ,i,j) ln( 1 − P(λ, θ, i, j) ) )^β                                                         (5)

Note that the greater this factor, the better the decoded image quality. The score acts as a quality factor: it reaches its greatest value of 1 when the coding quality is excellent and, conversely, drops to values around 0 when the coding quality is poor. A compact sketch of the complete metric is given below.
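For concreteness, the following sketch chains equations (2)-(5) over a list of subbands of the original and degraded images. It is a simplified reading of the metric, not the authors' code: the alteration factors are illustrative, the mutual masking TUN is folded into the minimum of the two elevated thresholds, and beta is a free parameter rather than the fitted value.

```python
import numpy as np

BETA = 4.0  # illustrative; the paper fits beta against probability summation

def detection_probability(dist, jnd):
    """Psychometric function, eq. (2): probability that a distortion is visible."""
    return 1.0 - np.exp(-np.abs(dist / jnd) ** BETA)

def mwvdp_jnd(base_jnd, coeffs, alteration):
    """Eq. (4): altered threshold, with the non-positive-coefficient branch doubled."""
    elevated = np.where(coeffs > 0,
                        alteration * coeffs,
                        alteration * 2.0 * np.abs(coeffs))
    return np.maximum(base_jnd, elevated)

def probability_score(originals, degradeds, base_jnds, alterations):
    """Eqs. (3) and (5): pool per-coefficient probabilities into one PS value."""
    log_terms = []
    for orig, deg, jnd, alt in zip(originals, degradeds, base_jnds, alterations):
        # Mutual masking: take the smaller (more conservative) elevated threshold
        # of the original and degraded images.
        thr = np.minimum(mwvdp_jnd(jnd, orig, alt), mwvdp_jnd(jnd, deg, alt))
        p = detection_probability(orig - deg, thr)          # eq. (3), TUN folded in
        log_terms.append(np.sum(np.log1p(-np.clip(p, 0.0, 1.0 - 1e-12))))
    return float(np.exp(sum(log_terms)) ** BETA)            # eq. (5)
```

A PS close to 1 then signals a visually transparent reconstruction; a PS near 0 signals a visibly degraded one.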

6 Experimental Results and Discussion

We apply the MWVDP quality metric to coders such as EBCOT [7] and a perceptually optimized wavelet embedded zerotree image coder, POEZIC [3], using 8 bits-per-pixel gray-scale images, and compare it to the standard SPIHT [5] coder. The 512x512 gray-scale images selected in this paper are Lena, Barbara, Zelda, Goldhill, Mandrill, and Boat. The MWVDP metric correlates well with the mean opinion score MOS and gives a more significant evaluation than the PSNR measure.

Figure 6: POEZIC versus SPIHT: PS probability scores as a function of bit rate at viewing distances V = 1, 3, 6, and 10, using the modified wavelet visible difference predictor MWVDP metric; from top to bottom, results are for the Goldhill and Mandrill images.

Figure 6 shows the PS measure versus the targeted bit rate for the 512x512 Goldhill and Mandrill images encoded with both the SPIHT and POEZIC algorithms. At a very low bit rate of 0.0625 bpp, the mouth, nose, and eye regions are hardly recognizable in the SPIHT coded image, whereas those regions in the POEZIC coded image exhibit detailed visual information and are perceptually recognizable. At a low bit rate of 0.125 bpp, SPIHT still decodes a very blurred image, while POEZIC begins to give acceptable visual quality over the face region. Increasing the bit rate to 0.25 bpp and 0.5 bpp, the visual quality of the POEZIC coded images is still superior to that of the SPIHT coded images. When a bit rate of 0.5 bpp is reached, the POEZIC coded image approaches uniform resolution, and the visual quality of the decoded SPIHT image and of its visually optimized version POEZIC are almost indistinguishable.

Figure 7 shows the MWVDP comparison of the POEZIC and SPIHT compressed Goldhill and Mandrill images with varying observation distance V for

a fixed targeted bit rate of 0.015625 bpp, 0.0625 bpp and 0.25 bpp. The probability score PS is given as a function of the viewing distance. A significant quality gain is achieved by the POEZIC coding algorithm compared with the standard, classical SPIHT algorithm. Our goal of quality optimization is reached over the entire range of viewing distances, which is consistent with the subjective quality.

Figure 7: POEZIC versus SPIHT: PS probability scores as a function of viewing distance at 0.5 bpp, 0.125 bpp and 0.03125 bpp, using the modified wavelet visible difference predictor MWVDP metric; from top to bottom, results are for the Goldhill and Mandrill images.

Figure 8 illustrates the POEZIC and SPIHT compressed versions of the Lena image at multiple targeted bit rates for a fixed viewing distance of V = 4 times the image width. At low bit rates such as 0.03125 bpp and 0.0625 bpp, POEZIC maintains acceptable quality over the whole image compared with the standard SPIHT coder. Again, a visually high-quality, uniform-resolution image is obtained from the same bit stream once a sufficient bit rate of 0.5 bpp is reached. The values of the PS quality metric for both compressed Lena images (POEZIC right, SPIHT left) are given above each compressed image.

7 Conclusion

Our quality evaluation system forms a major part of a larger project on real-time video coding and quality assessment over a wireless GSM network infrastructure. It can be applied to various embedded coders such as EZW, SPIHT, EBCOT and JPEG2000, and it improves the perceptual quality measurement of the predicted frames in wavelet video coding based on HVS quality properties.

Figure 8: Lena image compression results. The images in the left column are coded with the standard SPIHT coder only; the images in the right column are coded with the Embedded ZeroTree Wavelet Image Coding based on Perceptual Optimization Tools (POEZIC) algorithm. The bit rates from top to bottom are 0.03125 bpp, 0.0625 bpp, 0.125 bpp, 0.25 bpp and 0.5 bpp, respectively, at an observation distance of V = 4.

References

[1] A. B. Watson and J. A. Solomon, A model of visual contrast gain control and pattern masking, J. Opt. Soc. Amer. A, (1997), vol. 14, 2379-2391.

[2] A. B. Watson, G. Y. Yang, J. A. Solomon, and J. Villasenor, Visibility of wavelet quantization noise, IEEE Trans. Image Processing, (1997), vol. 6, no. 8, 1164-1175.

[3] A. Bajit, M. Nahid, A. Tamtaoui, and E. H. Bouyakhf, A perceptually optimized wavelet embedded zerotree image coder, International Journal of Computer Science, (2007), vol. 4, no. 4, 296-301.

[4] A. P. Bradley, A wavelet visible difference predictor, IEEE Transactions on Image Processing, (May 1999), vol. 8, no. 5.

[5] A. Said and W. A. Pearlman, A new, fast and efficient image codec based on set partitioning in hierarchical trees, IEEE Trans. Circuits and Systems for Video Technology, (June 1996), vol. 6, 243-250.

[6] A. Cohen, I. Daubechies, and J. C. Feauveau, Biorthogonal bases of compactly supported wavelets, Commun. Pure Appl. Math., (1992), vol. 45, 485-560.

[7] EBCOT: Embedded Block Coding with Optimized Truncation, ISO/IEC JTC1/SC29/WG1 N1020R, (Oct. 1998).

[8] I. Höntsch and L. J. Karam, Adaptive image coding with perceptual distortion control, IEEE Trans. Image Process., (March 2002), vol. 11, no. 3, 213-222.

[9] J. G. Robson and N. Graham, Probability summation and regional variation in contrast sensitivity across the visual field, Vis. Res., (1981), vol. 21, 409-418.

[10] J. M. Foley, Human luminance pattern-vision mechanisms: masking experiments require a new model, J. Opt. Soc. Amer. A, (1994), vol. 11, no. 6, 1710-1719.

[11] J. M. Shapiro, Embedded image coding using zerotrees of wavelet coefficients, IEEE Trans. Signal Processing, (1993), vol. 41, 3445-3462.

[12] J. Ross and H. D. Speed, Contrast adaptation and contrast masking in human vision, Proc. Roy. Soc. Lond. B, (1991), 61-69.

[13] M. J. Nadenau, J. Reichel, and M. Kunt, Wavelet-based color image compression: exploiting the contrast sensitivity function, (2000).

[14] S. Daly, The visible differences predictor: an algorithm for the assessment of image fidelity, in Digital Images and Human Vision, Cambridge, MA: MIT Press, (1993), 179-206.

[15] S. Daly, W. Zeng, J. Li, and S. Lei, Visual masking in wavelet compression for JPEG 2000, in Proceedings of the IS&T/SPIE Conference on Image and Video Communications and Processing, San Jose, CA, (January 2000), vol. 3974.

Received: June, 2008