Error Protection of Wavelet Coded Images Using Residual Source Redundancy


Error Protection of Wavelet Coded Images Using Residual Source Redundancy

P. Greg Sherwood and Kenneth Zeger
University of California, San Diego, 9500 Gilman Dr., La Jolla, CA 92093
{sherwood,zeger}@code.ucsd.edu

Abstract

We investigate the residual redundancy in the Shapiro and Said-Pearlman wavelet-based image compression algorithms. The goal is to use redundancy inherent in the output of these source coders to provide joint source-channel decoding, in order to improve error correction and detection performance for transmission across a noisy channel.

1. Introduction

Shannon's separation principle states that source and channel coding can be treated separately with no performance loss, at least with unlimited complexity and delay. In practical systems, however, some redundancy remains at the output of the source coder, and the receiver can exploit it to improve channel decoding performance. Practical source coding algorithms for voice/audio, images, and video do not operate perfectly, and the remaining redundancy at the output of the source coder, referred to as residual redundancy, can be exploited by joint source-channel decoding. This paper investigates residual redundancy for certain progressive image compression algorithms. Specifically, we report experimental observations and progress in our on-going research toward improving channel decoding using source information.

Shapiro's [6] embedded zerotree wavelet (EZW) image compression algorithm exploits the dependency that exists in typical images among spatially related wavelet coefficients in different subbands. As bit planes of the image are encoded with the EZW algorithm, a zerotree symbol allows efficient coding of large groups of low-magnitude coefficients.
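As a concrete illustration of the bit-plane coding described above (a minimal sketch of ours, not the authors' implementation), a coefficient first becomes significant at the highest bit plane n for which its magnitude is at least 2^n, and parents in the wavelet tree typically reach significance at an earlier (higher) plane than their children:

```python
def significance_bitplane(coeff, num_planes=8):
    """Return the index of the highest bit plane (threshold 2^n) at which
    |coeff| is significant, or None if it is never significant."""
    mag = abs(coeff)
    for n in range(num_planes - 1, -1, -1):  # most significant plane first
        if mag >= 2 ** n:
            return n
    return None

# Hypothetical parent and children in a wavelet tree: the parent becomes
# significant at an earlier (higher) bit plane than its children.
coeffs = {"parent": 181.0, "child_a": 46.0, "child_b": 3.0}
planes = {name: significance_bitplane(c) for name, c in coeffs.items()}
print(planes)  # {'parent': 7, 'child_a': 5, 'child_b': 1}
```

A bit-plane coder emits, per plane, which coefficients newly cross the threshold; the zerotree symbol summarizes whole subtrees that do not.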
While the EZW algorithm and the refined SPIHT algorithm of Said and Pearlman [4] achieve excellent compression results, they are highly susceptible to transmission errors due to their use of variable-length coding and a large amount of state information. (This work was supported in part by the National Science Foundation.) Even single-bit transmission errors often lead to loss of synchronization and a total collapse of decoded image quality. Two examples demonstrating the effects of errors are shown in Figures 1 and 2. Figure 1 shows the effect of an error in a single sign bit of a wavelet coefficient with large magnitude; the bit error in this example affects only one wavelet coefficient. The second example, in Figure 2, demonstrates a case where the bit error results in loss of synchronization in the decoder. In both cases, the transmission error occurred among the first transmitted bits, and the reconstructed image is shown at a rate of 0.25 bpp.

In a typical image, the bits where errors usually lead to loss of synchronization make up a significant fraction of the total, often 70 to 80 percent of the bits for images compressed to rates of 0.5 bits per pixel and below. This observation is demonstrated in the stacked bar graph in Figure 3 for several bit planes of the Lena image. The visual effect of a bit error generally decreases the farther into the bit stream the error occurs, due to the progressive nature of the coder. The fact that bit errors so often result in extremely unnatural-looking images suggests that decoding a corrupted bit stream (i.e., into EZW or SPIHT symbols) should produce parsed symbols that are very unlikely for natural images.

It is precisely the frequent occurrence of improbable symbol sequences resulting from bit-error corrupted image transmissions that we study, in an effort to use knowledge of source information to combat transmission errors. In what follows, we investigate the residual redundancy in the output of a zerotree wavelet coder based on the SPIHT algorithm, together with the effects of uncorrected channel errors.

2. Joint Source-Channel Decoding

The idea behind joint source-channel decoding is to use the residual redundancy in a compressed source to assist

in the channel decoding operation.

Figure 1. 512x512 Lena (at 0.25 bpp) with a single sign bit error in a large-magnitude coefficient.

Figure 2. 512x512 Lena (at 0.25 bpp) with a single sorting bit error leading to error propagation.

If the source coder were ideal for a noiseless channel, the compressed bit stream would consist of independent, equally likely bit values. In practice, the source coder cannot perfectly compress the data, and it is often infeasible to add additional lossless compression at the encoder, such as arithmetic coding, due to complexity constraints or compliance with standards. Moreover, the results in [7] indicate that in a noisy environment it may be better to exploit the residual redundancy at the receiver than to remove it. The residual redundancy in a source can be utilized by a channel decoder to correct bit errors that might otherwise go unnoticed. This concept was exploited, for example, in speech and image coding applications in [1, 3, 5, 7]. As noted in [3], when smoothing of the decoded output can be used to mitigate the effects of channel errors, there is residual redundancy which can instead be exploited by the channel decoder to directly improve channel decoding. An important problem, for a given source and source coding algorithm, is to characterize the nature of this residual redundancy.

3. Illegal Sequences

If a source coder has the property that not every possible bit stream is a potential output of the coder, then there is some inherent redundancy in the coder (Huffman coding, for example, does not have this property). In some cases it is more convenient to leave this redundancy in the coder, because the extra transmission rate is very small and removing it can be difficult. Moreover, the decoder can use knowledge of such illegal sequences to detect errors: if the decoder parses an illegal sequence (one the encoding algorithm would never produce), then it can be certain an error occurred.
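To make the joint source-channel decoding idea of Section 2 concrete, the following minimal sketch (ours, not from the paper) shows a MAP decision for a single bit sent over a binary symmetric channel. A source prior obtained from residual redundancy can override the raw channel observation:

```python
import math

def map_bit_decision(received, crossover, p_zero):
    """MAP decision for one bit over a binary symmetric channel (BSC):
    combine the channel likelihood with a source prior p(bit = 0)
    derived from residual redundancy."""
    # log P(received | sent = b) + log P(b), for b in {0, 1}
    ll0 = math.log(1 - crossover if received == 0 else crossover) \
          + math.log(p_zero)
    ll1 = math.log(crossover if received == 0 else 1 - crossover) \
          + math.log(1 - p_zero)
    return 0 if ll0 >= ll1 else 1

# With a strong prior toward 0, a received 1 may still decode as 0;
# with a flat prior the decision follows the channel alone (ML).
print(map_bit_decision(received=1, crossover=0.2, p_zero=0.9))  # 0
print(map_bit_decision(received=1, crossover=0.2, p_zero=0.5))  # 1
```

The crossover probability and prior values here are illustrative only; in the paper's setting the prior would come from statistics of the SPIHT symbol stream.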
The Said and Pearlman algorithm has several illegal sequences, all of which can be used for bit error detection. The illegal sequences are related to the extension of the set of significant coefficients (i.e., those coefficients with magnitude at least as large as the current bit-plane threshold) through the extension of the tree of coefficients under consideration. The code syntax allows for the possibility of specifying that a significant coefficient lies in a given branch of the tree while declaring all such coefficients insignificant; clearly the encoder would never produce such a sequence. Specifically, the algorithm processes 2x2 groups of sibling wavelet coefficients (i.e., those coefficients with the same parent in the tree) as a unit when checking for new significant descendants. If none of these four coefficients had a significant descendant in an earlier bit plane, then a single bit indicates whether any of the four has a significant descendant in the current bit plane. When a significant descendant is indicated, four bits specify which members of the 2x2 group have significant descendants. Only 15 of the 16 possible 4-bit sequences are allowed, since at least one of the coefficients must have a significant descendant.

The other illegal sequence is more dispersed in the bit stream. Immediately after a coefficient is declared to have significant descendants, the significance of the 2x2 group of its children is reported. The group of children is then placed at the end of the current list of sibling groups so that it can be checked for significant descendants at a later time. An illegal condition occurs if a sibling group is added but none of its coefficients or descendants is ever reported to be significant. The bits leading to this condition tend to be dispersed in the bit stream, with a spacing that depends on the number of sibling groups in the list. These illegal conditions all occur during the descendant test phase of the SPIHT algorithm.

Figure 3. Distribution of bit types (node test, descendant test, sign, refinement) vs. bit plane number for the 512x512 Lena image.

Processing in the SPIHT algorithm is accomplished with a few sub-passes through the data: all currently insignificant nodes in the list are tested for significance, then all elements of the list are checked for significant descendants, and finally all significant coefficients are refined. This grouping of tests can lead to a large number of bits between an error and its detection by an illegal condition, especially as the coding rate increases, since the sub-passes become longer. Interleaving the tests reduces this problem, since the descendant tests are then dispersed throughout the bit stream instead of concentrated in blocks. The impact of this re-ordering on source coding performance is very small, and it may even improve performance for some images at certain rates; the two orderings are of course equal at the point where each bit plane is complete.

To evaluate the error detection performance of illegal sequences, both the probability of detection and the number of bits from the error to the point of detection were examined. The conditional cumulative distributions of the number of bits before detection (conditioned on the event that the error was detected) are shown for several images in Figure 4. These distributions give the fraction of errors detected within a given number of bits for single-bit errors. The performance is similar over a range of images, with median detection delays on the order of 10^1-10^2 bits and mean detection delays on the order of 10^2-10^3 bits. The conditional distribution was considered because errors in sign and refinement bits do not lead to error propagation and so cannot cause illegal sequences.

Figure 4. Detection performance for single-bit errors: conditional cumulative distribution of the number of bits before detection, given that the error was detected, for the Lena, Baboon, Plane, and Barbara images.

An interesting question is how the importance of an error (in terms of its effect on PSNR) is related to the detection delay. Errors which go undetected, or which lead to long delays, will hopefully also have less impact on decoded image quality. The scatter plot in Figure 5 shows the detection delay and decoded PSNR for many single-bit error trials. The points plotted at the largest detection delay (i.e., those in the upper right corner of the plot) correspond to undetected errors, since the entire compressed image contained fewer bits. While some of these undetected errors occurred in descendant or node test bits, all had a relatively small impact on image quality. The plot also shows that, in general, the errors with longer delays are the ones which result in less degradation of PSNR.
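The first illegal condition described above can be checked in a few lines. The following is a hypothetical decoder-side sketch (the function name `check_group_pattern` is ours, not SPIHT terminology): once a descendant-test bit announces that some member of a 2x2 sibling group has a significant descendant, the four bits that follow cannot all be zero.

```python
def check_group_pattern(group_bits):
    """Check the 4-bit pattern reporting which members of a 2x2 sibling
    group have significant descendants. The all-zero pattern is illegal:
    it would contradict the single preceding bit that announced at least
    one significant descendant in the group."""
    assert len(group_bits) == 4
    return any(b == 1 for b in group_bits)  # True = legal pattern

# 15 of the 16 patterns are legal; only 0000 signals a corrupted stream.
legal = [bits for bits in
         ((a, b, c, d) for a in (0, 1) for b in (0, 1)
                       for c in (0, 1) for d in (0, 1))
         if check_group_pattern(bits)]
print(len(legal))  # 15
```

A real decoder would run such a test on every descendant-report group and abort (or flag the channel decoder) on the first violation.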
This relationship is very different from conventional error detection using parity checks on the bit values, whose performance would be independent of decoded PSNR. The performance was also tested with burst errors, which

are typical of the errors at the output of a decoder for convolutional codes (i.e., a Viterbi decoder). Bursts of fixed duration, with a given probability that an individual bit is in error, were inserted in independent trials at many different locations in the bit stream. The error detection performance was very similar to the single-bit error case.

Figure 5. Number of bits before error detection vs. decoded PSNR for single-bit errors in the compressed 512x512 Lena image.

4. Wavelet Coefficient Dependencies

In order to provide soft likelihood information to the channel decoder, the probability of any particular input sequence must be evaluated. Since the encoding operates on wavelet coefficients, it is natural for the decoder to utilize dependencies in the wavelet domain to evaluate the likelihood of a sequence. An important question is what dependencies exist and how they affect the distribution of code sequences. The zerotree method already takes advantage of some dependency among wavelet values, using the fact that if a coefficient has magnitude below a given threshold, then all its descendants in the wavelet tree also tend to have magnitudes below that threshold. For a bit-plane encoder like the EZW algorithm, this implies that parent nodes tend to become significant before their descendants. To estimate likelihood information, it is important to estimate the distribution of the bit plane at which a child becomes significant, given the parent value (or its quantized representation available at the decoder). Figure 6 shows a contour plot of the joint distribution of parent and child log-magnitudes for the Lena image; the same parent-child dependency was exploited in [2] for image compression.
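The parent-child statistic described above can be estimated directly from coefficient pairs. The sketch below is our own simplified illustration (function names and binning are assumptions, not the paper's method): build a joint histogram of parent/child log2-magnitudes from training pairs, then score decoded pairs against it; a corrupted stream tends to land in low-probability bins.

```python
import math
from collections import Counter

def parent_child_log_hist(pairs, bin_width=1.0):
    """Estimate the joint distribution of parent/child log2-magnitudes
    from (parent, child) wavelet coefficient pairs, as a normalized
    2-D histogram keyed by (parent_bin, child_bin)."""
    counts = Counter()
    for p, c in pairs:
        pb = math.floor(math.log2(max(abs(p), 1.0)) / bin_width)
        cb = math.floor(math.log2(max(abs(c), 1.0)) / bin_width)
        counts[(pb, cb)] += 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def log_likelihood(pairs, hist, floor=1e-6):
    """Score decoded parent/child pairs against the natural-image model;
    improbable pairs (unseen bins) are clamped to a small floor
    probability, so corrupted streams receive a much lower score."""
    score = 0.0
    for p, c in pairs:
        pb = math.floor(math.log2(max(abs(p), 1.0)))
        cb = math.floor(math.log2(max(abs(c), 1.0)))
        score += math.log(hist.get((pb, cb), floor))
    return score
```

In a joint source-channel decoder, such a score could serve as the soft metric that ranks candidate decoded sequences.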
Examining the joint distribution of parent and child log-magnitudes for several natural images reveals a dependency for large values of the parent, which is the typical region for low-rate image coding.

Figure 6. Joint distribution of parent-child log2 magnitudes for the 512x512 Lena image.

The joint distribution is typically very similar over a large range of natural images. In the progressive decoder, the distribution provides some information on when a child should become significant given the parent's decoded value. Transmission errors tend to cause more coefficients to be declared significant in a single bit plane than is normal for a natural image. Also, the tree of wavelet coefficients under consideration tends to extend deeper in a single bit plane. In a natural image, new significant coefficients are usually children of current leaf nodes of the tree, or perhaps grandchildren, but rarely any deeper descendants. After a loss of synchronization due to bit errors, the decoded symbols lead to unusual parent-child magnitude relationships. Therefore, a metric based on probability estimates from the parent-child joint distribution can allow such errors to be recognized. Initial results are encouraging, and more extensive testing is underway to draw firmer conclusions about the effectiveness of the metric.

5. Conclusion

We have identified illegal symbol sequences in the SPIHT zerotree coder which can be used for error detection. Detection performance was evaluated in terms of probability of detection and detection delay, since quick detection

can help reduce the number of candidate decoded sequences considered in the channel decoder. The results reveal detection delays on the order of 10^1-10^2 bits, with smaller delays occurring for errors that lead to more severe image quality degradation. We have also proposed the use of the parent-child magnitude joint distribution to construct a soft likelihood metric. On-going and future work includes evaluating the performance of the parent-child distribution metric in distinguishing between noisy and natural decoded images, including other dependencies in the metric computation, integrating the metric with channel decoding algorithms, and evaluating the performance of the overall joint source-channel decoder.

References

[1] F. I. Alajaji, N. C. Phamdo, and T. E. Fuja. Channel codes that exploit the residual redundancy in CELP-encoded speech. IEEE Transactions on Speech and Audio Processing, 4(5):325-336, Sept. 1996.
[2] R. W. Buccigrossi and E. P. Simoncelli. Progressive wavelet image coding based on a conditional probability model. In Proc. ICASSP '97, pages 2957-2960, 1997.
[3] J. Hagenauer. Source-controlled channel decoding. IEEE Transactions on Communications, 43(9):2449-2457, Sept. 1995.
[4] A. Said and W. A. Pearlman. A new, fast, and efficient image codec based on set partitioning in hierarchical trees. IEEE Transactions on Circuits and Systems for Video Technology, 6(3):243-250, June 1996.
[5] K. Sayood and J. C. Borkenhagen. Use of residual redundancy in the design of joint source/channel coders. IEEE Transactions on Communications, 39(6):838-846, June 1991.
[6] J. M. Shapiro. Embedded image coding using zerotrees of wavelet coefficients. IEEE Transactions on Signal Processing, 41(12):3445-3462, Dec. 1993.
[7] W. Xu, J. Hagenauer, and J. Hollman. Joint source-channel decoding using the residual redundancy in compressed images. In Proceedings of ICC/SUPERCOMM '96, International Conference on Communications, pages 142-148, 1996.