Boundary Artifact Minimization on Best Matching Blocks in Wavelet-Based Video Compression

WEITING CAI and MALEK ADJOUADI
Center for Advanced Technology and Education, Department of Electrical & Computer Engineering, Florida International University, 10555 W. Flagler Street, Miami, FL 33174

Abstract: The widespread use of digital video creates a pressing need for video compression techniques that conserve storage space and minimize bandwidth utilization. The discrete wavelet transform (DWT) has demonstrated intrinsic multiresolution and scalability advantages in image compression, without the annoying blocking artifacts. Motion estimation and motion compensation (MEMC) combined with the wavelet transform has therefore been applied to improve video compression efficiency and accuracy. However, circular convolution in the wavelet transform generates border distortion, because the periodic extension is discontinuous at the two ends of the signal, and the distortion becomes more apparent as more levels of decomposition are involved. In addition, hierarchical motion vector refinement needs accurate motion estimates from the lower resolution subframes, without further distortion after multiple convolutions with the wavelet filter. This paper investigates the use of the symmetric-extended wavelet transform (SWT) to reduce the boundary distortions experienced in multiresolutional motion estimation, and thereby to improve compression performance in scalable video applications.

Keywords: Video compression, motion estimation and motion compensation, symmetric-extended wavelet transform, entropy coding.

1 Introduction

The main idea of video compression is to decorrelate the data and reduce the redundancies within and between video frames, by means of spatial, temporal, psychovisual and coding compression, so that less data are used to represent the same amount of information.
Wavelet-based video compression [1-4] has been studied for motion estimation by means of the complex-valued wavelet transform [5], the overcomplete discrete wavelet transform [6] and wavelet packets [7]. The underlying signal processing in the wavelet transform is the convolution between the decimated input signal and the wavelet filters. Since image signals are not continuous at their boundaries, implementing this filtering on a finite-length signal raises the problems of coefficient expansion and boundary distortion. Critical subsampling from wavelet filter convolution under the perfect-reconstruction constraint requires circular rather than linear convolution to make the minimum half-sample decomposition possible. However, circular convolution generates border distortions because the periodic extension is discontinuous at the two ends of the image signal. Proper boundary extension of the finite-length signal is necessary for subframe motion estimation without boundary discontinuities and for perfect reconstruction without coefficient expansion. Symmetric extension of a signal before circular convolution has been shown to reduce the jumping artifacts of plain periodic extension [8, 9, 10]. The symmetric-extension properties of signals and filters [11] inspire an approach to video compression with better boundary conditions for finding the best matching blocks during multiresolutional motion estimation. The layout of this paper is as follows: Section 2 outlines wavelet-domain motion estimation and compensation. Section 3 describes the symmetric-extended wavelet transform. Section 4 focuses on wavelet-domain symmetric-extended motion estimation.
Section 5 presents the implementation of the wavelet-based video encoder and decoder. Section 6 shows the compression performance improvements. Section 7 provides an assessment of the results attained in this research work.

2 Wavelet Domain Motion Estimation and Compensation

Since the motion activities of the different orientation subframes in the wavelet-transformed domain are highly correlated, motion estimation and motion compensation can be implemented globally rather than exhaustively, using a multiresolution description of the video frames: frame pyramids are built, and an initial wavelet-domain search is successively refined. At the lowest resolution level, a full search algorithm is applied on the approximation subframe as an initial estimate of the motion vector field. The motion vectors of the approximation subframe at a higher resolution level are refined from those of the adjacent lower-resolution approximation subframe after doubling, and they also represent the motion vectors of the three detail subbands at the same level. Thus, one motion vector array is shared by all the subbands of a level, with no extra motion estimation computation or associated motion information:

MV_{i-1} = 2 MV_i + ΔMV_{i-1},

where MV_i and MV_{i-1} are the motion vectors at level i and at its higher resolution level i-1, and ΔMV_{i-1} is the refinement found at level i-1. The refinement process continues successively until the required dimension and scalability of the video frame are reached.

3 Symmetric-Extended Wavelet Transform

To smooth the boundary discontinuity, a finite-length signal is first symmetrically extended before being periodically repeated for circular convolution. A symmetric signal has linear phase, and the resulting signal is also symmetric when convolved with linear-phase filters [11]. Thus, combining symmetric extension of the signal with a biorthogonal symmetric wavelet filter is an effective approach to reducing the boundary distortion after half-resolution decomposition. The main concern is that the signal might no longer be symmetric after decimation, in which case perfect reconstruction at half resolution cannot be realized. Therefore, the extension and decimation must be arranged so that the approximation and detail signals after decimation are still symmetrically extended, making it possible to keep only the first half of the samples as the decomposed signal. A digital filter is called a W_f filter if its impulse response is a finite odd-length symmetric sequence, and an H_f filter if it is a finite even-length symmetric sequence. Assume that the signal to be processed has finite length L.
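The effect of the extension choice on the border can be illustrated with a small filtering sketch. This is an illustrative experiment, not the paper's filter: numpy's 'wrap' and 'symmetric' padding stand in for periodic and symmetric extension, and the 5-tap averaging filter is an arbitrary example.

```python
import numpy as np

# A ramp has very different values at its two ends, so periodic wrap-around
# creates exactly the kind of boundary discontinuity described above.
signal = np.arange(16, dtype=float)          # ramp: ends at 0 and 15
lowpass = np.ones(5) / 5.0                   # toy symmetric averaging filter

def filter_with_extension(x, h, mode):
    """Pad x with the given numpy extension mode, then filter."""
    pad = len(h) // 2
    x_ext = np.pad(x, pad, mode=mode)        # 'wrap' = periodic, 'symmetric'
    return np.convolve(x_ext, h, mode='valid')

periodic = filter_with_extension(signal, lowpass, 'wrap')
symmetric = filter_with_extension(signal, lowpass, 'symmetric')

# Error at the left border relative to the original sample value:
print(abs(periodic[0] - signal[0]))   # large: wrapped samples 15, 14 leak in
print(abs(symmetric[0] - signal[0]))  # small: mirrored samples match the trend
```

For this ramp the periodic border error is 6.4 while the symmetric one is 0.8, which is the "jumping artifact" the symmetric-extended transform is designed to remove.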
In a whole-point symmetric extension W_s of the signal, all samples except the two end points are mirrored to form one period before the signal is circularly extended; each period thus has length 2L - 2. Figure 1 shows the original signal samples as black dots, the symmetric extension without the first and last samples as gray dots, and the periodic extension as white dots.

Figure 1. W_s extension of the finite-length signal

In a half-point symmetric extension H_s, the whole original signal is mirrored into one period before it is periodically extended. As shown in Figure 2, the original signal (black dots) is symmetrically extended into one period (gray dots) and then circularly extended (white dots).

Figure 2. H_s extension of the finite-length signal

If the signal is symmetrically extended except for the first sample point to form a period and then circularly extended, this is called a (1,2) mode extension of the signal; if it is symmetrically extended except for the last sample point before the circular extension, it is called a (2,1) mode extension. These are shown in Figures 3 and 4, respectively.

Figure 3. (1,2) mode extension of the finite-length signal
Figure 4. (2,1) mode extension of the finite-length signal

To realize multilevel decomposition while keeping the decomposed signals at half length, the discrete wavelet transform uses circular convolution. Although this solves the coefficient-expansion problem, boundary distortion remains because of the jump at both ends of the signal caused by the periodic padding. The solution must therefore keep the decomposed signal at half length after circular convolution with the extended signal while also dealing with the boundary artifacts.
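The four extension modes can be written out concretely. The sketch below builds one period of each mode for a toy length-4 signal; the period lengths follow directly from which end points are repeated.

```python
import numpy as np

x = np.array([1, 2, 3, 4])                 # toy signal, L = 4

ws  = np.concatenate([x, x[-2:0:-1]])      # W_s: neither end point repeated
hs  = np.concatenate([x, x[::-1]])         # H_s: whole signal mirrored
m12 = np.concatenate([x, x[:0:-1]])        # (1,2): first sample not repeated
m21 = np.concatenate([x, x[-2::-1]])       # (2,1): last sample not repeated

print(ws)    # [1 2 3 4 3 2]      one period of length 2L-2
print(hs)    # [1 2 3 4 4 3 2 1]  one period of length 2L
print(m12)   # [1 2 3 4 4 3 2]
print(m21)   # [1 2 3 4 3 2 1]

# The circular (periodic) extension then simply tiles one period:
extended = np.tile(ws, 3)
```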
As observed in Figure 5, half decimation of a W_s signal is again a symmetric-extended signal, in (1,2) extension mode (the black samples) or in (2,1) extension mode (the white samples); half downsampling of an H_s signal, in contrast, has no symmetric properties, as illustrated in Figure 6 for both the white and the black decimated samples.

Figure 5. Half decimation on a W_s signal
Figure 6. Half decimation on an H_s signal

Since the original signal is almost doubled in length by the symmetric extension before circular convolution with the wavelet filters, the decimated signals must be symmetric so that half-length decomposition remains possible for perfect reconstruction. Thus, the wavelet-filtered signal needs to be a W_s signal so that its half decimations are still symmetric. To obtain a W_s signal after circular convolution with a biorthogonal filter, the symmetry mode of the extended signal must match that of the applied filter: for transformation with a W_f filter, the signal is W_s extended and made periodic; for transformation with an H_f filter, the signal is H_s extended and made periodic. The decimation of the convolved W_s signal is then in (1,2) or (2,1) extension mode, and only the first half of the samples needs to be saved in the decomposition for perfect reconstruction.

4 Wavelet Domain Symmetric-Extended Motion Estimation

In multiresolution motion estimation and compensation, as more levels of decomposition are applied, the circularly extended boundary samples convolved with the wavelet filters generate deeper border distortions in the subframes. Therefore, in this study the subframes are decomposed using the symmetric-extended wavelet transform to obtain better matching blocks from motion estimation, so that the compensated frame has lower entropy and the compression efficiency increases. The causal biorthogonal 9/7 filter is used because its coefficients are close to orthogonal, providing energy conservation as a factor of compression [12]. The signal characteristics and the corresponding symmetry modes along the transformation procedure are shown in Figure 7.

Figure 7. Signal characteristics in the symmetric-extended wavelet transform using the biorthogonal 9/7 filter

Before convolving with the biorthogonal 9/7 filter, an odd-length symmetric W_f filter, the finite-length signal is first W_s extended to length 2L - 2 as the input to the wavelet transformer (filtering and downsampling). The convolution is aligned so that the center of the filter sits on the first sample point of a signal period. Based on the analysis of the filter outputs, the wavelet transform decomposes the W_s signal into an approximation part in (1,2) extension mode from the lowpass branch and a detail part in (2,1) extension mode from the highpass branch. The decomposed approximation subframes are obtained by truncating the (1,2)-mode extended signal to half resolution in both the horizontal and vertical directions. After the symmetric-extended subframes of all levels are obtained, multiresolutional motion estimation and compensation follows the steps introduced in Section 2.

5 Implementation

The basic structure of the wavelet-based video encoder is shown in Figure 8. Because high frequencies are concentrated at the sharp edges of motion-compensated blocks, and because the small spatial correlation of the prediction-error blocks decreases coding efficiency, the wavelet transform is applied before motion estimation; it has been demonstrated that the DWT applied to the original video frames outperforms the DWT operated on the differential frame [13]. A series of operations removes spatial, temporal, psychovisual and coding redundancies. A color space transform (CST) decorrelates the color data by converting the regular RGB color space into the YCrCb luminance-chrominance color space.
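The CST step can be sketched as follows. The paper does not specify its conversion matrix, so the standard BT.601 coefficients are assumed here as the usual choice for an RGB-to-YCrCb transform:

```python
import numpy as np

def rgb_to_ycrcb(rgb):
    """RGB -> Y, Cr, Cb using the common BT.601 full-range coefficients
    (an assumption; the paper only names the target color space)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b   # luminance
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    return np.stack([y, cr, cb], axis=-1)

pixel = np.array([[[255.0, 0.0, 0.0]]])      # a 1x1 pure-red "frame"
print(rgb_to_ycrcb(pixel)[0, 0])             # Y ~ 76.2, Cr = 255.5, Cb ~ 85.0
```

After this transform the luminance plane carries most of the signal energy, which is what makes the later per-channel wavelet coding effective.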
Due to mean shifting and quantization, a large portion of the coefficients are zeroed out, and the run lengths of zeros in a subband are counted. Dynamic entropy coding is then computed from the statistical distributions of the current frame data, including run lengths, motion vectors, and the non-zero coefficients. Since some intermediate categories may contain no coefficients, each category and its corresponding basecode are transmitted as a pair to the decoder, with empty categories excluded. The data in the bit stream are ordered from motion vectors to coefficients, from luminance to chrominance, from approximation to detail subbands, and from low to high resolution levels. The compressed data, together with side information including the frame size, the filters used, transformation information and the dynamic Huffman tables, are sent to the decoder.

The architecture of the entropy decoder is shown in Figure 9. The motion vectors and the wavelet reference frame for motion compensation generate the wavelet-domain prediction-error frame. Adding the compensated frame and the prediction-error frame reconstructs an approximation of the original wavelet-transformed frame. The inverse wavelet transform is applied before the inverse color space transform (ICST) from YCrCb back to RGB to produce the reconstructed frame.

Figure 8. Proposed architecture of the wavelet video encoder
Figure 9. Proposed architecture of the wavelet video decoder

Figure 10. Wavelet domain compression using PME and SME methods on foreman:
a) Motion vector fields from the PME method; b) Reconstructed frame from the PME method (bit rate = 0.504 bpp, PSNR = 31.615 dB); c) Motion vector fields from the SME method; d) Reconstructed frame from the SME method (bit rate = 0.512 bpp, PSNR = 32.294 dB)
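The run-length step of the encoder described above can be sketched minimally. The (run, value) pairing below is an illustrative assumption in the spirit of the text, not the paper's exact symbol format:

```python
from collections import Counter

def run_length_pairs(coeffs):
    """Collapse a scan of quantized coefficients into (zero_run, value)
    pairs; trailing zeros are flushed as (run, 0)."""
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    if run:
        pairs.append((run, 0))
    return pairs

subband = [0, 0, 5, 0, 0, 0, -2, 1, 0, 0]   # toy quantized subband scan
pairs = run_length_pairs(subband)
print(pairs)                                # [(2, 5), (3, -2), (0, 1), (2, 0)]

# Per-frame symbol statistics like these would feed the dynamic Huffman table:
freqs = Counter(pairs)
```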

Figure 11. Wavelet domain compression using PME and SME methods on carphone:
a) Motion vector fields from the SME method; b) Reconstructed frame from the SME method (bit rate = 0.508 bpp, PSNR = 35.357 dB)

6 Results

Since the current frame simply takes its reference from former or future frames in a specific interpolation mode, the optimization of motion estimation and compensation can be considered a procedure for improving the performance of a reference-current frame pair. The discrete wavelet transform (DWT) is applied to the current frame and the reference frame before motion estimation between the subframe pyramids [13]. The experiment takes one frame of the sequence foreman as the current frame and its previous frame as the reference. Multiresolution motion estimation is performed on the decomposed subframe pairs using a 2-pixel full search at the lowest resolution level and 1-pixel refinements. Minimum absolute difference (MAD) is used as the matching criterion at the lowest resolution level, and again for the reinitialized current blocks when no best matching block was found at the previous level. The performances of wavelet-domain periodic-extended motion estimation (PME) and symmetric-extended motion estimation (SME) are compared in Figure 10 by means of motion vector fields and reconstruction-quality evaluations. The motion vector of an inter-mode compressed block is drawn if a best matching block is found; otherwise a square marks an intra-mode compressed block. Since the SME method preprocesses away the discontinuity boundary artifacts on the decomposed subframes, better matching blocks are obtained at each decomposition level. The current frame is compressed by about 50 times at a bit rate of 0.5 bpp with a reconstruction quality (PSNR) of 32.3 dB, whereas the PME method reaches about 31.6 dB under the same conditions.
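The refinement search used in these experiments can be sketched as follows: a coarse-level vector is doubled to the next resolution and corrected by a small MAD-based local search, i.e. MV_{i-1} = 2 MV_i + ΔMV_{i-1}. The block size, search radius and frame data below are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def mad(block, candidate):
    """Mean absolute difference between two equally sized blocks."""
    return np.mean(np.abs(block.astype(float) - candidate.astype(float)))

def refine_vector(cur, ref, top, left, bs, coarse_mv, radius=1):
    """Refine a doubled coarse motion vector with a +/-radius pixel search."""
    base_dy, base_dx = 2 * coarse_mv[0], 2 * coarse_mv[1]
    block = cur[top:top+bs, left:left+bs]
    best = None
    for ddy in range(-radius, radius + 1):
        for ddx in range(-radius, radius + 1):
            y, x = top + base_dy + ddy, left + base_dx + ddx
            if 0 <= y <= ref.shape[0] - bs and 0 <= x <= ref.shape[1] - bs:
                cost = mad(block, ref[y:y+bs, x:x+bs])
                if best is None or cost < best[0]:
                    best = (cost, (base_dy + ddy, base_dx + ddx))
    return best[1]

# Synthetic subframe pair: the current block is the reference block
# displaced by (2, 2), and the coarse level estimated (1, 1).
ref = np.arange(256, dtype=float).reshape(16, 16)
cur = np.zeros_like(ref)
cur[4:8, 4:8] = ref[6:10, 6:10]
print(refine_vector(cur, ref, 4, 4, 4, coarse_mv=(1, 1)))   # -> (2, 2)
```

Doubling (1, 1) gives (2, 2), and the 1-pixel search confirms it as the zero-cost match, mirroring the hierarchical refinement rule of Section 2.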
Figure 11 shows another reference-current frame pair, taken from the video sequence carphone, in which the 5th frame is the current frame and the 1st frame is used for motion estimation and compensation. A range of compressions at low bit rates on foreman using the PME and SME methods is compared in Figure 12, where the SME method achieves more than 0.5 dB improvement in PSNR over the PME method. The stable improvement in the rate-distortion quality of the reconstructed frames from the SME method is also illustrated for carphone in Figure 13.

7 Conclusions

Circular convolution in the wavelet transform realizes signal filtering without coefficient expansion, but the quality of the filtered signal depends on the padding of the finite-length signal. Matching the symmetric extension of the finite-length signal with the biorthogonal symmetric wavelet filters realizes the symmetric-extended wavelet transform: keeping the first half of the decomposed signal removes the discontinuous boundary artifacts and achieves half-resolution decomposition, even though the signal is expanded before circular convolution. Instead of simply joining both ends of the signal in a periodic extension, using the symmetric-extended wavelet transform in video compression removes the discontinuities on the convolved boundary blocks of the decomposed subframes. Preserving the original signal characteristics yields more accurate motion estimation and lower-entropy compensated blocks as the current frame is convolved with the wavelet filter and the resolution decreases. The method achieves better performance than periodic-extended circular convolution in hierarchical motion estimation and video compression, as shown by the experimental results of this work.

Figure 12. Wavelet-based video compression on "foreman" using PME and SME methods
Figure 13. Wavelet-based video compression on "carphone" using PME and SME methods

References:
[1] Pankaj N. Topiwala, Wavelet Image and Video Compression, Kluwer Academic Publishers, 1998.
[2] El-Sayed Youssef, Ehab Sabry and Mohamed Khedr, "Hybrid Adaptive Motion Search with Wavelet/Wavelet Packet Based Motion Residual Video Coding," Seventeenth National Radio Science Conference, Minufiya University, Egypt, Feb. 22-24, 2000, pp. C2 1-11.
[3] G. Van der Auwera, A. Munteanu, G. Lafruit and J. Cornelis, "Video Coding Based on Motion Estimation in the Wavelet Detail Images," ICASSP '98, pp. 2801-2804.
[4] Seongman Kim and Seunghyeon Rhee, "Interframe Coding Using Two-Stage Variable Block-Size Multiresolution Motion Estimation and Wavelet Decomposition," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 8, No. 4, Aug. 1998.
[5] Julian Magarey and Nick Kingsbury, "Motion Estimation Using Complex Wavelets," IEEE International Conference on Acoustics, Speech and Signal Processing, Vol. 4, pp. 2371-2374, 1996.
[6] Radu Zaciu, Claudiu Lamba, Constantin Burlacu and Gabriel Nicula, "Motion Estimation and Motion Compensation Using an Overcomplete Discrete Wavelet Transform," ICIP '96, pp. 973-976.
[7] Michael B. Martin and Amy E. Bell, "New Image Compression Techniques Using Multiwavelets and Multiwavelet Packets," IEEE Transactions on Image Processing, Vol. 10, No. 4, Apr. 2001.
[8] Athanassios Skodras, Charilaos Christopoulos and Touradj Ebrahimi, "The JPEG 2000 Still Image Compression Standard," IEEE Signal Processing Magazine, pp. 36-58, Sep. 2001.
[9] Charilaos Christopoulos, Athanassios Skodras and Touradj Ebrahimi, "The JPEG2000 Still Image Coding System: An Overview," IEEE Transactions on Consumer Electronics, Vol. 46, No. 4, pp. 1103-1127, Nov. 2000.
[10] A. N. Skodras, C. A. Christopoulos and T. Ebrahimi, "JPEG2000: The Upcoming Still Image Compression Standard," Proceedings of the 11th Portuguese Conference on Pattern Recognition, Porto, Portugal, May 11-12, 2000.
[11] Gilbert Strang and Truong Nguyen, Wavelets and Filter Banks, Wellesley-Cambridge Press, 1996.
[12] Bryan E. Usevitch, "A Tutorial on Modern Lossy Wavelet Image Compression: Foundations of JPEG 2000," IEEE Signal Processing Magazine, Sep. 2001.
[13] Ya-Qin Zhang and Sohail Zafar, "Motion-Compensated Wavelet Transform Coding for Color Video Compression," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 2, No. 3, Sep. 1992.

Acknowledgments
The authors gratefully acknowledge the support from the National Science Foundation under grants EIA-9906600 and HRD-0317692 and from the Office of Naval Research under grant N00014-99-1-0952.