Compression of Stereo Images using a Huffman-Zip Scheme

John Hamann, Vickey Yeh
Department of Electrical Engineering, Stanford University, Stanford, CA 94304
jhamann@stanford.edu, vickey@stanford.edu

Abstract - We designed and implemented an algorithm for compression of stereo images. Our algorithm divides an image into blocks and performs a discrete cosine transform on each block. A variable step size is used for uniform quantization, allowing control over quality and bit rate. The DC coefficients are stored as fixed-length integers. The AC coefficients are Huffman encoded into a binary stream which is losslessly compressed using zip. We also implement a motion compensation algorithm under this encoding that exploits the similarity between the left and right portions of a stereo image, and we develop a simple method that uses the similarity between the Cb and Cr components for slight gains in performance.

Keywords - color compression, compression, still image compression, integer-pel motion compensation, zip

I. INTRODUCTION

With the demand for 3D video rising, an efficient stereo compression scheme will play a major role in any practical 3D system. Fortunately, the high correlation between stereo image pairs makes more efficient compression possible; direct application of an existing single-image compression scheme does not take advantage of this correlation. In this paper, we examine a different form of discrete cosine transform compression that uses zip to efficiently compress Huffman encoded binary streams. We also demonstrate that motion compensation can function under such a scheme and efficiently compress image pairs. Color compression provides another method for additional gains, but the amount of reduction is limited by the nature of the color components.

II. PRIOR WORK

In the JPEG standard, an image is divided into blocks which are transformed using the discrete cosine transform (DCT). The transform generates one DC coefficient corresponding to the block mean and many AC coefficients corresponding to the frequency content. Each resulting coefficient is quantized based on its frequency. The quantized AC coefficients are run-length encoded in a zigzag fashion to compress the large number of zero coefficients, and the run-length code is then encoded using a default Huffman table or an image-optimized table [1]. The JPEG standard achieves a high compression ratio with acceptable peak signal-to-noise ratio (PSNR).

Huffman coding is an integral component of our project. It generates an optimal binary stream for the AC coefficients which can later be compressed by the zip algorithm [2].

The zip algorithm does most of the heavy lifting in this approach. Zip is based on the DEFLATE algorithm, which combines Huffman coding and LZ77. LZ77 provides a universal encoding scheme that achieves the entropy rate but may need a large number of bits to converge [3]. By encoding our coefficients in a consistent fashion, we give zip sufficient data to compress efficiently.

III. ALGORITHM DESIGN

A. Single Image Algorithm

Our method works similarly to the JPEG standard. The image is converted from RGB color space to YCbCr color space, and the Cb and Cr components are downsampled to one quarter of their original size. Each component is encoded separately. To compress a single image, we first split the image into 10 by 10 blocks and apply a two-dimensional discrete cosine transform to each block, transforming the data into the spatial frequency domain. A minimal MATLAB sketch of this front end follows; the input file name and the JPEG-style level shift by 128 are illustrative assumptions, not details fixed by our description.

    % Sketch of the single-image front end (assumes the Image Processing
    % Toolbox). The file name and the level shift by 128 are assumptions.
    img = imread('left.png');                        % hypothetical input file
    ycc = rgb2ycbcr(img);
    Y   = double(ycc(:,:,1)) - 128;                  % luminance, level shifted
    Cb  = double(imresize(ycc(:,:,2), 0.5)) - 128;   % chroma at 1/4 original size
    Cr  = double(imresize(ycc(:,:,3), 0.5)) - 128;
    Q   = 16;                                        % default quantization level
    % Blockwise 2-D DCT followed by a uniform mid-tread quantizer:
    C   = blockproc(Y, [10 10], @(s) round(dct2(s.data) / Q));
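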
In this domain, we take advantage of the low amount of high-frequency content in natural images and the low sensitivity of the human eye to high-frequency content. The original image uses 8 bits per pixel; after the DCT, more than 8 bits per pixel would be needed to store the image losslessly. However, quantization of the DCT coefficients significantly reduces the number of bits per pixel without significantly degrading the image. A uniform mid-tread quantizer is applied to the DCT coefficients, and the quantization level can be varied per image to achieve the desired distortion and bit rate. The DC components are encoded as fixed-length integers while the AC components are encoded separately. An optimized Huffman table is generated for the AC coefficients. Each table is built from all 99 AC coefficient positions; using one table for all coefficients generates a bit stream that is more intelligible to zip, whereas using a different table for each coefficient position would make it much more difficult for zip to converge to an optimal encoding. With a high quantization level, the DCT output contains a large number of zero values. Rather than using a run-length encoding scheme to compress these zeros, our method defers their compression until after Huffman coding: zip is very efficient at recognizing and compressing a long stream of zeros, and it may also provide some additional compression by recognizing and compressing any other patterns that arise. A sketch of this entropy stage appears below.
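The following sketch shows the idea, assuming the Communications Toolbox and a column vector ac holding the quantized AC coefficients of one component gathered in a consistent block order; both the variable name and the file names are illustrative.

    % Sketch of the entropy stage (assumes the Communications Toolbox).
    % ac is a column vector of quantized AC coefficients from all blocks.
    symbols = unique(ac);                        % one table for all 99 positions
    counts  = histc(ac, symbols);                % empirical symbol frequencies
    dict    = huffmandict(symbols, counts / sum(counts));
    bits    = huffmanenco(ac, dict);             % one consistent binary stream
    % The packed streams are written to a single file, then compressed with
    % MATLAB's built-in zip, which absorbs the long runs of zero codewords:
    % zip('encoded.zip', 'encoded.bin');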

Each component is encoded in the same fashion, with similar fields grouped together. A few parameters are transmitted first, such as the block size, the quantization level, and a table of the various array lengths. This is followed by the DC values. The Huffman table values are stored as singles, which allows for a more robust design. The Huffman table codes are compressed into a form that has double the word length but is self-terminating ("1101" becomes "10100011"): odd bits encode the codeword, while even bits allow for easy determination of the codeword length. The Huffman encoded AC coefficient binary streams are then written to the file. The last step is to zip the file using MATLAB's built-in zip function.

Decoding follows the inverse of the above. The file is unzipped and each component is read in. The Huffman tables are unpacked and then used to decode the AC coefficients. The DC and AC coefficients are transformed back using the inverse discrete cosine transform and the quantization level. The Cb and Cr components are upscaled to full resolution and the entire image is converted back to RGB color space.

B. Stereo Compression with Motion Compensation

The correlation between the left and right images suggests that compression can be improved through a joint scheme. We implemented a motion compensation algorithm that exploits this dependency to improve the compression ratio. Motion compensation describes a current frame as a set of blocks from a reference frame that have been shifted to new positions; when the picture can be accurately synthesized from the reference frame, compression efficiency improves.

The left picture is encoded and then decoded using the method above. The decoded left image serves as the reference, rather than the left image itself, since only the decoded image is available to the receiver. The right image is split into 30 by 30 blocks, and we exhaustively search over a space from -127 to 127 in the x-direction and -15 to 15 in the y-direction. We originally chose a smaller search range in the y-direction but were informed that some of the images had vertical disparities clustered around 10 pixels; for testing purposes an even larger search range might be included. The 30 by 30 block size keeps computation costs down, keeps the number of motion vectors small, and evenly divides the test images. We did not have time to search over all possible block sizes to find the optimal size.

We stored the x and y motion vector components as 8-bit and 5-bit integers respectively, so 13 bits store the motion vector for a block of 900 pixels. We found that entropy coding would not significantly decrease the length of the motion vectors, while the tables would be a significant cost: with 30 by 30 blocks the cost of each motion vector is amortized over 900 pixels, and additional compression of the motion vectors provides little gain.

Lagrangian mode decision is the standard method for deciding which mode or motion vector to choose; it minimizes a combined rate and distortion cost. We choose the displacement vector (d_i, d_j) such that

    (d_i, d_j) = argmin { D_i,j + λ R_i,j }

where D_i,j is the distortion of the current block after subtracting the reference block at (d_i, d_j), and R_i,j is the number of bits needed to transmit the displacement vector. Our algorithm has a fixed motion-vector rate, so the motion vector with the lowest distortion is chosen, as sketched below.
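In the sketch, SAD is an assumed distortion measure (the text does not fix one); ref is the decoded left luminance, blk the current 30 by 30 right-image block with top-left corner (r, c), all in double precision.

    % Exhaustive integer-pel search for one 30-by-30 block of the right image.
    % With the motion-vector rate fixed, minimizing distortion alone realizes
    % the Lagrangian decision above; SAD is an assumed distortion measure.
    best = inf;  mv = [0 0];
    for dy = -15:15
        for dx = -127:127
            rr = r + dy;  cc = c + dx;
            if rr < 1 || cc < 1 || rr+29 > size(ref,1) || cc+29 > size(ref,2)
                continue;                        % candidate falls outside the frame
            end
            sad = sum(sum(abs(blk - ref(rr:rr+29, cc:cc+29))));
            if sad < best, best = sad; mv = [dx dy]; end
        end
    end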
We subtract the reference block specified by the displacement vector from the current block. The residual block is then split into nine 10 by 10 blocks which are transformed using the DCT and quantized. The zip algorithm needs its binary streams to be in a similar format for maximum effectiveness, so a consistent scheme is used across all components. We did try directly transforming the 30 by 30 block, which resulted in a smaller uncompressed file but a larger compressed file than our default scheme with no motion compensation.

With the DCT coefficients of the residual known, a second decision is made between coding the nine 10 by 10 residual blocks or coding the current block directly as nine 10 by 10 intraframe blocks. For this decision we assume the distortion is roughly similar for both options and use an estimate of rate as the sole criterion:

    R_1 = Σ |AC coefficients of intraframe blocks|    (2)
    R_2 = Σ |AC coefficients of residual blocks|      (3)

This assumes the cost of a coefficient grows linearly with its magnitude, which may be a decent approximation. All nine 10 by 10 blocks are coded with the same method. If residual mode is selected, the motion vector is the one previously calculated; if intraframe mode is selected, the motion vector becomes (-128, -16), which is outside the search range but within the range of the integer values. The DCT coefficients are then encoded using our standard algorithm as defined previously.

For this project we only compressed the luminance of the right image based on the left. This technique could be extended to Cb and Cr, but those components are already small and the additional gains would likely be minor. Decoding involves generating the reference image, generating the transmitted image, determining the mode, finding the reference block, and adding it to the transmitted block.

C. Color Compression

The correlation between the different color components suggests that there are gains to be made. Here we propose two algorithms that take advantage of inter-color correlation.

1) Color Compression using Sum and Difference

The Cb and Cr components appear to have significant similarities: when one is flat the other is flat, and when one shifts the other shifts. We propose a three-mode compression of the Cb and Cr components, sketched after the list below.

    Mode 0: Encode Cr, Cb
    Mode 1: Encode Cr, Cb + Cr
    Mode 2: Encode Cr, Cb - Cr
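A sketch of the candidate construction and the mode choice, using the AC-magnitude rate estimate described next (Cb and Cr are the downsampled chroma planes in double precision; the 10 by 10 blocking and quantization level Q follow the standard algorithm):

    % Build the three candidate Cb-channel signals and pick the mode whose
    % quantized AC coefficients have the smallest magnitude sum.
    cand = {Cb, Cb + Cr, Cb - Cr};               % modes 0, 1, 2
    cost = zeros(1, 3);
    for m = 1:3
        A = blockproc(cand{m}, [10 10], @(s) round(dct2(s.data) / Q));
        A(1:10:end, 1:10:end) = 0;               % zero the DC terms; cost uses AC only
        cost(m) = sum(abs(A(:)));
    end
    [~, best] = min(cost);  mode = best - 1;     % selected mode: 0, 1, or 2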

The first step is to encode Cr. When encoding Cb, the program can choose between the modes. As with motion compensation, the decision is made by summing the absolute values of the quantized AC coefficients, and the image resulting from the chosen mode is then compressed using our standard algorithm. The idea is that for certain sections of an image the color is such that when Cr increases, Cb decreases by roughly the same amount, so the sum stays roughly constant; this corresponds to colors such as blue and red. For other colors, such as green, when Cr decreases, Cb decreases by a similar amount, so the difference of the components stays relatively constant. For uncorrelated colors such as gray and brown, encoding the components separately gives the best results by not increasing the range of values.

2) Color Estimation based on Luminance

In this algorithm, we split the image into blocks, each with its own Y, Cb, and Cr channels. For each block, we first downsample the Y channel so that the dimensions of the three channels match. We then estimate the Cb and Cr channels by fitting variables to a linear equation:

    Cb = a_cb · Y + b_cb    (4)
    Cr = a_cr · Y + b_cr    (5)

We then choose between the intraframe mode and the linear mode using a Lagrangian mode decision. If the color mode is chosen, the residual and the four coefficients used to estimate Cb and Cr are stored.

D. Rate Control

Rate control allows selection of a quantization step size that achieves a desired rate. We implemented a rate control algorithm based on an approach from a 2004 paper [4]. The quantization level Q is updated using

    Q = Q_previous · (bpp_desired / bpp_previous)^1.5    (6)

We use the resulting bits per pixel to estimate the value of Q for the next iteration and repeat until the bits per pixel approaches the desired bits per pixel. A sketch of the iteration follows.
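In the sketch, encodeAtQ is a hypothetical stand-in for a full encode at step size Q. Note one assumption: for a quantization step size, overshooting the target rate must coarsen Q, so the sketch uses the ratio achieved/desired, the reciprocal of Eq. (6) as printed.

    % Rate-control iteration. encodeAtQ is a hypothetical rate model standing
    % in for a full encode; the update ratio is (achieved / desired) so that
    % producing too many bits coarsens the step size.
    encodeAtQ = @(Q) 16 ./ Q;                    % illustrative model: bpp ~ 1/Q
    Q = 16;  target = 1.0;                       % assumed start and target bpp
    for iter = 1:10
        bpp = encodeAtQ(Q);
        if abs(bpp - target) < 0.01, break; end
        Q = Q * (bpp / target)^1.5;              % Eq. (6)-style update toward the target
    end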
IV. EXPERIMENTAL RESULTS

A. Single Image Algorithm

Our single image algorithm performed very well. Table 1 shows the performance of our algorithm, with and without motion compensation, versus JPEG. Our algorithms were run with a default quantization level of 16, and default JPEG was run with a quality factor of 80. Table 1 shows that our algorithm significantly outperformed JPEG in bits per pixel with roughly similar PSNR values. Figure 1 shows our algorithm significantly outperforming default JPEG for Image 1; the improvement is about 10 percent. Figure 4 shows one of the defects in our approach: our algorithm can break down for less complex images. This is due largely to how the DC values are stored, and we will suggest improvements in the next section.

Table 1: Performance on Training and Test Images

          Default JPEG      Our Method        Motion Estimation
Image     BPP     PSNR      BPP     PSNR      BPP     PSNR
1         2.015   37.061    1.754   37.656    1.708   37.572
2         1.298   41.476    1.014   39.973    0.939   39.768
3         0.627   47.335    0.473   44.480    0.437   44.156
4         1.203   42.243    0.895   40.384    0.798   40.108
5         1.226   40.778    1.008   39.983    0.903   39.730
6         1.628   39.877    1.327   38.677    1.215   38.467
7         2.009   38.099    1.721   37.816    1.633   37.650
8         1.910   38.961    1.659   38.391    1.464   38.206
9         1.106   42.358    0.839   40.767    0.740   40.578
10        2.017   38.737    1.733   38.050    1.558   37.862
11        1.428   40.499    1.087   38.861    0.963   38.703
12        1.042   42.745    0.767   40.891    0.684   40.634
13        2.239   37.475    1.956   37.664    1.769   37.497
14        2.431   37.617    2.158   37.310    2.107   37.148
t01       0.587   47.289    0.443   44.738    0.385   44.501
t02       1.689   39.218    1.437   38.714    1.275   38.528
t03       1.346   40.954    1.010   39.575    0.930   39.271
t04       1.931   38.499    1.711   38.208    1.551   38.034
t05       1.638   38.557    1.392   39.094    1.366   38.984
t06       1.005   42.170    0.719   40.782    0.671   40.592
t07       1.965   38.434    1.690   37.650    1.584   37.556

Table 2: Zip Compression for Image 4

Quantization Level    Uncompressed (KB)    Zipped (KB)
5                     345                  270
7                     307                  213
9                     285                  176
11                    270                  152
13                    260                  134
15                    252                  119
17                    246                  108
19                    242                  99
21                    238                  92
23                    235                  85
25                    232                  79

Table 2 shows the size of the uncompressed and the compressed files for Image 4. The uncompressed files require at least one bit per pixel and therefore approach a fundamental limit as the quantization level increases. The fundamental limit can be calculated by assuming each AC coefficient uses one bit and each DC coefficient uses 11 bits:

    S = 540*960 * (2*1 + 4*(1/4)) * ((99+11)/100) / 8 / 1024 ≈ 208 KB

The actual limit is slightly higher due to the storage of the Huffman tables and a few parameters. The zip compression provides most of the heavy lifting when sub-bit-per-pixel rates are needed.
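The arithmetic can be checked directly (a worked restatement of the calculation above, not new data):

    % Worked check of the size floor: two 540-by-960 luma planes plus four
    % quarter-size chroma planes, one bit per AC coefficient, and an 11-bit
    % DC term replacing one coefficient in every 10-by-10 block.
    S = 540*960 * (2*1 + 4*(1/4)) * ((99*1 + 11)/100) / 8 / 1024   % ~208 KB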

B. Stereo Compression with Motion Compensation

Motion compensation proved very beneficial for compression. Table 1 shows that for most images motion compensation provided about a 10% reduction in overall file size; since only the right image is motion compensated, this means the motion compensated image itself was reduced by over 20%, which is a very reasonable number. Table 1 also shows that for some high-detail images (such as training image 1 and test image 5) the improvement from motion compensation was smaller. Figure 2 shows that the benefits of motion compensation are consistent over the entire range of compression rates; for images 4 and 10, motion compensation provides a reduction in bits per pixel at every rate.

C. Color Compression

The color compression using sums and differences provided a very slight benefit for compressing the color components. Figure 3 shows that compressing image 1 using the three modes provided about a 1% improvement in performance. The improvement is slight because the Cb component is downsampled by 4 and is about 4 times less complex than the Y component, so any significant reduction in the Cb component is negated by the overall unimportance of that component.

Figure 4 shows how the mode selection occurs for training image 5. For the very green portions of the image, mode 2 is selected; for other portions of the image, mode 1 is selected. Mode 0 is selected for most of the image, implying that over most of the image the Cb and Cr components do not give much information about each other.

(Figure 4: Mode Selection)

The color compression based on luminance is not feasible. When the block size is small, it is too expensive to send the coefficients; when the block size is large, the residual is too large to transmit. Therefore the second scheme was not a practical solution.

D. Rate Control

We chose rates that would give us distortion close to the PSNR threshold. Rate control by adjusting the step size allowed us to approach the desired rate and distortion efficiently. Table 3 shows that rate control can give a significant reduction in file size by using all of the available distortion budget.

Table 3: Rate Control Performance

          Our Method          Rate Control
Image     Size (KB)   PSNR    Size (KB)   PSNR
t01       62.5        44.955  26.59       39.095
t02       219.2       39.302  171.8       38.279
t03       162.2       39.791  97.24       37.996
t04       260.6       38.857  201.0       37.499
t05       210.4       39.695  143.7       37.076
t06       114.7       41.160  83.42       40.432
t07       239.8       38.396  196.1       37.010

V. IMPROVEMENTS

There are many possible improvements that we wished to try but did not have time for. The most important issue to fix would be the coding of the DC coefficients. For our project we stored the DC coefficients as 11-bit integers regardless of the other parameters, but the actual size needed can be determined from the parameters. Assuming the blocks used in the DCT are square, the DC values range from 0 to

    Max DC value = 255 · (block length) / (quantization level)

For a residual, the range runs from the negative of this maximum to the maximum. For our default configuration of 10 by 10 blocks and a quantization level of 16, the maximum DC value is 160, which requires 8- or 9-bit integers. One improvement would be to scale the number of bits to the appropriate level. The other fix would be to Huffman code the differences between the DC values from block to block, which should reduce the number of bits needed for encoding the DC values, especially at high quantization levels.

JPEG does a few things in its standard that may or may not prove beneficial to our algorithm. Our algorithm does not collapse the DCT coefficients in the zigzag fashion. JPEG also uses quantization masks, which could improve performance or could cause problems for zip.

There are always improvements that can be made to the motion estimator. We could implement a faster search, sub-pel motion compensation, and spatial filters to reduce high-frequency components. One improvement that may or may not work is to allow each of the nine 10 by 10 blocks in a 30 by 30 block, after the large displacement, to search its local neighborhood for a slightly better match. Each 10 by 10 block could also have the option of choosing intraframe or residual mode individually rather than as a group.

The color compression and motion compensation algorithms use a very simple method for estimating the rate required for their AC coefficients; there is likely a more accurate way to estimate rate than these methods. We could also extend motion compensation to Cb and Cr in a variety of ways, though the gains would likely be small.

VI. CONCLUSION

Our project demonstrated a discrete cosine transform based compression scheme that uses Huffman coding followed by zip compression to outperform default JPEG compression, with an improvement of about 10% over default JPEG. The scheme requires a consistent Huffman coding scheme that allows zip to treat the binary stream as a semi-stationary source; if the properties of the source vary, performance degrades significantly. We implemented an integer-pel motion compensation scheme that achieved a further 10% compression over our standard algorithm and worked under our Huffman-Zip scheme. We also tested several color compression schemes, which provided very slight gains because the color components make up only a small portion of the image.

VII. ACKNOWLEDGMENTS

We would like to thank Professor Bernd Girod, Professor Thomas Wiegand, Mina Makar, and Haricharan Lakshman for teaching EE398A, providing additional help after class, and mentoring us during the course of this project.
REFERENCES

[1] Information technology - Digital compression and coding of continuous-tone still images - Requirements and guidelines, ISO/IEC IS 10918-1, ITU-T Recommendation T.81, Sept. 1992.
[2] D. A. Huffman, "A Method for the Construction of Minimum-Redundancy Codes," Proceedings of the I.R.E., pp. 1098-1102, Sept. 1952.
[3] J. Ziv and A. Lempel, "A Universal Algorithm for Sequential Data Compression," IEEE Transactions on Information Theory, vol. 23, no. 3, pp. 337-343, May 1977.
[4] A. Bruna, S. Smith, F. Vella, and F. Naccari, "JPEG Rate Control Algorithm for Multimedia," 2004 IEEE International Symposium on Consumer Electronics, pp. 114-117, Sept. 1-3, 2004.

APPENDIX

John Hamann worked on the DCT coding scheme, motion compensation, and color compression. Vickey Yeh worked on the DWT coding scheme, rate control, motion compensation, and color compression.