JPEG IMAGE CODING WITH ADAPTIVE QUANTIZATION


Julio Pons 1, Miguel Mateo 1, Josep Prades 2, Román Garcia 1
Universidad Politécnica de Valencia, Spain
1 {jpons,mimateo,roman}@disca.upv.es, 2 jprades@dcom.upv.es

Abstract

JPEG is one of the most widely used image coding methods in the world. It gives very good results at moderate compression (bit rates > 0.5 bpp) but suffers from blocking artefacts at low bit rates. To reduce the visibility of this artefact, in this paper we propose an adaptive quantization algorithm for JPEG. Our algorithm produces images with better objective and subjective quality at low bit rates, at the expense of a small increase in computational cost.

Key Words: Image compression, JPEG, adaptive quantization.

1. Introduction

The JPEG standard is one of the most popular image compression algorithms [1][2][3]. In its sequential mode (Figure 1), JPEG first splits the image into 8x8 non-overlapping pixel blocks. The discrete cosine transform (DCT) of each block is then computed, and the resulting coefficients are quantized with a scalar quantizer. Entropy coding is finally applied to the quantized coefficients.

Figure 1: Standard JPEG coder (image -> 8x8 blocks -> FDCT -> quantization -> entropy coding -> compressed image).

The only parameter available to increase compression is the quantization array (the array of quantization factors) that is applied to all blocks of coefficients. At low bit rates, two main artefacts are introduced: blocking and blurring. The blocking effect refers to the discontinuities between adjacent blocks and is due to the independent encoding of each block. Blurring occurs because many high-frequency coefficients are quantized to zero. The regular spatial structure of the blocking effect makes this artefact more annoying than blurring, especially in smooth areas of the image. Some postprocessing techniques have been proposed to reduce the visibility of the blocking effect [6][7].
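The sequential-mode pipeline described above can be sketched as follows. This is an illustrative outline, not the paper's implementation: the function name and the toy quantization table are ours, and scipy's `dctn` (with orthonormal scaling, which matches the JPEG FDCT definition) stands in for a production DCT.

```python
import numpy as np
from scipy.fft import dctn

def encode_block(block, Q):
    """One sequential-JPEG step for a single 8x8 block:
    level shift, 2-D forward DCT, then scalar quantization by table Q."""
    shifted = block.astype(float) - 128   # JPEG level shift
    F = dctn(shifted, norm='ortho')       # forward 8x8 DCT
    return np.round(F / Q).astype(int)    # scalar quantization

# A flat (constant) block keeps only its DC coefficient after the DCT.
flat = np.full((8, 8), 200)
Q = np.full((8, 8), 16)                   # toy quantization table
coeffs = encode_block(flat, Q)
```

As the constant-block example illustrates, smooth blocks concentrate their energy in a few low-frequency coefficients, which is exactly the situation the adaptive algorithm below exploits.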
Another solution consists of using other coding schemes based on the Lapped Orthogonal Transform [8] or the Discrete Wavelet Transform [9]. In previous works [5][6] we developed Scaled JPEG, a method that improves the quality/compression trade-off by changing the block size. However, the result was not fully compatible with standard JPEG decoders, so the resulting DCT is pruned to an 8x8 array of coefficients in order to increase compatibility.

In this paper, we propose a method to reduce the blocking effect in JPEG-encoded images by slightly modifying the quantization algorithm of the JPEG standard. Our quantization algorithm reduces the blocking effect at the expense of increased blurring, providing in this way images with better objective and subjective quality than those obtained with standard JPEG at the same rate. Our algorithm only slightly increases the computational cost of the quantization stage of a standard JPEG encoder. Although there are extensions to JPEG [11] that allow variable quantization by scaling the quantization array by a different factor for each block of coefficients, as MPEG does, most commercial JPEG decoders do not support this extension. Our algorithm uses a different approach that is compatible with baseline JPEG, so any JPEG decoder can decode the compressed images generated by our algorithm.

2. Quantization With Threshold

Let F(u,v) be the DCT coefficients of an 8x8 block of pixels. Then the quantized DCT coefficients F_Q(u,v) are given by

    F_Q(u,v) = round( F(u,v) / Q(u,v) ),   0 <= u,v <= 7        (1)

where Q(u,v) is the quantizer step size for the (u,v) coefficient. The quantization step sizes are obtained by multiplying a quantization table q(u,v) by a factor α:

    Q(u,v) = q(u,v) · α        (2)

By varying the parameter α, the rate and the distortion can be changed. However, once α has been set, the same quantization step sizes Q(u,v) are used in all blocks of the image. Finally, the integer sequence is coded with an entropy-based method in order to reduce its size. The larger the number of zeros after quantization, the greater the compression factor achieved by the entropy coder. Therefore, the value of the parameter α determines the compression level, but also the quality.

In areas of the image with low spatial variation, only the lowest-frequency DCT coefficients have significant amplitudes, and the blocking effect is most visible in these areas. To decrease the visibility of this effect, we present an adaptive quantization algorithm for JPEG in the following.

Our quantization algorithm treats the lowest-frequency DCT coefficients with a different strategy. Specifically, the coefficients F(0,0), F(0,1) and F(1,0) (F0, F1 and F2 in the JPEG zig-zag scanning order) are quantized with the standard JPEG quantization algorithm. The remaining coefficients, F3 to F63 in zig-zag order, are quantized by cancelling those that are close enough to zero, that is, those below a given threshold th in (3). Notice that if th = 0.5, standard JPEG quantization is performed. The threshold th in (3) allows us to vary the number of zeroes generated by quantization: the larger the threshold, the more coefficients are set to zero, and consequently the larger both the compression gain obtained and the distortion introduced.

    S_q(u,v) = round( S(u,v) / Q(u,v) )   if |S(u,v)| / Q(u,v) > th
    S_q(u,v) = 0                          if |S(u,v)| / Q(u,v) <= th        (3)

This method has two main advantages: it is very easy and fast to implement, and the resulting image can be decompressed with any standard JPEG decoder. Since the threshold value is only used to cancel coefficients, it is not needed in the reconstruction process, so it is not stored with the compressed image.
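A minimal sketch of the quantizer in (1)-(3), assuming the block's coefficients have already been arranged as a length-64 vector in zig-zag order (the function name is ours, and numpy's round-half-to-even stands in for the encoder's rounding rule):

```python
import numpy as np

def threshold_quantize(S, Q, th):
    """Quantize a zig-zag-ordered vector of 64 DCT coefficients.
    F0-F2 follow the standard rounding rule (1); F3-F63 follow (3):
    they are kept only when |S/Q| exceeds the threshold th."""
    ratio = S / Q
    Sq = np.round(ratio).astype(int)        # standard rule (1)
    Sq[3:][np.abs(ratio[3:]) <= th] = 0     # cancellation rule (3)
    return Sq
```

With th = 0.5 the cancellation rule coincides with rounding, so the output matches standard JPEG quantization, as the text notes; raising th above 0.5 only adds zeroes, never changes a surviving coefficient, which is why any baseline decoder can reconstruct the result.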
This characteristic could be used to define a different threshold for each image block, which in fact means that we can achieve different compression ratios and qualities for each block. This differs from the method proposed in the adaptive JPEG extension ISO/IEC DIS 10918 [10], which needs to store additional data in the image in order to apply a different quantization to each block.

3. Adaptive Quantization

To achieve higher compression gains in the encoding of coefficients F3-F63, we propose to change th in (3) according to the type of block being quantized. Depending on the number of zeroes (the z value) generated in each block by standard quantization, our algorithm classifies the blocks into four classes. Table 1 shows the four block classes and the corresponding threshold values used in this work.

    Block class    z margin        threshold
    1              z < 48          1
    2              49 <= z < 56    1.5
    3              57 <= z < 60    2.5
    4              z >= 60         1

Table 1: Classification of blocks according to their z values, and the corresponding threshold values.

Figure 2 presents the block classification for the SAILBOAT image. The resulting JPEG image has a bit rate of 0.64 bits per pixel (bpp) at a quality level of 25.

Figure 2: SAILBOAT block classification with default values at a quality factor of 25. A white block represents a block belonging to: (a) zone 1, (b) zone 2, (c) zone 3 and (d) zone 4.

Although we have defined four zones, the user can easily reduce the number of block classes. If two of the classification values are the same, i.e. z_i equal to z_{i+1}, the number of classes is reduced. The same effect is obtained if two adjacent zones have the same threshold value. A threshold of 0.5 leaves the coefficients unmodified, i.e. the block remains as in standard JPEG. One advantage of this classification method is its low cost: the number of zeroes can be computed at the same time as the integer conversion. The value of th for each class in Table 1 was chosen by weighing two considerations: the bit savings achieved and the visibility of the distortion introduced. For instance, for class 4 (nearly constant blocks) a low threshold is chosen because a higher threshold yields no significant bit savings while the distortion it introduces can be very annoying. Tests performed with the values of Table 1 have given good results for a wide variety of test images. All the threshold values in Table 1 are higher than 0.5.
Consequently, in the quantization of coefficients F3-F63 our algorithm generates more zeroes than the standard JPEG quantization algorithm operating with the same quantization step sizes q(u,v), and therefore spends fewer bits encoding F3-F63. If we encode an image at a given rate using our algorithm and using standard quantization, our algorithm spends more bits on the coding of coefficients F0-F2 and fewer bits on the others. As a result, less blocking effect is introduced in low-spatial-activity blocks. Our algorithm introduces more distortion than standard quantization in the remaining blocks, so their blurring and blocking can increase. Since, from a perceptual point of view, it is more important to reduce distortion in low-spatial-activity blocks, our algorithm provides better subjective results than standard quantization. The following section shows the improvements obtained when an objective measure (PSNR) is considered.

4. Experimental Results

Table 2 shows the PSNR values obtained when JPEG-encoding several typical colour images at 0.25 and 0.5 bits per pixel (bpp) using our quantization algorithm and the standard JPEG quantization algorithm. Our algorithm provides slightly better PSNR values than traditional JPEG quantization at both bit rates. In fact, after computing the PSNR for the seven images at ten different rates between 0.1 bpp and 1 bpp, our algorithm always provided better PSNR results (improvements ranging from 0.02 dB to 0.5 dB). To test our algorithm subjectively, we asked several people to rate the quality of the images generated by our algorithm and by standard JPEG. Our algorithm always obtained better scores at low bit rates (R < 1 bpp), almost identical scores at mid bit rates (1 bpp < R < 2 bpp), and worse scores at high bit rates (R > 2 bpp).
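The objective measure used in this section, PSNR, can be computed as follows for 8-bit images. This is the standard definition (peak value 255), not code from the paper:

```python
import numpy as np

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    diff = original.astype(float) - reconstructed.astype(float)
    mse = np.mean(diff ** 2)            # mean squared error
    return 10 * np.log10(255.0 ** 2 / mse)
```

For example, a uniform error of 16 grey levels gives an MSE of 256 and a PSNR of about 24 dB, in the range of the low-bit-rate values reported in Table 2.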
                 Non-adaptive quantisation    Our algorithm
    Image        0.25 bpp    0.5 bpp          0.25 bpp    0.5 bpp
    Peppers      25.80       28.12            26.13       28.43
    Airplane     26.49       29.97            26.91       30.34
    Tifanny      27.71       29.32            28.31       29.78
    Lena         26.95       29.89            27.20       29.93
    Baboon       19.91       22.04            20.25       22.09
    House        25.17       28.61            25.88       28.93
    Tree         20.62       24.22            21.30       24.58
    Sailboat     23.3        25.8             23.65       26

Table 2: PSNR values (dB) of several images using standard JPEG quantisation and our algorithm, at bit rates of 0.25 bpp and 0.5 bpp.

Figures 3 and 4 compare the PSNR values obtained with standard JPEG and with our method at different bit rates, using the default configuration described in the previous section. These tests show that our method achieves slightly better PSNR for the same bit rate in both images, and the results are similar for the other images tested. The shapes of the curves for both methods are very similar, but adaptive quantization is always better in the default configuration.

Figure 3: SAILBOAT, PSNR vs. bpp.

Figure 4: AIRPC.BMP, PSNR vs. bpp.

In the subjective quality tests, the human testers (usually other researchers of our department) chose the image produced by our method as the better one whenever the differences between the two images were visible to the human eye. Consider the images in Figure 5. These details of the peppers image were obtained at (close to) 0.2 bpp in order to show the blocking effect of standard JPEG. Standard JPEG was configured with a quality factor of 8, while our method only required a quality factor of 11. With these quality factors the sizes of the resulting files are nearly equal: the difference is about 100 bytes for images of roughly 7 KB. Standard JPEG produces many blocky zones that are much less visible with Adaptive Quantization JPEG. Nevertheless, regions with more detail are slightly blurrier with our method, as can be seen in the upper zone of the details in Figures 5.a and 5.b.

5. Conclusion

We have proposed a low-computational-cost adaptive quantization algorithm that improves the image quality of the JPEG standard at low bit rates while keeping compatibility with baseline JPEG. Standard JPEG increases the compression ratio by increasing the quantization of the whole image. With our method, we can get more compression by setting some coefficients to zero while applying less quantization to other coefficients in the same block, and we can apply a different scheme to each block, based on a simple classification method.

References

[1] Digital Compression and Coding of Continuous-Tone Images (Part 1: Requirements and Guidelines),
ISO/IEC 10918-1, 1992.

Figure 5: Results of encoding the peppers image and decoding it afterwards. (a) Standard JPEG (0.218 bpp); (b) using our algorithm (0.220 bpp).

[2] W. B. Pennebaker, J. L. Mitchell, JPEG: Still Image Data Compression Standard (New York: Van Nostrand Reinhold, 1993).
[3] G. K. Wallace, The JPEG Still Picture Compression Standard, Communications of the ACM, 34(4), 1991, 31-44.
[4] K. R. Rao, P. Yip, Discrete Cosine Transform: Algorithms, Advantages, Applications (New York: Academic Press, 1990).
[5] J. Pons, Una contribución a la optimización de las técnicas de compresión y descompresión de imágenes fotográficas basadas en el estándar JPEG (Ph.D. thesis, Universidad Politécnica de Valencia, 1996).
[6] S. Minami and A. Zakhor, An optimization approach for removing blocking effects in transform coding, IEEE Transactions on Circuits and Systems for Video Technology, 5(4), 1995, 74-82.
[7] G. Lakhani and N. Zhong, Derivation of prediction equations for blocking effect reduction, IEEE Transactions on Circuits and Systems for Video Technology, 9(3), 1999, 415-418.
[8] H. S. Malvar and D. H. Staelin, The LOT: Transform coding without blocking effects, IEEE Transactions on Acoustics, Speech and Signal Processing, 37(4), 1989, 553-559.
[9] M. Vetterli and J. Kovacevic, Wavelets and Subband Coding (Englewood Cliffs, NJ: Prentice Hall, 1995).
[10] R. Rosenholtz and A. B. Watson, Perceptual adaptive JPEG coding, Proc. IEEE International Conference on Image Processing, Lausanne, Switzerland, 1996, Vol. I, 901-904.
[11] Information technology: Digital compression and coding of continuous-tone still images, extensions. CCITT Recommendation T.84, 1996.