Progressive Parsing Transcoding of JPEG Images

Johan Garcia, Anna Brunstrom
Department of Computer Science, Karlstad University, SE-651 88 Karlstad, Sweden
E-mail: johan.garcia, anna.brunstrom@kau.se
This work is supported by Ericsson Infotech.

Abstract

The introduction of new communication networks and devices raises the need for adaptation of Web images. Adaptation is done by proxies that use different transcoding policies and methods. This paper presents progressive parsing, an efficient method for transcoding progressive JPEG images by truncating the datastream. The suggested transcoding method has a lower delay and shows image quality advantages over other suggested approaches. The underlying mechanisms for these advantages are discussed and three possible implementation variants are presented.

1 Introduction

The integration of wireless networks into the Internet infrastructure poses new challenges to the research community. Relative to landline networks, wireless networks are slower due to bandwidth restrictions. The introduction of new terminal types connected over these slow links creates a need to adapt data originally not intended for such an environment. A major part of the data transferred on the Internet is related to Web surfing, and most of the data transferred when browsing the Web is image data. Adaptation of image data is hence a major concern, and a number of proxy systems that perform such adaptation exist, as listed in [8].

Adaptation of image data is done by transcoding the images so that they have a higher compression level. This can be done either by changing the compression algorithm or by using the same algorithm configured to yield higher compression. This paper presents an adaptation method that falls into the latter category. We propose progressive parsing transcoding as an efficient transcoding method for progressively coded JPEG images. It takes progressive JPEG images as input and produces progressive JPEG images with a higher compression level as output. The method requires very few resources at the proxy performing the transcoding, produces a progressive datastream, and can give rate/distortion performance superior to other methods.

The layout of this paper is as follows. Section 2 provides a brief overview of JPEG and describes the principles of progressive parsing. Section 3 discusses image quality improvements. Section 4 elaborates on possible implementation techniques for a progressive parsing transcoder. A summary is provided in Section 5.

2 Progressive Transcoding

JPEG is a widely used image format on the Web. JPEG [6] has several modes, but the two used in practice are baseline sequential encoding and progressive encoding. When used on Web images, sequential encoding provides the user with an image that grows from top to bottom as more data arrives. Progressive encoding instead quickly provides the user with a coarse image that is stepwise refined as more data are received. Progressive images are considered to provide a better user experience, but can create problems when the image is transcoded. A short review of baseline sequential JPEG is first provided to explain why this is the case.

JPEG encodes an image by first doing block preparation, which includes color conversion, downsampling and the splitting of each color component into 8x8 sample point blocks. The next step is a forward DCT performed on each block to obtain 64 DCT coefficients.
Of these coefficients, one is the DC coefficient, which holds the average value of the block, and 63 are AC coefficients, which represent the spatial frequency content of the block. The coefficients are then quantized according to a quantization table. The quantization table can be scaled to provide the desired compression level. After this, run-length and Huffman coding losslessly compress the quantized coefficients. Finally, the data is packetized and headers are inserted according to some interchange format, usually JFIF [7]. Decoding essentially performs the inverse of the above steps in reverse order. Decoding and encoding are schematically illustrated in Figure 1.

Figure 1: JPEG Decoding and Encoding (block preparation, DCT, quantization, run-length coding, Huffman coding and packetization, taking pixels to DCT coefficients to scans)
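To make the block-level steps concrete, the following sketch in Python with NumPy (our illustration, not taken from the paper) level-shifts, transforms and quantizes a single 8x8 block. The quantization table is the example luminance table from the JPEG standard; the scale parameter is a simplified stand-in for the table scaling mentioned above.

    import numpy as np

    # Example luminance quantization table from the JPEG standard (Annex K).
    Q_LUMA = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99]], dtype=np.float64)

    def dct2(block):
        """Orthonormal 8x8 forward 2-D DCT (type II), written out explicitly."""
        n = 8
        k = np.arange(n)
        # 1-D DCT basis: c[u, x] = a(u) * cos((2x + 1) * u * pi / 16)
        c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        c *= np.sqrt(2.0 / n)
        c[0, :] = np.sqrt(1.0 / n)
        return c @ block @ c.T

    def encode_block(block, scale=1.0):
        """Level-shift, transform and quantize one 8x8 block of samples.

        'scale' mimics scaling the quantization table to change the
        compression level, as described in the text."""
        coeffs = dct2(block.astype(np.float64) - 128.0)   # level shift, forward DCT
        return np.round(coeffs / (Q_LUMA * scale)).astype(int)

    # Usage: a flat mid-gray block quantizes to a single non-zero DC coefficient.
    block = np.full((8, 8), 140, dtype=np.uint8)
    print(encode_block(block))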

The above description applies to sequential images, while a progressive image is different in that it is composed of a number of scans. Each scan provides one step of refinement to the image. The progressive mode of JPEG allows two mechanisms for obtaining the progressiveness, namely spectral selection and successive approximation. Spectral selection is performed by sending only a subset of the DCT coefficients in a scan. Successive approximation sends only a few of the most significant bits in one scan, sending more bits in subsequent scans. These mechanisms can be combined, and typical progressive images contain such combined progression sequences. Progressive images can be seen as performing the steps of run-length coding, Huffman coding and packetization in several iterations, one iteration per scan.
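In the datastream, the header of each scan (the SOS segment) records which mechanism the scan uses: the spectral band it covers (Ss, Se) and the successive-approximation bit positions (Ah, Al). The sketch below is our own Python illustration rather than part of the paper; it decodes these four parameters from a raw SOS segment following the JPEG syntax.

    def scan_parameters(sos):
        """Decode the progression parameters of one SOS (start-of-scan) segment.

        'sos' is the raw segment starting at the FF DA marker. Returns
        (Ss, Se, Ah, Al): spectral selection start/end and the successive
        approximation high/low bit positions."""
        ns = sos[4]                       # number of components in this scan
        p = 5 + 2 * ns                    # skip the component selector pairs
        ss, se = sos[p], sos[p + 1]
        ah, al = sos[p + 2] >> 4, sos[p + 2] & 0x0F
        return ss, se, ah, al

    # Example: a DC-only first scan reports Ss = Se = 0, while an AC refinement
    # scan reports Ss >= 1 and an Al value one lower than in the previous scan.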
Images can be transcoded at different places in the decoding sequence, as illustrated by the vertical lines in Figure 1. For sequential input images, transcoding can be performed either by doing a full decoding/encoding cycle or, more efficiently, by performing the requantization in the DCT domain [4]. The following text refers to both these methods as requantization based. In addition to requantization based transcoding, progressive input images can also be transcoded by simply truncating the image datastream at a suitable position and discarding the remaining data. This approach exploits the inherent scalability potential of progressive images. Since a progressive datastream always contains the visually most important information first, images can be transcoded by truncating the image datastream. This of course requires that the application receiving truncated image data is capable of handling it. Our tests have shown that all tested applications, including two major browsers, are capable of correctly displaying truncated progressive images.

The concept of truncating progressive streams is not new. However, no studies of the applicability of truncating progressive JPEG images for Web image transcoding have been found in the literature. On the contrary, the literature suggests that progressive truncation is not sufficiently understood. In [2] it is suggested that progressive JPEG images should be transcoded to sequential JPEG images in order to avoid the processing penalty of Huffman table optimization. Huffman table optimization is always done for progressive encoding but not for sequential encoding, and it provides slightly better rate/distortion performance at the expense of greater processing requirements. Instead of reducing the processing requirements by transcoding to sequential JPEG, a much greater gain can be obtained by using progressive parsing transcoding. This also preserves the rate/distortion advantage, as described in the next section.

Progressive parsing transcoding also has a buffering advantage, since the transcoder can parse an input stream and at the same time output the data to an output stream. Only a small amount of buffering is required to evaluate JPEG scan headers and optionally buffer one scan. This streaming behavior is favorable with regard to both transcoding delay and buffer requirements [5]. Requantization based transcoding of a progressive image requires that the whole image is buffered, regardless of whether the output image is progressive or sequential. The buffering advantage of progressive parsing transcoding is illustrated in Figure 2.

Figure 2: Buffering Advantage (progressive parsing starts sending while still parsing the incoming datastream, whereas recoding to sequential must buffer and process the whole image before sending)
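As a concrete illustration of this parse-and-forward behavior, the sketch below is our own minimal Python illustration (not the authors' transcoder; error handling omitted). It walks the JPEG marker structure, splits a progressive datastream into its scans, and realizes truncation by appending an end-of-image marker after a chosen scan.

    def split_scans(data):
        """Split a JPEG datastream into (preamble, [scan chunks]).

        The preamble is everything before the first SOS marker (FF DA). Each
        scan chunk holds any table segments emitted for that scan, its SOS
        segment and its entropy-coded data, which ends at the next marker
        other than a stuffed byte (FF 00) or a restart marker (FF D0..D7)."""
        assert data[:2] == b'\xff\xd8', 'missing SOI marker'
        pos, chunk_start, preamble, scans = 2, 0, None, []
        while pos < len(data) and data[pos + 1] != 0xD9:          # stop at EOI
            seg_len = int.from_bytes(data[pos + 2:pos + 4], 'big')
            if data[pos + 1] == 0xDA:                             # SOS marker
                if preamble is None:
                    preamble, chunk_start = data[:pos], pos
                p = pos + 2 + seg_len
                while not (data[p] == 0xFF and data[p + 1] != 0x00
                           and not 0xD0 <= data[p + 1] <= 0xD7):
                    p += 1                                        # entropy-coded data
                scans.append(data[chunk_start:p])
                chunk_start = pos = p
            else:                                                 # DQT, DHT, SOF, APPn, ...
                pos += 2 + seg_len
        return preamble, scans

    def truncate_after_scan(data, n):
        """Inter-scan truncation: keep the first n scans, then append EOI."""
        preamble, scans = split_scans(data)
        return preamble + b''.join(scans[:n]) + b'\xff\xd9'

Because each scan chunk can be emitted as soon as its trailing marker is seen, the same loop can forward data to the output stream while the input is still being parsed, which is the streaming behavior described above.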

Figure 3: Quality Illustration. (a) Original progressive image (b) Progressively parsed (c) Transcoded to sequential

3 Quality Improvements

Figure 3 shows a section of the well-known Lena image. Table 1 shows the details for these images, providing the compression ratio in bits per pixel (bpp). The PSNR of the transcoded images relative to the original progressive image is also shown. All images were processed using the IJG (Independent JPEG Group) tools [1] and our own transcoder. The default quantization tables were used for all the images in Figure 3. For images (a) and (b), the standard IJG progression sequence was used, with image (b) truncated at a scan boundary partway through the sequence.

Table 1: Image data

  Image                   bpp     PSNR    Qual. setting
  Original progressive    .8      -       6
  Progressively parsed    .28     .42     6 + skewing
  Sequential transcoded   .296    .6

As the PSNR values suggest, Figure 3 shows that the progressively parsed image has higher visual quality than the sequentially transcoded image; this is especially visible in the chin area. The better quality is even more noticeable in the color reproduction. The quality improvement comes from three factors, one applicable to progressively coded images in general and two specific to progressively parsed images. These factors are discussed in the next three paragraphs.

A given image encoded as a progressive JPEG image typically has slightly higher quality than the same image encoded as sequential JPEG because the Huffman tables in progressive images are always optimized. For sequential images, the Huffman tables proposed in the JPEG standard are typically used, since this saves processing and memory resources by not performing Huffman table optimization. The increased compression performance provided by Huffman optimization is obviously also present in progressively parsed images. When performing progressive parsing transcoding, this performance advantage does not incur any resource usage overhead at the proxy: the Huffman optimization done when the original image was encoded is in effect reused.

Another factor contributing to the quality improvement in progressive parsing transcoding is quantization table skewing. This effect makes the quantization table better suited for highly compressed images and can provide a quality improvement relative to images transcoded by requantization based methods. The effect is a result of the fact that the relative importance of the individual coefficient quantization values changes as scans arrive. In the beginning, when little data is available (i.e., at very high compression levels), the quantization value for the DC coefficient is relatively lower than for the AC coefficients. This occurs because more significant bits are available for the DC coefficient than for the AC coefficients. The removal of the least significant bit from a group of coefficients effectively doubles the quantization values for the coefficients of this group, and higher order AC coefficients have not yet been transmitted at all, in effect giving an infinite quantization value for these coefficients. At high compression levels, a relatively lower quantization value should be used for the DC coefficient in order to avoid blocking artifacts. For non-progressively parsed images, the quantization table is typically just scaled with a uniform factor across all coefficient values; the fact that the relative importance of the coefficients changes as the compression level increases is not considered. This is illustrated in Figure 3, where the blocking artifact is visible in the sequentially transcoded image but is considerably less visible in the image transcoded by progressive parsing. The skewing of the DC quantization value is illustrated in Figure 4, which shows the quantization tables consisting of quantization values for each of the 64 coefficients. Note that the DC quantization value (the top left entry of the table) is higher than the surrounding values for the image recoded to sequential. For the progressively parsed image, the DC value is lower than the surrounding ones, thus providing better resolution for the DC level, which results in less blocking artifacts.

Figure 4: Quantization Tables. (a) Original image (b) Progressively parsed (c) Transcoded to sequential

The quality improvement obtained from this skewing depends on the progression sequence used and on the scan at which the truncation is done. It is possible that similar or better quality improvements for non-progressively parsed images could be obtained by using a quantization table tailored to the specific compression level. Such tables are however not readily available and are non-trivial to construct [9].
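A minimal numerical sketch of this skewing effect, assuming a progression based on successive approximation (the quantization values and received bit positions below are invented for illustration): each low bit that has not yet been received doubles the effective quantization step, and coefficients that have not been transmitted at all behave as if quantized with an infinite step.

    import math

    def effective_quantization(q_table, received_al):
        """Effective per-coefficient quantization steps after truncation.

        q_table     : {coefficient index: quantization value}
        received_al : {coefficient index: lowest successive-approximation bit
                       position received before truncation (0 = fully refined)};
                       coefficients with no entry have not been sent at all."""
        return {k: q_table[k] * 2 ** received_al[k] if k in received_al else math.inf
                for k in q_table}

    # Hypothetical truncation point: DC refined down to bit 1, the first five
    # AC coefficients only down to bit 2, the remaining AC coefficients unsent.
    q = {k: 16 if k == 0 else 24 for k in range(64)}
    got = {0: 1, **{k: 2 for k in range(1, 6)}}
    eff = effective_quantization(q, got)
    print(eff[0], eff[1], eff[10])   # 32 96 inf: the DC step stays relatively fine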

Another advantage of using progressive parsing transcoding, as opposed to requantization based transcoding, is that less noise is injected by quantization value changes. This effect occurs because the effective quantization values for progressive parsing are always multiples of the original quantization values, whereas requantization results in changes that are not whole multiples of the original quantization values. This means that the zero-error-accumulation property [3] holds for more coefficients, thus inducing less total requantization noise.

4 Truncation implementation

As shown in the previous two sections, progressive parsing transcoding can provide advantages from both a resource and a quality perspective. The question however remains of how to best implement the transcoding in practice. Progressive parsing is centered around the truncation of an incoming datastream to achieve a target compression level, C_g (in bpp). The number of pixels in the image can be computed using the image width, w, and height, h, which are given in the JPEG header. This information can then be used to implement the truncation in different ways.

Simple Truncation

Simple truncation is the most basic form of progressive parsing transcoding. An end-of-image (EOI) marker is simply inserted into the datastream when C_g is reached and the remaining data are discarded. Although simple, this method has a drawback in that it does not provide a consistent quality level over the whole image. When the datastream is truncated in the middle of a scan, the upper part of the picture will have slightly better quality than the lower part. Depending on the scan granularity used, this effect will be more or less visible.

Inter-scan Truncation

With inter-scan truncation the datastream is truncated after n of the k total scans, by inserting an EOI marker after the nth scan and discarding the k - n last scans. The output compression level is only variable in k discrete levels. When performing the transcoding, the problem thus occurs of selecting n, the number of scans that should compose the transcoded image.

The following expression is proposed to determine n:

$\min \left\{\, n \in \mathbb{Z}^{+} \;:\; \frac{\sum_{i=1}^{n} S(i)}{h \, w \, C_g} > \alpha \,\right\}$    (1)

The value S(i) is the number of bits in scan i. The value α is a compensation factor introduced to compensate for the discreteness of n. It must satisfy 0 ≤ α < 1 and can either be fixed or adapted as scans arrive and more knowledge about the progression sequence of the current image becomes available. The introduction of α allows the transcoder to choose an n such that the output compression level, C_o, becomes slightly higher than that requested (i.e., C_o < C_g). Without α, C_o can never be lower than C_g, even if the difference is only a few bytes of data at the end of scan n (i.e., C_g - C_o(n-1) ≪ C_o(n) - C_g). The α value thus controls the amount of downward hysteresis.

Intra-scan Truncation

Intra-scan truncation is a refinement of inter-scan truncation that, instead of truncating at a scan boundary, truncates inside a scan and does so in a way that upholds the progressiveness. By buffering each scan and detecting the scan n in which C_g is reached, scan n can then be transcoded. By performing Huffman and run-length decoding on this scan, it becomes possible to trim the scan so that the resulting compression level is a close match to C_g. Regardless of whether the scan uses successive approximation, spectral selection or a combination, it is possible to trim the size of the scan by lowering the number of coefficients used in the scan. A simple expression can be given for determining D_o, the number of DCT coefficients to be retained out of the D_n coefficients originally present in scan n:

$D_o = \left\lceil D_n \, \frac{h \, w \, C_g - \sum_{i=1}^{n-1} S(i)}{S(n)} \right\rceil$    (2)

Scans that only contain some bits of the DC component cannot be trimmed by the above method. Instead, such scans may be trimmed by removing the least significant bit of the scan, or be allowed through unaltered since they are relatively small.

We have implemented a transcoder that is capable of simple and inter-scan truncation, and are currently investigating the design of an intra-scan capable transcoder. The image in Figure 3(b) was produced using inter-scan truncation.
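Read literally, expressions (1) and (2) translate into only a few lines of code. The sketch below is our Python illustration rather than the authors' implementation; the function names and the example value of α are ours.

    import math

    def select_scans(scan_sizes, w, h, c_g, alpha=0.9):
        """Expression (1): the smallest n such that the bits of scans 1..n,
        relative to the bit budget h*w*C_g, exceed the compensation factor.

        scan_sizes : S(i) for each scan, in bits
        w, h       : image width and height in pixels
        c_g        : target compression level in bits per pixel
        alpha      : compensation factor, 0 <= alpha < 1 (0.9 is an
                     arbitrary example value, not taken from the paper)"""
        total_bits = 0
        for n, s in enumerate(scan_sizes, start=1):
            total_bits += s
            if total_bits / (h * w * c_g) > alpha:
                return n
        return len(scan_sizes)        # target never reached: keep every scan

    def coefficients_to_keep(scan_sizes, n, d_n, w, h, c_g):
        """Expression (2): the number of DCT coefficients D_o to retain in
        scan n (out of the d_n originally present) for a close match to C_g."""
        budget = h * w * c_g - sum(scan_sizes[:n - 1])    # bits left for scan n
        return math.ceil(d_n * budget / scan_sizes[n - 1])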
5 Summary

This paper presents progressive parsing transcoding, a method for transcoding progressive JPEG images by simple truncation of the datastream. The transcoder produces images that adhere to the JPEG standard, but the transcoding is simpler and faster and can provide better rate/distortion performance than other suggested transcoding approaches. The main contributions of this paper are the classification of the effects that lead to the improved rate/distortion performance and the possible methods of implementing progressive parsing transcoding. Topics for further research include how to optimize α when using inter-scan truncation and how to select the number of coefficients to include in the last scan of intra-scan truncation.

References

[1] Independent JPEG Group software. ftp://ftp.uu.net/graphics/jpeg/.
[2] S. Chandra and C. S. Ellis. JPEG compression metric as a quality aware image transcoding. Second USENIX Symposium on Internet Technologies and Systems (USITS '99), October 1999.
[3] S.-F. Chang and A. Eleftheriadis. Error accumulation of repetitive image coding. Proc. IEEE Intl. Symposium on Circuits and Systems, May 1994.
[4] J. Garcia and A. Brunstrom. Efficient image transfer for wireless networks. 2nd International Conference on Advanced Communication Technology (ICACT), Muju, South Korea, February 2000.
[5] R. Han, P. Bhagwat, R. LaMaire, T. Mummert, V. Perret, and J. Rubas. Dynamic adaptation in an image transcoding proxy for mobile WWW browsing. IEEE Personal Communications, 5(6):8-17, December 1998.
[6] ITU-T. Recommendation T.81 - Digital compression and coding of continuous-tone still images. Geneva, Switzerland, September 1992.
[7] The JPEG file interchange format. Maintained by C-Cube Microsystems Inc., ftp://ftp.uu.net/graphics/jpeg/jfif.ps.gz, 1998.
[8] R. Mohan, J. Smith, and C.-S. Li. Adapting multimedia internet content for universal access. IEEE Transactions on Multimedia, 1(1):104-114, March 1999.
[9] A. B. Watson. Visual optimization of DCT quantization matrices for individual images. Proc. SPIE, vol. 1913, pages 202-216, 1993.