Optimal Alphabet Partitioning for Semi-Adaptive Coding


Dan Chen, Yi-Jen Chiang, Nasir Memon, Xiaolin Wu
Department of Computer and Information Science, Polytechnic University, Brooklyn, NY 11201

Abstract

Practical applications that employ entropy coding for large alphabets often partition the alphabet set into two or more layers and encode each symbol by using some suitable prefix coding for each layer. In this paper we formulate the problem of optimal alphabet partitioning for the design of a two-layer semi-adaptive code and give a solution based on dynamic programming. However, the complexity of the dynamic programming approach can be quite prohibitive for a long sequence and a very large alphabet size. Hence, we give a simple greedy heuristic algorithm whose running time is linear in the number of symbols being encoded, irrespective of the underlying alphabet size. We give experimental results that demonstrate that superior prefix coding schemes for large alphabets can be designed using our approach as opposed to the typically ad-hoc partitioning approaches applied in the literature.

dchen01@utopia.poly.edu. Research supported by NSF Grant ACI-0118915.
yjc@poly.edu. Research supported in part by NSF Grant ACI-0118915, NSF CAREER Grant CCR-0093373, and NSF ITR Grant CCR-0081964.
memon@poly.edu. Research supported in part by NSF Grant ACI-0118915 and NSF Grant CCR-0208678.
xwu@poly.edu. Research supported in part by NSF Grant CCR-0208678.

1 Introduction

For various reasons, some of them technical and some not, the most widely used entropy coding technique today is Huffman coding [2]. Although adaptive versions of Huffman's algorithm have been known for many years [5, 12], due to complexity issues Huffman coding is most widely used in a static or semi-adaptive form. Static Huffman coding can be used when there is some knowledge about the statistics of the data being coded. When this is not the case, a semi-adaptive Huffman code can be used, which makes two passes through the data. In the first pass, statistics are collected to aid the construction of the Huffman code, and the actual coding is only performed in the second pass. The specific Huffman code designed for the data is included in the compressed data using some canonical representation technique. Most compression standards, like JPEG, GIF, and gzip, include a semi-adaptive Huffman code as one of the supported, and indeed most widely used, options for entropy coding.

Although the simplicity and effectiveness of static and semi-adaptive Huffman coding provide a compelling rationale for their deployment in many applications, complexity issues become serious concerns as the underlying alphabet size of the data being encoded gets very large. This is due to the fact that for large alphabet sizes, a Huffman code for the entire alphabet requires an unduly large code table. This is the case even for a semi-adaptive technique, where only the codewords for symbols that occur in the data being compressed need to be stored. An excessively large Huffman table can lead to multiple problems. First of all, in the semi-adaptive case, a larger code table requires more bits to represent, thereby reducing the efficiency of Huffman coding. Additionally, in both the static and semi-adaptive cases, large Huffman tables can lead to serious difficulties in a hardware implementation of the codec.

1.1 Alphabet Partitioning

One way to deal with the problem of entropy coding large alphabets is to partition the alphabet into smaller sets and use a product code architecture, where a symbol is identified by a set number and then by the element within the set. This approach has also been called amplitude partitioning [10] in the literature. One special case of the product code arises when the cardinality of each partition is 2^k for some k and exactly k bits are employed to identify any given element in that set. This strategy has been employed in many data compression standards in the context of Huffman coding and has also been called modified Huffman coding.
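To make this special case concrete, here is a minimal sketch (ours, not taken from the paper; the function names are illustrative only). With cells of 2^k consecutive symbols, the cell index would be Huffman coded while the k-bit offset within the cell is sent uncoded:

    def product_code_split(symbol: int, k: int) -> tuple[int, int]:
        """Split a non-negative symbol into (cell index, k-bit offset)."""
        return symbol >> k, symbol & ((1 << k) - 1)

    def product_code_join(cell: int, offset: int, k: int) -> int:
        """Inverse mapping, as used by the decoder."""
        return (cell << k) | offset

    # Example: with k = 4 every cell holds 2**4 = 16 symbols, so the prefix
    # code only needs codewords for the much smaller set of cell indices.
    assert product_code_split(1000, 4) == (62, 8)
    assert product_code_join(62, 8, 4) == 1000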

For example, a modified Huffman coding procedure specified by the JPEG standard [8] is used for encoding DC coefficient differences. Here, each prediction error (or DC difference in the lossy codec) is classified into a magnitude category and the label of this category is Huffman coded. Since each category consists of multiple symbols, uncoded extra bits are also transmitted to identify the exact symbol (the prediction error in the lossless codec, or the DC difference in the lossy codec) within the category. Other well-known standards, like the CCITT Group 3 and Group 4 facsimile compression standards [3] and the MPEG video compression standards [7], also use a similar modified Huffman coding procedure to keep the Huffman table small relative to the underlying alphabet size being entropy coded.

Note that the alphabet partitioning approach can also be applied recursively. That is, the elements within each set can be further partitioned into subsets, and so on. One popular instance arises when encoding floating-point numerical data in the compression of data from scientific applications. Here, for example, the 32-bit floating-point numbers to be encoded are viewed as a sequence of four bytes, and separate entropy codes are designed for each of the four bytes. This can clearly be viewed as recursive alphabet partitioning: the first level partitions the numbers into 256 different sets, the elements within each set are further partitioned into 256 sets, and so on. Similar comments apply if the numbers are 64 bits or 80 bits or any other representation, and if the partitioning is done based on bytes, nibbles, 16-bit chunks, etc. Another more sophisticated, but nevertheless ad-hoc, recursive partitioning approach called group partitioning is presented in [14].

1.2 Our Contribution - Optimal Alphabet Partitioning

Although the modified Huffman coding procedures designed by JPEG, CCITT and other standards are quite effective, they are clearly ad-hoc in nature. For example, the design of the JPEG Huffman code is guided by the fact that the underlying symbols are DCT coefficients and are well modeled by a Laplacian or a generalized Gaussian distribution [11]. This knowledge is exploited in designing a suitable modified Huffman code. The exponentially decaying pmfs of quantized DCT coefficients are matched with partitions of exponentially increasing size, resulting in each partition

being assigned a roughly equal probability mass. Given the ad-hoc nature of alphabet partitioning techniques in the literature, our work is motivated by the following question: given a finite input sequence S drawn from a very large alphabet Z but with an unknown distribution, how does one design an optimal semi-adaptive modified Huffman code? Or more generally, how does one optimally partition the alphabet into sets, such that any symbol is encoded by an entropy code (say, a Huffman code) of the index of the partition to which it belongs followed by a suitable encoding of its rank within the partition. By optimal here, we mean a code that minimizes the number of bits needed to encode the symbol plus the number of overhead bits to describe the code. The rest of this paper is organized as follows. In the next section we formulate the problem of optimal alphabet partitioning for the design of a two-layer entropy code and give a solution based on dynamic programming. However, the complexity of the dynamic programming approach can be quite prohibitive for a long sequence and a large alphabet size. Hence in Section 3 we give a greedy heuristic whose running time is O(N) where N is the number of distinct symbols being encoded. Finally, in Section 4 we give experimental results that demonstrate the fact that superior semi-adaptive entropy coding schemes for large alphabets can be designed using our approach as opposed to the typically ad-hoc partitioning approach applied in the literature. 1.3 Related Work The problem of large alphabet size also arises in adaptive single-pass entropy coding, for example, adaptive arithmetic coding. Here in addition to the large table size, the lack of sufficient samples in order to make reliable probability estimations becomes a more serious concern. Specifically, it is well known that the zero frequency problem can lead to serious coding inefficiency. The widely known PPM text compression algorithm [1] allocates explicit codes only to symbols that have occurred in the past and uses a special escape symbol to code a new symbol. In the work of Zhu et. al. [13], this approach is extended further by the use of a dual symbol set adaptive partitioning scheme, where the first set contains all the symbols that are likely to occur and the second set contains all the other symbols. Only the symbols in the first set are explicitly coded. They also give an adaptive procedure for moving symbols between the two sets. 4

More recently, Yang and Jia [4] have given a universal scheme for coding sources with large and unbounded alphabets, which employs multi-level arithmetic coding. They dynamically construct a tree-based partition of the alphabet into small subsets and encode a symbol by the path to the leaf node that represents the subset containing the symbol, followed by the index of the symbol within this subset. They prove that their code is universal, that is, it asymptotically achieves the source entropy. The problem of minimizing space and time complexity when constructing prefix codes for large alphabets has been extensively studied in a series of papers by Moffat, Turpin et al.; see, for example, [6] and the references therein.

2 Optimal Alphabet Partitioning by Dynamic Programming

Our problem is to code a finite input sequence S drawn from a very large, possibly infinite, alphabet Z with an unknown distribution. Since the size of the alphabet is much larger than the input sequence length, $|Z| \gg |S|$, a one-pass universal coding approach may not work well in practice due to sample sparsity, i.e., the input sequence does not supply enough samples to reach a good estimate of the source distribution. Instead, we propose a two-pass, two-layer coding approach based on the minimum description length principle [9].

We assume that the alphabet Z is ordered and possibly unbounded. Suppose that the input sequence consists of N distinct values $z_1 < z_2 < \cdots < z_N$. Central to our two-layer code design is the concept of an optimal alphabet partition. The range of the alphabet Z is partitioned into M intervals: $(-\infty, z_{q_1}], (z_{q_1}, z_{q_2}], \ldots, (z_{q_{M-1}}, \infty)$. Note that the M-cell partition of Z is defined by M-1 existing symbol values of S, indexed by $q_1 < q_2 < \cdots < q_{M-1}$. Consequently, the symbols of the input sequence S are partitioned into M subsets:
$$S(q_{i-1}, q_i] = \{ z_j \mid j \in (q_{i-1}, q_i] \}, \qquad 1 \le i \le M,$$
where $q_0 = 0$ and $q_M = N$ by convention. This partition constitutes the first layer of the code. Given an alphabet partition, the self entropy of the first layer is
$$H(q_1, q_2, \ldots, q_{M-1}) = -\sum_{i=1}^{M} P(S(q_{i-1}, q_i]) \log P(S(q_{i-1}, q_i]). \tag{1}$$
This is the average bit rate required to identify the subset membership of an input symbol in the alphabet partition.
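As an illustration, the following small Python sketch (ours; the function name and data layout are our own choices) evaluates the first-layer self entropy of Equation (1) for a candidate partition, given the occurrence counts of the distinct values and the cut indices q_1 < ... < q_{M-1}:

    from math import log2

    def first_layer_entropy(counts, q):
        """Self entropy (Equation 1) of the first layer, in bits per symbol.

        counts[j] is the number of occurrences of the (j+1)-th distinct value,
        and q = [q_1, ..., q_{M-1}] are the cut indices, so cell i covers the
        distinct values with indices in (q_{i-1}, q_i].
        """
        total = sum(counts)
        boundaries = [0] + list(q) + [len(counts)]
        h = 0.0
        for lo, hi in zip(boundaries, boundaries[1:]):
            p = sum(counts[lo:hi]) / total      # P(S(q_{i-1}, q_i])
            if p > 0:
                h -= p * log2(p)
        return h

    # Example: five distinct values partitioned into two cells of equal mass.
    print(first_layer_entropy([7, 3, 4, 4, 2], q=[2]))   # prints 1.0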

We introduce a so-called operationally optimal Golomb code to resolve the remaining uncertainty within a cell of the alphabet partition; this is the second layer of our code. Given a data set $S(q_{i-1}, q_i]$, we find the optimal orientation of the Golomb code, since the pmf $P(z_j \mid j \in (q_{i-1}, q_i])$ can have an increasing or decreasing tendency in j, and determine the optimal Golomb parameter that yields the minimum code length. Note that the Golomb coding scheme is only operationally, and not information-theoretically, optimal in the sense that we settle for the best the Golomb code can do with respect to the given pmf $P(z_j \mid j \in (q_{i-1}, q_i])$, which may deviate significantly from the exponential distribution. Our design decision here is purely practical, because the Golomb code is known to admit a fast implementation and has only one parameter, which means small side information in a two-pass coding scheme.

The proposed two-layer coding scheme is semi-adaptive. The encoder first scans the input sequence and collects the statistics needed to optimize the alphabet partition, as described by the algorithm below. It then sends side information on the alphabet partition, namely the values $z_{q_1}, z_{q_2}, \ldots, z_{q_{M-1}}$, and the M optimal Golomb parameters and orientations, one for each alphabet interval.

We now present a dynamic programming algorithm for optimal alphabet partitioning. Let $L(z_j \mid j \in (q_{i-1}, q_i])$ be the length of the operationally optimal Golomb code of the set $\{ z_j \mid j \in (q_{i-1}, q_i] \}$. Let $R_k(0, n]$ be the overall code length produced by the two-layer coding scheme under the optimal k-cell alphabet partition of the subset $\{ z \mid z \le z_n \}$ of the input data set. Namely,
$$R_k(0, n] = \min_{0 = q_0 < q_1 < \cdots < q_k = n} \left\{ \sum_{i=1}^{k} L(z_j \mid j \in (q_{i-1}, q_i]) + H(q_1, q_2, \ldots, q_{k-1}) \, |S(0, n]| \right\} + k\epsilon. \tag{2}$$
Here we assume that $\epsilon$ bits of side information per partition cell are used to specify the orientation and the optimal Golomb parameter. Our objectives are to find
$$R_M(0, N], \qquad M = \arg\min_k R_k(0, N], \tag{3}$$
and to construct the underlying alphabet partition given by $0 = q_0 < q_1 < \cdots < q_M = N$. The dynamic programming optimization procedure is based on the recursion
$$R_k(0, n] = \min_{0 < q < n} \left\{ R_{k-1}(0, q] + L(z_j \mid j \in (q, n]) - |S(q, n]| \log P(S(q, n]) \right\}. \tag{4}$$
After precomputing
$$R(a, b] = L(z_j \mid j \in (a, b]) - |S(a, b]| \log P(S(a, b]), \qquad 0 < a < b \le N, \tag{5}$$

for all possible $O(N^2)$ subsets, the dynamic programming algorithm proceeds as described by the following pseudocode.

/* Initialization */
q_0 := 0; q_M := N; m := 1;
R_1(0, N] := L(z_j | j in (0, N]);

/* Minimize the description length */
repeat
    m := m + 1;
    for n := m to N do
        R_m(0, n] := min_{0<q<n} { R_{m-1}(0, q] + L(z_j | j in (q, n]) - |S(q, n]| log P(S(q, n]) };
        Q_m(n)    := argmin_{0<q<n} { R_{m-1}(0, q] + L(z_j | j in (q, n]) - |S(q, n]| log P(S(q, n]) };
until R_m(0, N] + m*eps > R_{m-1}(0, N] + (m-1)*eps;

/* Construct the alphabet partition */
M := m - 1;
for m := M downto 2 do
    q_{m-1} := Q_m(q_m);
output the final M-cell optimal alphabet partition (q_0, q_1, ..., q_M).

The complexity of the above dynamic programming algorithm is clearly $O(MN^2)$. It should be noted that the algorithm we have presented is quite general and can be adapted to different types of coding strategies. For example, in the above algorithm we use Golomb codes for encoding the rank of a symbol within a partition. However, we could just as well use fixed-length coding for this purpose (as done by the JPEG modified Huffman code), and the above algorithm needs only a trivial modification to yield a suitable optimal partitioning of the alphabet.
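For concreteness, here is a compact Python rendering (ours, not the authors' code) of the dynamic program above. To keep it short, the second layer uses fixed-length coding of ranks, which the text notes is a trivial modification of the Golomb-coded variant; the per-cell cost function and the default side-information cost eps are therefore assumptions of this sketch.

    from math import log2, ceil

    def optimal_partition(values, counts, eps=8.0):
        """O(M N^2) dynamic-programming alphabet partitioning (a sketch).

        values: sorted distinct symbol values z_1 < ... < z_N
        counts: counts[j] = number of occurrences of values[j]
        eps:    side-information bits per partition cell (an assumption)
        Returns (estimated total code length in bits, cut indices [0, q_1, ..., N]).
        """
        N = len(values)
        total = sum(counts)
        pref = [0]                                   # prefix sums of occurrences
        for c in counts:
            pref.append(pref[-1] + c)

        def cell_cost(a, b):
            """R(a, b]: second-layer bits plus first-layer bits for one cell."""
            occ = pref[b] - pref[a]
            if occ == 0:
                return 0.0
            second = occ * ceil(log2(b - a)) if b - a > 1 else 0.0   # fixed-length ranks
            first = -occ * log2(occ / total)                         # -|S(a,b]| log P(S(a,b])
            return second + first

        INF = float("inf")
        R_prev = [cell_cost(0, n) for n in range(N + 1)]             # R_1(0, n]
        best_cost, best_m = R_prev[N] + eps, 1
        parents = {}
        m = 1
        while True:
            m += 1
            R_cur = [INF] * (N + 1)
            Q_cur = [0] * (N + 1)
            for n in range(m, N + 1):
                for q in range(m - 1, n):
                    cost = R_prev[q] + cell_cost(q, n)
                    if cost < R_cur[n]:
                        R_cur[n], Q_cur[n] = cost, q
            parents[m] = Q_cur
            if R_cur[N] + m * eps >= R_prev[N] + (m - 1) * eps:      # stop, as in the pseudocode
                break
            best_cost, best_m = R_cur[N] + m * eps, m
            R_prev = R_cur

        cuts = [N]                                   # backtrack q_{m-1} = Q_m(q_m)
        for k in range(best_m, 1, -1):
            cuts.append(parents[k][cuts[-1]])
        cuts.append(0)
        return best_cost, sorted(set(cuts))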

3 Greedy Heuristic

The running time of the dynamic programming algorithm given in the previous section is $O(MN^2)$. When coding a large number of symbols drawn from a large alphabet, say tens of thousands, this is clearly prohibitive, even though the cost is only incurred by the encoder and not the decoder. In this section we present a sub-optimal greedy heuristic for partitioning the alphabet, which runs in time linear in the number of distinct symbols in the source sequence, that is, in O(N) time. In the next section we give experimental results showing that the greedy heuristic yields results quite close to the optimal ones, and hence can be an attractive alternative in many applications.

The greedy algorithm starts by sorting the distinct symbols that occur in the source sequence in increasing order. We then make a linear scan of the symbols, starting from the smallest symbol (or, equivalently, the largest symbol). We assign this symbol to a partition in which it is, for now, the only element. We then consider the next smallest symbol, for which we have two choices. The first choice is to create a new partition for this new symbol, thereby closing the previous partition. The second choice is to include this symbol in the previous partition that contained only the first symbol. We compare the cost of these two choices and use a greedy strategy to select the one that has lower cost. We continue in this manner, visiting each new distinct symbol and assigning it either to the previous partition or to a new partition of its own.

More generally, as before let us assume that the input sequence consists of N distinct values $z_1 < z_2 < \cdots < z_N$, and let us assume that our current working partition is $(z_{q_i}, z_{q_j}]$. We now consider the symbol $z_{q_j+1}$ and compute the cost of the following two choices:

1. Add symbol $z_{q_j+1}$ to the partition $(z_{q_i}, z_{q_j}]$, so that our current working partition becomes $(z_{q_i}, z_{q_j+1}]$.

2. Terminate our current working partition $(z_{q_i}, z_{q_j}]$ and create a new working partition $(z_{q_j}, z_{q_j+1}]$.

The question that arises is how to compute the cost of the two choices we face at each step. Essentially, we calculate an estimate of the resulting cumulative code size for each choice and select the one that leads to the lower code size. To estimate the cumulative code size we use the expression
$$C(q_{j+1}) = f(q_1) + f(q_2) + \cdots + f(q_j) + g(q_{j+1}), \tag{6}$$
where $f(q_i)$ is an estimate of the code length for the partition $(z_{q_{i-1}}, z_{q_i}]$ and is of the form
$$f(q_i) = n\,(-\log p + \log d), \tag{7}$$

where n is the number of input values falling in the partition $(z_{q_{i-1}}, z_{q_i}]$, p is the empirical probability of this partition, i.e., the fraction of the input values that fall into it, and d is the distance between the upper and lower bounds of the partition, that is, $z_{q_i} - z_{q_{i-1}}$. The component $g(\cdot)$ is an estimate of the size of the Huffman table, that is,
$$g(q_i) = (\log t + 32)\, t, \tag{8}$$
where t is the number of partitions that we would have accumulated after dealing with the current symbol $z_{q_j+1}$. Basically, we assume that the Huffman table is stored in the same canonical format as in the JPEG standard, namely by storing codeword lengths, and $\log t$ is an estimate of this. However, in our case we also need to store, for each partition, the symbol at which it ends; this requires the additional 32 bits per entry.

Clearly the above algorithm is sub-optimal. In fact, scanning the symbols from low to high values and from high to low values will, in general, yield two different partitioning schemes. In practice, we scan in both directions and select the partitioning scheme that results in the lower code length.

There are other possible approaches to designing simple heuristics for the alphabet partitioning problem, and we experimented with quite a few, for example a hierarchical scheme and a pivoting scheme. However, the simple greedy technique described above gave the best performance on all the datasets that we tested.
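The following Python sketch is our reading of the greedy scan; the paper does not spell out the exact bookkeeping, so the decision rule that compares only the cost terms that differ between the two choices, the +1 added to the cell width to keep the logarithm finite for singleton cells, and the use of full range sums (a genuinely linear-time version would maintain running sums) are all our assumptions, as are the function and parameter names.

    from math import log2

    def greedy_partition(values, counts, table_entry_bits=32):
        """Greedy alphabet partitioning (a sketch of the heuristic above).

        values: sorted distinct symbol values z_1 < ... < z_N
        counts: counts[j] = occurrences of values[j] in the input sequence
        Returns a list of half-open index ranges (lo, hi) over `values`,
        one per partition cell.
        """
        total = sum(counts)

        def cell_len(vals, cnts, lo, hi):
            """Estimated code length of the cell vals[lo:hi] (Equation 7)."""
            n = sum(cnts[lo:hi])
            p = n / total
            d = abs(vals[hi - 1] - vals[lo]) + 1     # cell width (+1: our assumption)
            return n * (-log2(p) + log2(d))

        def table_len(t):
            """Estimated first-layer Huffman table size for t cells (Equation 8)."""
            return (log2(t) + table_entry_bits) * t

        def scan(vals, cnts):
            cells = [(0, 1)]                         # the last cell is the open one
            for j in range(1, len(vals)):
                lo, hi = cells[-1]
                extend = cell_len(vals, cnts, lo, j + 1) + table_len(len(cells))
                split = (cell_len(vals, cnts, lo, hi) + cell_len(vals, cnts, j, j + 1)
                         + table_len(len(cells) + 1))
                if extend <= split:
                    cells[-1] = (lo, j + 1)          # choice 1: extend the open cell
                else:
                    cells.append((j, j + 1))         # choice 2: close it, open a new one
            return cells

        def total_len(cells):
            return (sum(cell_len(values, counts, lo, hi) for lo, hi in cells)
                    + table_len(len(cells)))

        # Scan low-to-high and high-to-low, keep whichever estimate is smaller.
        fwd = scan(values, counts)
        rev_cells = scan(values[::-1], counts[::-1])
        N = len(values)
        rev = sorted((N - hi, N - lo) for lo, hi in rev_cells)
        return fwd if total_len(fwd) <= total_len(rev) else rev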

4 Experimental Results

To evaluate the effectiveness of our algorithms we ran them on a set of data obtained from a computer graphics application. Specifically, each input dataset contains the vertex list of a triangle-mesh model. The four datasets used are bunny, bunny2, phone and phone2. The bunny and bunny2 datasets represent the same bunny model with the vertices sampled at different resolutions, where each vertex contains five components, its x, y, z, scalar, and confidence values, each represented as a 32-bit floating-point number. Similarly, the phone and phone2 datasets represent the same phone model with the vertices sampled at different resolutions, where each vertex contains only three components, its x, y, and z values, again each represented as a 32-bit floating-point number. Typically these vertex lists are compressed in the graphics literature using some simple modified Huffman coding or a string compression algorithm like gzip.

To remove first-order redundancies in the vertex list we used the most widely employed technique for vertex compression, namely differential coding: store the first entry completely, and then store each remaining entry as its difference from the previous entry. We treated each of the five or three components (x, y, z, scalar and confidence for the bunny models, and x, y, and z for the phone models) separately, and constructed for each a separate modified Huffman code for the differences, both by the dynamic programming method described in Section 2 and by the greedy heuristic given in Section 3, where in both methods we used a Huffman code for the first-layer coding and a fixed-length code for the second-layer coding. For comparison, we also constructed a separate Huffman code for each byte (8 bits), for each pair of bytes (16 bits), and for each four-byte word (32 bits) in each floating-point number, on the same sequence of symbols as encoded by the dynamic programming and greedy approaches. In addition, we applied an 8-bit and a 16-bit modified Huffman code to the same sequence, as well as gzip, a popular string compression algorithm.

First, we show in Table 1 the compression results on the bunny and bunny2 datasets. Since the dynamic programming method is time consuming, we only experimented with compressing the first 10,000 entries for each of the x, y, z, scalar and confidence components. We see that the greedy heuristic gives results within 4.7% of those obtained by dynamic programming. Surprisingly, in some rare cases, such as the z component of both bunny and bunny2, the greedy heuristic gives a slightly better result than dynamic programming. This is counter-intuitive, since both methods try to optimize the same cost function (which estimates the resulting bit length after compression), and thus dynamic programming should always give the optimal solution with respect to the cost function. Taking a closer look, we see that this cost function contains a term estimating the size of the resulting Huffman table for the first-layer code. Suppose the final partitioning has t partitions; this table size is estimated as $g(q_i) = (\log t + 32)\, t$, as given in Equation (8) in Section 3. In each intermediate step of deciding how to partition the current range, the greedy method knows the exact value of t (the number of partitions so far), but the dynamic programming method does not have this information and thus has to estimate this value for each possible choice at the current step.

bunny     Dynamic   Greedy    8-bit     16-bit    32-bit    MH 8-bit   MH 16-bit   Gzip
x          23840     24272     26040     36180     33436     32292      28156      25230
y           4920      4920     19664      9832     36032     34752      26140       6629
z          19364     18468     21164     26120     33032     31268      24756      29760
scalar     22560     23592     25192     37164     33036     31892      26184      32087
conf.      10824     11332     15032     17316     32688     31540      25424       8786
Total      81508     82584    107092    126612    168224    161744     130660     102492

bunny2    Dynamic   Greedy    8-bit     16-bit    32-bit    MH 8-bit   MH 16-bit   Gzip
x          23964     24256     25976     32604     33032     32292      28156      25524
y           4916      4916     19664      9832     36028     34748      26124       8786
z          19160     18640     21540     25632     33436     31268      24756      30104
scalar     22560     23592     25192     37168     33036     31892      26184      32086
conf.      10824     11332     16032     17316     32688     31540      25424       8786
Total      81424     82736    108404    122552    168220    161740     130644     105286

Table 1: Compression results in bytes for the bunny and bunny2 graphics datasets using different coding schemes. For each of the x, y, z, scalar and confidence components, we show the results of compressing the first 10,000 entries; the original data size is 40,004 bytes for each such component (a 4-byte float for each of the 10,000 entries, plus a 4-byte integer to indicate the number of entries).

Because of this, the greedy method can give better results in some rare cases, although in most cases the dynamic programming method gives the best results. It should be noted that the dynamic programming method still gives the optimal solution with respect to the given cost function and the given estimate of the value of t.

Comparing the results of dynamic programming and of the greedy method with those of the 8-bit, 16-bit, and 32-bit Huffman codes in Table 1, we see that our results are significantly better. Comparing our results with those of the 8-bit and 16-bit modified Huffman codes, we see that the partitioning scheme we employ yields tremendous benefits over ad-hoc modified Huffman coding. Finally, compared with the results of gzip, our results are again significantly superior.

Since the greedy approach runs much faster than the dynamic programming approach and gives compression results almost as good, in the remaining experiments we ran only the greedy heuristic as our representative method. In Table 2, we show the results of compressing all four graphics datasets, where for each dataset we compressed the entire vertex list (as opposed to the first 10,000 vertices in Table 1). Similar to what we observed in Table 1, our partitioning scheme based on the greedy method results in significant advantages over all the other methods compared.

Data      Greedy      8-bit       16-bit      32-bit      MH 8-bit    MH 16-bit   Gzip
bunny       337,440     467,684     374,268   1,324,404     611,568     597,652     588,091
bunny2    1,049,576   1,144,052   1,182,468   2,808,860   1,793,216   1,703,240   1,724,716
phone       457,620     504,448     587,280   1,382,468     867,820     828,976     895,969
phone2    1,313,328   1,519,952   1,331,272   3,626,644   2,602,880   2,423,876   2,584,875

Table 2: Compression results for the graphics datasets using different coding schemes. For each dataset, we show the total compressed size in bytes for the entire vertex list. The original vertex-list sizes in bytes are as follows: bunny: 718,948; bunny2: 2,107,968; phone: 996,536; and phone2: 2,988,092.

Dataset       Greedy   8-bit   16-bit   32-bit   MH 8-bit   MH 16-bit   Gzip
(0, 100)        1.21    1.51     1.42     1.52       3.12        2.12    1.72
(0, 1000)       1.66    1.88     2.55     3.67       3.42        4.03    2.31
(0, 10000)      2.16    2.33     3.57     6.15       3.42        2.12    2.83

Table 3: Compression results in bytes per number required to encode 10,000 random Gaussian numbers with different coding schemes, where (a, b) means the mean is a and the standard deviation is b.

All the above results include the cost of storing the Huffman table. It should be noted that the dynamic programming and greedy partitioning approaches often produced partitions containing only one element. To handle this case we included one bit in every entry of the Huffman table to indicate this fact, and hence saved the bits needed to code the rank of such a symbol whenever it occurred.

To show the applicability of our approach to a wider class of applications, we then generated random Gaussian numbers with different variances and compressed them using the greedy heuristic, Huffman coding of bytes, Huffman coding of pairs of bytes, Huffman coding of four-byte words, 8-bit and 16-bit modified Huffman coding, and gzip. Table 3 and Table 4 show the results obtained with source sequences of length 10,000 and 100,000, respectively. As before, all costs, including that of storing the Huffman table, are included in the results. Again we see that alphabet partitioning based on the greedy heuristic gives superior results.

Dataset        Greedy   8-bit   16-bit   32-bit   MH 8-bit   MH 16-bit   Gzip
(0, 100)         1.17    1.45     1.30     1.20       3.12        2.13    1.69
(0, 1000)        1.62    1.82     1.84     1.93       3.12        2.13    2.21
(0, 10000)       2.07    2.23     2.95     4.15       3.12        2.13    2.79
(0, 100000)      2.57    2.53     3.37     6.52       3.12        2.48    3.18

Table 4: Compression results in bytes per number required to encode 100,000 random Gaussian numbers with different coding schemes, where (a, b) means the mean is a and the standard deviation is b.

5 Conclusions

In this paper we formulated the problem of optimal alphabet partitioning for the design of a two-layer semi-adaptive code and gave a solution based on dynamic programming. Since the complexity of the dynamic programming approach can be quite prohibitive for a long sequence and a very large alphabet size, we also presented a simple greedy heuristic with more reasonable complexity. Our experimental results demonstrate that superior prefix coding schemes for large alphabets can be designed using our approach as opposed to the typically ad-hoc partitioning approaches applied in the literature.

References

[1] J. G. Cleary and I. H. Witten. Data compression using adaptive coding and partial string matching. IEEE Transactions on Communications, 32(4):396-402, 1984.

[2] D. A. Huffman. A method for the construction of minimum-redundancy codes. Proc. IRE, 40:1098-1101, 1952.

[3] R. Hunter and A. H. Robinson. International digital facsimile standards. Proceedings of the IEEE, 68(7):855-865, 1980.

[4] E.-H. Yang and Y. Jia. Universal lossless coding of sources with large and unbounded alphabets. In Proceedings of the IEEE International Symposium on Information Theory, page 16, 2000.

[5] D. E. Knuth. Dynamic Huffman coding. Journal of Algorithms, 6:163-180, 1985.

[6] A. Moffat and A. Turpin. Efficient construction of minimum-redundancy codes for large alphabets. IEEE Transactions on Information Theory, 44(4):1650-1657, 1998.

[7] MPEG. Information technology - generic coding of moving pictures and associated audio information: Video. ISO/IEC 13818-2:1996(E), 1996.

[8] W. B. Pennebaker and J. L. Mitchell. JPEG Still Image Data Compression Standard. Van Nostrand Reinhold, 1993.

[9] J. J. Rissanen. Universal coding, information, prediction and estimation. IEEE Transactions on Information Theory, 30:629-636, 1984.

[10] A. Said and W. A. Pearlman. Low-complexity waveform coding via alphabet and sample-set partitioning. In Visual Communications and Image Processing '97, Proc. SPIE Vol. 3024, pages 25-37, 1997.

[11] S. Smoot and L. Rowe. Study of DCT coefficient distributions. In Proceedings of the SPIE Symposium on Electronic Imaging, Vol. 2657, San Jose, CA, January 1996.

[12] J. S. Vitter. Design and analysis of dynamic Huffman codes. Journal of the ACM, 34(4):825-845, 1987.

[13] B. Zhu, E.-H. Yang, and A. Tewfik. Arithmetic coding with dual symbol sets and its performance analysis. IEEE Transactions on Image Processing, 8:1667-1676, December 1999.

[14] X. Zou and W. A. Pearlman. Lapped orthogonal transform coding by amplitude and group partitioning. In Applications of Digital Image Processing XXII, Proc. SPIE Vol. 3808, pages 293-304, 1999.