Chapter 5

Lempel-Ziv Codes

To set the stage for Lempel-Ziv codes, suppose we wish to find the best block code for compressing a datavector X. Then we have to take into account the complexity of the code. We could represent the total number of codebits at the decoder output as:

[# of codebits to describe block code] + [# of codebits from using code on X]

The codebits used to describe the block code that is chosen to compress X form a prefix of the encoder output and constitute what is called the overhead of the encoding procedure. If we wish to choose the best block code for compressing X from among block codes of all orders, we would choose the block code that minimizes the total of the overhead codebits and the encoded datavector codebits. One could also adopt this approach to code design in order to choose the best finite memory code for compressing X, or, more generally, the best finite-state code.

EXAMPLE 1. Suppose we wish to compress English text using finite memory codes. A finite memory code of order zero entails 51 bits of overhead. (Represent the Kraft vector used as a binary tree with 26 terminal nodes and 2·26 − 1 = 51 nodes all together. You have to let the decoder know how to grow this tree; it takes one bit of information at each of the 51 nodes to do that, since the decoder will either grow two branches at each node, or none.) A first order finite memory code for English text will entail 27·51 = 1377 bits of overhead. (You need a codebook of 27 different codes, with 51 bits to describe each code.) A second order finite memory code for English text can be described with 677·51 = 34527 bits of overhead. (There are 26·26 + 1 = 677 codes in the codebook in this case.) You would keep increasing the order of your finite memory code until you find the order for which the sum of the overhead plus the length of the encoded English text, under the best finite memory code of that order, is minimized.
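The overhead arithmetic in Example 1 is easy to mechanize. Here is a small MATLAB fragment (an illustrative sketch, not one of this chapter's m-files; the 51-bit cost per code and the context counts are the figures quoted in the example):

    % Overhead of finite memory codes for English text (Example 1).
    % Each code (Kraft vector) costs 2*26 - 1 = 51 bits to describe,
    % and an order-j finite memory code needs one code per context.
    bits_per_code = 2*26 - 1;            % 51 bits per code
    contexts = [1, 27, 26*26 + 1];       % orders 0, 1, 2
    overhead = contexts * bits_per_code  % prints [51 1377 34527]

For each order, one would add the overhead to the length of the encoded text and keep the order for which the sum is smallest.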

It would be nice to have a compression technique that entails no overhead, while performing at least as well as the block codes, the finite memory codes, or the finite-state codes (provided the length of the datavector is long enough). Overhead arises because statistics of the datavector (consisting of various frequency counts) are collected first and then used to choose the code. Since the code arrived at depends on these statistics, overhead is needed to describe the code. Suppose instead that information about the datavector is collected "on the fly" as you encode the samples in the datavector from left to right: in encoding the current sample (or group of samples), you can use information collected about the previously encoded samples. A code which operates in this way might not need any overhead to describe it. Codes like this, which require no overhead at the decoder output, are called adaptive codes. The Lempel-Ziv code, the subject of this chapter, will be our first example of an adaptive code. There are quite a number of variants of the Lempel-Ziv code. The variant we shall describe in this chapter is called LZ78, after the date of the paper [1].

5.1 Lempel-Ziv Parsing

In block coding, you first partition the datavector into blocks of equal length. In Lempel-Ziv coding, you start by partitioning the datavector into variable-length blocks instead. The procedure via which this partitioning takes place is called Lempel-Ziv parsing. The first variable-length block arising from the Lempel-Ziv parsing of the datavector X = (X_1, X_2, ..., X_n) is the single sample X_1. The second block in the parsing is the shortest prefix of (X_2, ..., X_n) which is not equal to X_1. Suppose this second block is (X_2, ..., X_j). Then the third block in the parsing will be the shortest prefix of (X_{j+1}, ..., X_n) which is not equal to either X_1 or (X_2, ..., X_j). In general, suppose the Lempel-Ziv parsing procedure has produced the first k variable-length blocks B_1, B_2, ..., B_k in the parsing, and X^(k) is the part of X that is left after B_1, B_2, ..., B_k have been removed. Then the next block B_{k+1} in the parsing is the shortest prefix of X^(k) which is not equal to any of the preceding blocks B_1, B_2, ..., B_k. (If there is no such block, then B_{k+1} = X^(k) and the Lempel-Ziv parsing procedure terminates.) By construction, the variable-length blocks B_1, B_2, ..., B_t produced by the Lempel-Ziv parsing of X are distinct, except that the last block B_t could be equal to one of the preceding ones.

EXAMPLE 2. The Lempel-Ziv parsing of X = (1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1) is

B_1 = 1, B_2 = 10, B_3 = 11, B_4 = 0, B_5 = 00, B_6 = 110, B_7 = 1    (1)

This parsing can also be accomplished via MATLAB. Here are the results of a MATLAB session that the reader can try:

    x = [1 1 0 1 1 0 0 0 1 1 0 1];
    y = LZparse(x);
    print_bitstrings(y)
    1
    10
    11
    0
    00
    110
    1

The MATLAB function LZparse(x) (the m-file of which is given in Section 5.6) gives the indices of the variable-length blocks in the Lempel-Ziv parsing of the datavector x. Using the MATLAB function print_bitstrings, we were able to print out the blocks in the parsing on the screen.

5.2 Lempel-Ziv Encoder

We suppose that the alphabet from which our datavector X = (X_1, X_2, ..., X_n) is formed is A = {0, 1, ..., k − 1}, where k is a positive integer. After obtaining the Lempel-Ziv parsing B_1, B_2, ..., B_t of X, the next step is to represent each block in the parsing as a pair of integers. The first block in the parsing, B_1, consists of a single symbol. It is represented as the pair (0, B_1). More generally, any block B_j of length one is represented as the pair (0, B_j). If the block B_j is of length greater than one, then it is represented as the pair (i, s), where s is the last symbol in B_j and B_i is the block in the parsing which coincides with the block obtained by removing s from the end of B_j. (By construction of the Lempel-Ziv parsing, there will always be such a block B_i.)

EXAMPLE 3. The sequence of pairs corresponding to the parsing (1) is

(0, 1), (1, 0), (1, 1), (0, 0), (4, 0), (3, 0), (0, 1)    (2)

For example, (4, 0) corresponds to the block 00 in the parsing. Since the last symbol of 00 is 0, the pair (4, 0) ends in 0. The 4 in the first entry refers to the fact that B_4 = 0 is the preceding block in the parsing which is equal to what we get by deleting the last symbol of 00.

For our next step, we replace each pair (i, s) by the integer k·i + s. Thus, the sequence of pairs (2) becomes the sequence of integers

2·0+1 = 1, 2·1+0 = 2, 2·1+1 = 3, 2·0+0 = 0, 2·4+0 = 8, 2·3+0 = 6, 2·0+1 = 1    (3)
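The pair and integer representations just described are easy to compute. The following MATLAB sketch (for illustration only; it is not one of the m-files of Section 5.6) maps the blocks of a parsing, held in a cell array, to the pairs (i, s) and then to the integers k·i + s:

    % Map LZ78 parsing blocks to pairs (i,s) and then to integers k*i+s.
    k = 2;
    blocks = {1, [1 0], [1 1], 0, [0 0], [1 1 0], 1};  % the parsing (1)
    I = zeros(1, numel(blocks));
    for j = 1:numel(blocks)
      B = blocks{j};
      s = B(end);                    % last symbol of the block
      i = 0;                         % i = 0 for blocks of length one
      if numel(B) > 1
        prefix = B(1:end-1);         % block with last symbol removed
        for m = 1:j-1                % locate earlier block equal to prefix
          if isequal(blocks{m}, prefix)
            i = m;
            break
          end
        end
      end
      I(j) = k*i + s;                % integer representing block j
    end
    disp(I)                          % prints 1 2 3 0 8 6 1, as in (3)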

To finish our description of the encoding process in Lempel-Ziv coding, let I_1, I_2, ..., I_t denote the integers corresponding to the blocks B_1, B_2, ..., B_t in the Lempel-Ziv parsing of the datavector X. Each integer I_j is expanded to base two, and these binary expansions are "padded" with zeroes on the left so that the overall length of the string of bits assigned to I_j is ⌈log_2(kj)⌉. To see why this many bits is necessary and sufficient, examine the largest that I_j can possibly be. Let (i, s) be the pair associated with I_j. Then the biggest that i can be is j − 1 and the biggest that s can be is k − 1. Thus the biggest that I_j can be is k·(j − 1) + k − 1 = kj − 1, and the number of bits in the binary expansion of kj − 1 is ⌈log_2(kj)⌉.

Let W_j be the string of bits of length ⌈log_2(kj)⌉ assigned to I_j as described in the preceding. Then the Lempel-Ziv encoder output is obtained by concatenating together the strings W_1, W_2, ..., W_t.

To illustrate, suppose a binary datavector has seven blocks B_1, B_2, ..., B_7 in its Lempel-Ziv parsing (such as in Example 2). These blocks are assigned, respectively, strings of codebits W_1, W_2, W_3, W_4, W_5, W_6, W_7 of lengths ⌈log_2(2)⌉ = 1 bit, ⌈log_2(4)⌉ = 2 bits, ⌈log_2(6)⌉ = 3 bits, ⌈log_2(8)⌉ = 3 bits, ⌈log_2(10)⌉ = 4 bits, ⌈log_2(12)⌉ = 4 bits, and ⌈log_2(14)⌉ = 4 bits. Therefore, any binary datavector with seven blocks in its Lempel-Ziv parsing results in an encoder output of length 1 + 2 + 3 + 3 + 4 + 4 + 4 = 21 codebits. In particular, for the datavector in Example 2, the seven strings W_1, ..., W_7 are (referring to (3)):

W_1 = 1
W_2 = 10
W_3 = 011
W_4 = 000
W_5 = 1000
W_6 = 0110
W_7 = 0001

Concatenating, we see that the encoder output from the Lempel-Ziv coding of the datavector in Example 2 is 110011000100001100001.

5.3 Lempel-Ziv Decoder

Suppose a datavector X with alphabet {0, 1, 2} was Lempel-Ziv encoded and the encoder output is:

001000010101010110000100000    (4)

Let us decode to get X. For an alphabet of size three, ⌈log_2(3j)⌉ codebits are allocated to the j-th block in the Lempel-Ziv parsing. This gives us the following table of codebit allocations:

    codebit allocation table
    parsing block number    # of codebits
            1                    2
            2                    3
            3                    4
            4                    4
            5                    4
            6                    5
            7                    5
            8                    5

Partitioning the encoder output (4) according to the allocations in the above table, we obtain the partition:

00, 100, 0010, 1010, 1011, 00001, 00000

Converting these to integer form, we get:

0, 4, 2, 10, 11, 1, 0

Dividing each of these integers by three and recording quotient and remainder in each case, we get the pairs

(0, 0), (1, 1), (0, 2), (3, 1), (3, 2), (0, 1), (0, 0)

Working backward from these pairs, we obtain the Lempel-Ziv parsing

0, 01, 2, 21, 22, 1, 0

and the datavector

X = (0, 0, 1, 2, 2, 1, 2, 2, 1, 0)
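The decoding steps just carried out can be mechanized as well. The sketch below (again for illustration, not one of the chapter's m-files) recovers the parsing and the datavector from the integer sequence, for alphabet size k, by reversing the map of Section 5.2:

    % Recover the LZ78 parsing from integers I(1),...,I(t), alphabet size k.
    k = 3;
    I = [0 4 2 10 11 1 0];           % integers from the worked example
    blocks = cell(1, numel(I));
    for j = 1:numel(I)
      i = floor(I(j)/k);             % quotient: index of the prefix block
      s = mod(I(j), k);              % remainder: last symbol of the block
      if i == 0
        blocks{j} = s;               % block of length one
      else
        blocks{j} = [blocks{i} s];   % earlier block extended by symbol s
      end
    end
    X = [blocks{:}]                  % prints 0 0 1 2 2 1 2 2 1 0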

5.4 Lempel-Ziv Parsing Tree

In some implementations of Lempel-Ziv coding, both encoder and decoder grow from scratch a tree called the Lempel-Ziv parsing tree. Here is the Lempel-Ziv parsing tree for the datavector in Example 2:

[Figure 1: Lempel-Ziv parsing tree for Example 2; a binary tree with root node 0 and nodes labelled 1 through 6.]

We explain the meaning of this tree. Label each left branch with a "1" and each right branch with a "0". For each node i (i = 1, ..., 6), write down the variable-length block consisting of the bits encountered along the path from the root node (labelled 0) to node i; this block B_i is then the i-th block in the Lempel-Ziv parsing of the datavector. For example, if we follow the path from node 0 to node 6, we see a left branch, a left branch, and a right branch, which converts to the block 110. Thus, the sixth block in the Lempel-Ziv parsing of our datavector is 110.

Let the datavector be X = (X_1, X_2, ..., X_n). The encoder grows the Lempel-Ziv parsing tree as follows. Suppose there are q distinct blocks in the Lempel-Ziv parsing, B_1, B_2, ..., B_q. Then the encoder grows trees T_1, T_2, ..., T_q. Tree T_1 consists of node 0, node 1, and a single branch going from node 0 to node 1 that is labelled with the symbol B_1 = X_1. For each i > 1, tree T_i is determined from tree T_{i−1} as follows:

(a) Remove B_1, ..., B_{i−1} from the beginning of X and let the resulting datavector be called X^(i).

(b) Starting at the root node of T_{i−1}, follow the path driven by X^(i) until a terminal node of T_{i−1} is reached. (The labels on the resulting path form a prefix of X^(i) which is one of the blocks B_j ∈ {B_1, B_2, ..., B_{i−1}}, and the terminal node reached is labelled j.)

(c) Let X be the next symbol in X^(i) to appear after B_j. Grow a branch from node j of T_{i−1}, label this branch with the symbol X, and label the new node at the end of this branch "node i". This new tree is T_i.

The decoder can also grow the Lempel-Ziv parsing tree as decoding of the compressed datavector proceeds from left to right. We leave it to the reader to see how that is done. Growing a Lempel-Ziv parsing tree allows the encoding and decoding operations in Lempel-Ziv coding to be done in a fast manner. Also, there are modifications of Lempel-Ziv coding (not discussed here) in which enhancements in data compression are obtained by making use of the structure of the parsing tree.
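One concrete way to grow the parsing tree (a sketch under our own array conventions, not taken from these notes) is to store, for each node p and each symbol s, the child of p along the branch labelled s:

    % Grow the LZ78 parsing tree of a datavector x over alphabet 0..k-1.
    % children(p+1, s+1) = child of node p along branch s, or 0 if absent.
    x = [1 1 0 1 1 0 0 0 1 1 0 1];    % the datavector of Example 2
    k = 2;
    children = zeros(1, k);           % row 1 is the root, node 0
    node = 0;                         % the walk starts at the root
    for m = 1:length(x)
      s = x(m);
      if children(node+1, s+1) > 0
        node = children(node+1, s+1); % symbol continues an old block
      else
        newnode = size(children, 1);      % next unused node number
        children(node+1, s+1) = newnode;  % grow branch: block complete
        children(newnode+1, :) = 0;       % give the new node an empty row
        node = 0;                         % next block starts at the root
      end
    end
    % Nodes 1,...,6 now correspond to blocks B_1,...,B_6 of Example 2;
    % the final symbol retraces B_7 = 1, which duplicates B_1.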

5.5 Redundancy of LZ78

We want to see how much better the Lempel-Ziv code is than the block codes of various orders. We shall do this by comparing the Lempel-Ziv codeword length for the datavector to the block entropies of the datavector introduced in Chapter 4. It makes sense to make this comparison because the block entropies tell us how well the best block codes do.

The simplest case, which we discuss first, is to compare Lempel-Ziv code performance to the first order entropy. Let X = (X_1, X_2, ..., X_n) denote the datavector to be compressed, and let LZ(X) denote the length of the codeword assigned by the Lempel-Ziv code to X. Comparing LZ(X) to the first order entropy H_1(X), one can derive a bound of the form

LZ(X) ≤ n H_1(X) + n ρ_n    (5)

The constant ρ_n, which depends only on the datavector length n, is called the first order redundancy, and its units are bits per data sample. The better a data compression algorithm is, the smaller the redundancy will be. The following result gives the first order redundancy of the Lempel-Ziv code.

RESULT. The first order redundancy

ρ_n = C · (log_2 log_2 n) / (log_2 n)    (6)

is achievable for the Lempel-Ziv code, where C is a positive constant that depends upon the size of the data alphabet. (In the preceding, we assume that the datavector length n is at least three, so that the redundancy is well-defined.)

INTERPRETATION. We introduce some notation which makes it more convenient to talk about redundancy. If {z_n} and {δ_n} are sequences of real numbers, we say that z_n is O(δ_n) if there is a positive constant D such that z_n ≤ D|δ_n| for all sufficiently large positive integers n. Using this notation, the RESULT says that the first order redundancy of the Lempel-Ziv code is O(log_2 log_2 n / log_2 n) (where n denotes the length of the datavector).

What does our redundancy result say? Recall that H_1(X) is a lower bound on the compression rate that results when one compresses X using the best memoryless code that can be designed for X. Thus, the RESULT tells us that the Lempel-Ziv code yields a compression rate on any datavector of length n no more than a constant times log_2 log_2 n / log_2 n bits per sample worse than the compression rate of the best memoryless code for that datavector. Since the quantity log_2 log_2 n / log_2 n is very small when n is large, we can achieve through Lempel-Ziv coding a compression performance approximately no worse than that achievable by the best memoryless code for the given datavector.

To show that the RESULT is true, we need the notion of unnormalized entropy. Let (Y_1, Y_2, ..., Y_m) be a datavector. (We allow the case in which each entry Y_i is itself a datavector; for example, the Y_i's may be blocks arising from a Lempel-Ziv parsing.) The unnormalized entropy H*(Y_1, ..., Y_m) of the datavector (Y_1, ..., Y_m) is defined to be m, the length of the datavector, times the first order entropy H_1(Y_1, ..., Y_m) of the datavector. This gives us the formula

H*(Y_1, ..., Y_m) = Σ_{i=1}^m −log_2 p(Y_i)    (7)

where p is the probability distribution on the set of entries of the datavector which assigns to each entry Y the probability p(Y) defined by

p(Y) = #{1 ≤ i ≤ m : Y_i = Y} / m

(In other words, p is the first-order empirical distribution for the datavector (Y_1, ..., Y_m).)
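To make formula (7) concrete, here is a short MATLAB computation (illustrative only) of H* for a list of entries, with the entries held as strings so that variable-length blocks can be compared:

    % Unnormalized entropy H*(Y_1,...,Y_m) = sum_i -log2 p(Y_i), where
    % p is the empirical distribution of the entries.
    Y = {'1','10','11','0','00','110','1'};  % blocks of Example 2
    m = numel(Y);
    Hstar = 0;
    for i = 1:m
      count = sum(strcmp(Y, Y{i}));   % number of entries equal to Y_i
      Hstar = Hstar - log2(count/m);  % add -log2 p(Y_i)
    end
    Hstar                             % about 17.65 bits for this parsing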
In this argument, fix an arbitrary datavector X = (X_1, X_2, ..., X_n), and let (B_1, B_2, ..., B_t) be the Lempel-Ziv parsing of X. From Exercise 4 at the end of this chapter, we have the following inequality:

H*(B_1, B_2, ..., B_t) ≤ H*(X_1, ..., X_n) + H*(|B_1|, |B_2|, ..., |B_t|)

where |B_i| denotes the length of the block B_i. Since the blocks B_1, B_2, ..., B_{t−1} are distinct, we know that

(t − 1) log_2(t − 1) = H*(B_1, ..., B_{t−1}) ≤ H*(B_1, ..., B_t)

Also,

LZ(X) = Σ_{i=1}^t ⌈log_2(ki)⌉

where k is the size of the data alphabet. Expanding out the right side of the preceding equation, one can see that there is a constant c_1 such that

LZ(X) ≤ (c_1 + k)t + (t − 1) log_2(t − 1)

for all datavectors X. From Exercise 6 at the end of the chapter,

H*(|B_1|, ..., |B_t|) ≤ t log_2(1 + log_e n) + Σ_{i=1}^t log_2 |B_i|

By concavity of the logarithm function,

Σ_{i=1}^t log_2 |B_i| ≤ t log_2( (Σ_{i=1}^t |B_i|) / t ) = t log_2(n/t)

Combining these bounds (and recalling that H*(X_1, ..., X_n) = n H_1(X)), we get

LZ(X)/n ≤ H_1(X) + ρ(X)

where

ρ(X) = (c_1 + k)(t/n) + (t/n) log_2(1 + log_e n) + (log_2(n/t)) / (n/t)    (8)

By Exercise 8 at the end of the chapter, there is a constant c_2 such that

t log_2 n ≤ c_2 n

Applying this to the terms on the right side of (8), it is seen that

ρ(X) = O(1/log_2 n) + O(log_2 log_2 n / log_2 n) + O(log_2 log_2 n / log_2 n)

The first term is of smaller order than the other two, so each of the three terms is O(log_2 log_2 n / log_2 n). We have achieved the bound (5) with ρ_n given by (6). The RESULT is proved.
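To get a feel for how slowly this redundancy bound decays, one can tabulate log_2 log_2 n / log_2 n for a few datavector lengths (taking C = 1 purely for illustration):

    % Decay of the LZ78 redundancy bound log2(log2 n)/log2 n (with C = 1).
    n = [1e3 1e6 1e9 1e12];
    rho = log2(log2(n)) ./ log2(n)
    % rho is roughly 0.33, 0.22, 0.16, 0.13: small, but slowly decaying.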

We now want to compare the compression performance of the Lempel-Ziv code to the performance of block codes of an arbitrary order j. Consider an arbitrary datavector X = (X_1, ..., X_n) whose length n is a multiple of j. By a complicated argument similar to the argument given above for j = 1 (which we omit), it can be shown that there is a constant C_j such that

LZ(X)/n ≤ H_j(X) + C_j · (log_2 log_2 n) / (log_2 n)    (9)

The second term on the right is the j-th order redundancy of the Lempel-Ziv code. In other words, the relation (9) tells us that for any j, the j-th order redundancy of the Lempel-Ziv code is O(log_2 log_2 n / log_2 n), which becomes very small as n gets large. Recall from Chapter 4 that H_j(X) is a lower bound on the compression rate of the best j-th order block code for X. We conclude that no matter how large the order of the block code that one attempts to use, the Lempel-Ziv algorithm will yield a compression rate on an arbitrary datavector approximately no worse than that of the block code, provided the datavector is long enough relative to the order of the block code. Hence, one loses nothing in compression rate by using the Lempel-Ziv code instead of a block code.

Also, one is able to compress a datavector faster via the Lempel-Ziv code than via block coding. To see this, one need only look at memoryless codes. For a datavector of length n, the overall time for best compression of the datavector via a memoryless code is proportional to n^2. (The overall compression time in this case is the time it takes to design the Huffman code for the datavector plus the time it takes to compress the datavector with the Huffman code; since the first time is proportional to n^2 and the second time is proportional to n, the overall compression time is proportional to n^2.) On the other hand, if the Lempel-Ziv code is implemented properly, it takes time proportional to n to compress any datavector of length n. (No time is wasted on design; the Lempel-Ziv code structure is the same for every datavector.) We conclude:

- Lempel-Ziv coding yields a compression performance as good as or better than the best block codes (provided the datavector is long enough).

- Lempel-Ziv coding yields faster compression of the data than does coding via the best block codes, because no time is wasted on design.

The Lempel-Ziv code has been our first example of a code which does at least as well as the block codes, in the sense that its redundancy of every order becomes small with large datavector length. Such codes are called universal codes. Although the Lempel-Ziv code is a universal code, there are universal codes whose redundancy goes to zero faster with increasing datavector length than does the redundancy of the Lempel-Ziv code. This point is discussed further in Chapter 15.

5.6 MATLAB m-files

We present two MATLAB programs in connection with Lempel-Ziv coding: LZparse and LZcodelength.

5.6.1 LZparse.m

Here is the m-file for the MATLAB function LZparse:

    %This m-file is called LZparse.m
    %It accomplishes Lempel-Ziv parsing of a binary
    %datavector
    %x is a binary datavector
    %y = LZparse(x) is a vector consisting of the indices
    %of the blocks in the Lempel-Ziv parsing of x
    %
    function y = LZparse(x)
    N = length(x);
    dict = [];                          % indices of the blocks found so far
    lengthdict = 0;                     % number of samples parsed so far
    while lengthdict < N
      i = lengthdict + 1;
      k = 0;
      while k == 0
        v = x(lengthdict+1:i);          % candidate block
        j = bitstring_to_index(v);      % index of the candidate block
        A = (dict ~= j);
        k = prod(A);                    % k = 1 iff v is a new block
        if i == N
          k = 1;                        % end of data: accept the last block
        else
          i = i + 1;
        end
      end
      dict = [dict j];
      lengthdict = lengthdict + length(v);
    end
    y = dict;
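The helpers bitstring_to_index and print_bitstrings called above are not listed in these notes, and their exact indexing convention is not shown. For readers who want to run LZparse, one consistent choice (our assumption, not the course's m-file) is to map a bitstring of length m to 2^m plus its value in base two, which is one-to-one across all bitstrings:

    % Hypothetical stand-in for the course's bitstring_to_index helper.
    % Any one-to-one map from bitstrings to positive integers will do;
    % this one sends v (a row vector of 0s and 1s, length m) to
    % 2^m + (value of v read in base two).
    function j = bitstring_to_index(v)
    m = length(v);
    j = 2^m + sum(v .* 2.^(m-1:-1:0));

A matching print_bitstrings would invert this map: subtract off the largest power of two not exceeding the index, and print the remainder in base two, padded to m bits.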

The function LZparse was illustrated in Example 2.

5.6.2 LZcodelength.m

Here is the m-file for the MATLAB function LZcodelength:

    %This m-file is named LZcodelength.m
    %x = a binary datavector
    %LZcodelength(x) = length in codebits of the encoder
    %output resulting from the Lempel-Ziv coding of x
    %
    function y = LZcodelength(x)
    u = LZparse(x);
    t = length(u);                % number of blocks in the parsing
    S = 0;
    for i = 1:t
      S = S + ceil(log2(2*i));    % block i is assigned ceil(log2(2i)) bits
    end
    y = S;

To illustrate the MATLAB function LZcodelength, we performed the following MATLAB session:

    x = [1 1 0 1 1 0 0 0 1 1 0 1];
    LZcodelength(x)
    21

As a result of this session, we computed the length of the codeword resulting from the Lempel-Ziv encoding of the datavector in Example 2, and "21" was printed out on the screen. This is the correct length of this codeword, as computed earlier in these notes.

5.7 Exercises

1. What is the minimum number of variable-length blocks that can appear in the Lempel-Ziv parsing of a binary datavector of length 28? What is the maximum number?

2. Find the binary codeword that results when the datavector 11101011100101001111 is encoded using the Lempel-Ziv code.

3. The alphabet of a datavector is {0, 1, 2}. The codeword 10100000001101010100010110010000 results when the datavector is Lempel-Ziv encoded. Find the datavector.

4. Let X = (X_1, X_2, ..., X_n) be a datavector and let B_1, B_2, ..., B_t be variable-length blocks into which X is partitioned (from left to right). Show that

H*(B_1, B_2, ..., B_t) ≤ H*(X) + H*(|B_1|, |B_2|, ..., |B_t|)    (10)

where |B_i| is the length of B_i. (Inequality (10) can be proved by appropriately grouping the terms that appear in the summation giving the unnormalized entropy H*(B_1, B_2, ..., B_t); see formula (7).)

5. Let (X_1, X_2, ..., X_n) be a datavector and let A be the data alphabet. Show that

H*(X_1, X_2, ..., X_n) ≤ Σ_{i=1}^n −log_2 p(X_i)

for every probability distribution p on A. (Hint: Use the fact that

Σ_{a∈A} p_1(a) log_2 (p_1(a)/p_2(a)) ≥ 0

for any two probability distributions p_1, p_2 on A; see Exercise 1 of Chapter 3.)

6. Consider a datavector (X_1, X_2, ..., X_n) in which each sample X_i is a positive integer less than or equal to N. Show that

H*(X_1, X_2, ..., X_n) ≤ n log_2(1 + log_e N) + Σ_{i=1}^n log_2 X_i

(Hint: First use the result of Exercise 5 with the probability distribution

p(j) = (1/j) / (1 + (1/2) + (1/3) + ... + (1/N)),  j = 1, ..., N

Then use the inequality

(1/2) + (1/3) + ... + (1/N) ≤ ∫_1^N (1/x) dx = log_e N

which can be seen by approximating the area under the curve y = 1/x by a sum of areas of rectangles.)

7. Let A be an arbitrary finite alphabet. Define L_lz(n) to be the minimum Lempel-Ziv codeword length assigned to the datavectors of length n over the alphabet A. Show that

lim_{n→∞} (log_2 L_lz(n)) / (log_2 n) = 1/2

This property points out a hidden defect of the Lempel-Ziv code. Because the limit on the left is greater than zero, there exist certain datavectors which the Lempel-Ziv code does not compress very well.

8. Consider all datavectors of all lengths over a fixed finite alphabet A. If X is such a datavector, let t(X) denote the number of variable-length blocks that appear in the Lempel-Ziv parsing of X. Show that there is a constant M (depending on the size of the alphabet A) such that for any integer n ≥ 2 and any datavector X of length n,

t(X) ≤ M n / log_2 n

(Hint: Let t = t(X) and let B_1, B_2, ..., B_{t−1} be the first t − 1 variable-length blocks in the Lempel-Ziv parsing of X. Let |B_i| denote the length of block B_i. In the inequality

|B_1| + |B_2| + ... + |B_{t−1}| ≤ n

find a lower bound for the left-hand side using the fact that the B_i's are distinct.)

9. We discuss a variant of the Lempel-Ziv code which yields shorter codewords for some datavectors than does LZ78. Encoding is accomplished via three steps. In Step 1, we partition the datavector (X_1, ..., X_n) into variable-length blocks in which the first block is of length one, and each succeeding block (except possibly the last block) is the shortest prefix of the rest of the datavector which is not seen windowed in the datavector as we slide the window to the left. To illustrate, the datavector 000110 is partitioned into

0, 001, 10    (11)

in Step 1. (On the other hand, LZ78 partitions this datavector into four blocks instead of three: 0, 00, 1, 10.) In Step 2, each block B in the sequence of blocks from Step 1 is represented as a triple (i, j, k) in which k is the last symbol in B, i is the length of the block B, and j is the smallest integer such that, if we look at the i − 1 samples in the datavector starting with sample X_j, we will see windowed the block obtained by removing the last symbol from B. (Take j = 0 if B has length one.) For example, for the blocks in (11), Step 2 gives us the triples

(1, 0, 0), (3, 1, 1), (2, 4, 0)

In Step 3, the sequence of triples from Step 2 is converted into a binary codeword. There is a clever way to do this which we shall not discuss here. All we need to know for the purposes of this exercise is that if there are t triples and the datavector length is n, then the approximate length of the binary codeword is t log_2 n.

(a) Show that there are infinitely many binary datavectors for which Step 1 yields a partition of the datavector into 5 blocks.

(b) Let X^(n) be the datavector consisting of n zeroes. Let LZ*(X^(n)) be the length of the binary codeword which results when X^(n) is encoded using this variant of the Lempel-Ziv code. Show that LZ*(X^(n)) / LZ(X^(n)) converges to zero as n → ∞.

References

[1] J. Ziv and A. Lempel, "Compression of individual sequences via variable-rate coding," IEEE Trans. Inform. Theory, vol. 24, pp. 530-536, 1978.