Data Compression Techniques

1 Data Compression Techniques
Part 2: Text Compression
Lecture 6: Dictionary Compression
Juha Kärkkäinen

2 Dictionary Compression

The compression techniques we have seen so far replace individual symbols with variable-length codewords. In dictionary compression, variable-length substrings are replaced by short, possibly even fixed-length, codewords. Compression is achieved by replacing long strings with shorter codewords.

The general scheme is as follows:
- The dictionary D is a collection of strings, often called phrases. For completeness, the dictionary includes all single symbols.
- The text T is parsed into a sequence of phrases: T = T_1 T_2 ... T_z, where T_i ∈ D. The sequence is called a parsing or a factorization of T with respect to D.
- The text is encoded by replacing each phrase T_i with a codeword that acts as a pointer to the dictionary.
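
To make the parsing step concrete, here is a minimal Python sketch (not from the lecture) of one natural parsing strategy, greedy longest-match parsing against a fixed dictionary. The set-of-strings dictionary, the max_phrase_len bound and the toy dictionary are illustrative assumptions.

```python
def greedy_parse(text, dictionary, max_phrase_len):
    """Greedily parse text into dictionary phrases, longest match first.

    Assumes the dictionary contains all single symbols, so progress is
    always possible."""
    phrases = []
    i = 0
    while i < len(text):
        # Try the longest candidate first; single symbols guarantee progress.
        for l in range(min(max_phrase_len, len(text) - i), 0, -1):
            if text[i:i + l] in dictionary:
                phrases.append(text[i:i + l])
                i += l
                break
    return phrases

# Toy dictionary: a few "words" plus all single symbols of the text.
D = {"the", "cat", "t", "h", "e", "c", "a", " "}
print(greedy_parse("the cat", D, 3))   # ['the', ' ', 'cat']
```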

3 Example

Here is a simple static dictionary encoding for English text:
- The dictionary consists of some set of English words plus the individual symbols.
- Compute the frequencies of the words in some corpus of English texts.
- Compute the frequencies of symbols in the corpus from which the dictionary words have been removed.
- Number the words and symbols in descending order of their frequencies.
- To encode a text, replace each dictionary word and each symbol that does not belong to a dictionary word with its corresponding number.
- Encode the sequence of numbers using γ coding.
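
γ coding was covered earlier in the course; as a reminder, here is a small sketch of Elias γ encoding (my own helper, assuming the numbers to be encoded start from 1).

```python
def gamma_encode(n):
    """Elias gamma code of n >= 1: unary length prefix followed by bin(n)."""
    assert n >= 1
    binary = bin(n)[2:]                      # binary representation of n
    return "0" * (len(binary) - 1) + binary

print(gamma_encode(1), gamma_encode(5), gamma_encode(9))   # 1 00101 0001001
```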

4 Lempel-Ziv compression

In 1977 and 1978, Abraham Lempel and Jacob Ziv published two adaptive dictionary compression algorithms that soon came to dominate practical text compression. Numerous variants have been published and implemented, and they are still the most commonly used algorithms in general-purpose compression tools.

The common feature of the two algorithms and all their variants is that the dictionary consists of substrings of the already processed part of the text. This means that the dictionary adapts to the text.

The two algorithms are known as LZ77 and LZ78, and most related methods can be categorized as a variant of one or the other. The primary difference is the encoding of the phrases:
- LZ77 uses direct pointers to the preceding text.
- LZ78 uses pointers to a separate dictionary.

5 LZ78

The original LZ78 encodes a string T = t_0 t_1 ... t_{n-1} as follows:
- The dictionary consists of phrases numbered from 0 upwards: D = {Z_0, Z_1, Z_2, ...}. Each new phrase inserted into the dictionary gets the next free number. Initially, D = {Z_0 = ε} (ε = empty string).
- Suppose we have so far computed the parsing T_1 ... T_{j-1} for T[0..i) and the next phrase T_j starts at position i. Let Z_k ∈ D be the longest phrase in the dictionary that is a prefix of T[i..n-2]. The next phrase is T_j = T[i..i+|Z_k|] = Z_k t_{i+|Z_k|}, i.e., Z_k plus one symbol, and it is inserted into the dictionary as Z_j.
- The phrase T_j is encoded as the pair ⟨k, t_{i+|Z_k|}⟩. Using fixed-length codes, the pair needs ⌈log j⌉ + ⌈log σ⌉ bits.

Example: Let Σ = {a, b, c, d} and T = badadadabaab.

  index        0   1      2      3      4      5      6      7
  phrase       ε   b      a      d      ad     ada    ba     ab
  encoding         ⟨0,b⟩  ⟨0,a⟩  ⟨0,d⟩  ⟨2,d⟩  ⟨4,a⟩  ⟨1,a⟩  ⟨2,b⟩
  code length      2      3      4      4      5      5      5
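
Below is a minimal Python sketch of the LZ78 encoder described above. It stores the dictionary phrases as plain strings in a hash map for clarity; a real implementation would use a trie so that the longest-prefix search is not quadratic.

```python
def lz78_encode(text):
    """Encode text as a list of (index, symbol) pairs, as on the slide."""
    dictionary = {"": 0}                      # Z_0 = empty string
    output = []
    i = 0
    while i < len(text):
        # Longest dictionary phrase that is a prefix of T[i..n-2],
        # so that one symbol is always left for the explicit extension.
        phrase, k, j = "", 0, i
        while j < len(text) - 1 and text[i:j + 1] in dictionary:
            phrase = text[i:j + 1]
            k = dictionary[phrase]
            j += 1
        symbol = text[i + len(phrase)]
        dictionary[phrase + symbol] = len(dictionary)    # insert Z_j
        output.append((k, symbol))
        i += len(phrase) + 1
    return output

print(lz78_encode("badadadabaab"))
# [(0, 'b'), (0, 'a'), (0, 'd'), (2, 'd'), (4, 'a'), (1, 'a'), (2, 'b')]
```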

6 LZW

LZW is a simple optimization of LZ78 used, e.g., in the Unix tool compress.
- Initially, the dictionary D contains all individual symbols: D = {Z_1, ..., Z_σ}.
- Suppose the next phrase T_j starts at position i. Let Z_k ∈ D be the longest phrase in the dictionary that is a prefix of T[i..n). Now the next text phrase is T_j = Z_k, and the phrase added to the dictionary is Z_{σ+j} = T_j t_{i+|T_j|}.
- The phrase T_j is encoded with k, requiring ⌈log(σ + j − 1)⌉ bits.

Omitting the symbol codes saves space in practice, even though the index codes can be longer and the phrases can be shorter.

Example: Let Σ = {a, b, c, d} and T = badadadabaab.

  text phrase   b   a   d   ad   ada   ba   a   b
  encoding      2   1   4   6    8     5    1   2
  code length   2   3   3   3    3     4    4   4

  dict. phrase  a   b   c   d   ba   ad   da   ada   adab   baa   ab
  index         1   2   3   4   5    6    7    8     9      10    11
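
A corresponding LZW encoder sketch, again using a hash map of full strings for clarity; assigning the initial indices by sorting the alphabet is my own convention.

```python
def lzw_encode(text, alphabet):
    """Encode text as a list of dictionary indices (indices start at 1)."""
    dictionary = {s: k + 1 for k, s in enumerate(sorted(alphabet))}
    output = []
    i = 0
    while i < len(text):
        # Longest dictionary phrase that is a prefix of T[i..n).
        j = i + 1
        while j < len(text) and text[i:j + 1] in dictionary:
            j += 1
        phrase = text[i:j]
        output.append(dictionary[phrase])
        if j < len(text):                        # add phrase + next symbol
            dictionary[phrase + text[j]] = len(dictionary) + 1
        i = j
    return output

print(lzw_encode("badadadabaab", "abcd"))   # [2, 1, 4, 6, 8, 5, 1, 2]
```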

7 There is a tricky detail in decoding LZW:
- In order to insert the phrase Z_{σ+j} = T_j t_{i+|T_j|} into the dictionary, the decoder needs to know the symbol t_{i+|T_j|}. For the encoder this is not a problem, but the decoder only knows that t_{i+|T_j|} is the first symbol of the next phrase T_{j+1}. Thus the insertion is delayed until the next phrase has been decoded.
- The encoder inserts Z_{σ+j} into the dictionary without a delay and has the option of choosing T_{j+1} = Z_{σ+j}. If this happens, the decoder is faced with a reference to a phrase that is not yet in the dictionary!
- However, in this case the decoder knows that the unknown symbol t_{i+|T_j|} is the first symbol of Z_{σ+j}, which is the same as the first symbol of T_j. Given t_{i+|T_j|}, the decoder can set Z_{σ+j} = T_j t_{i+|T_j|}, insert it into the dictionary, and decode T_{j+1} normally.

The phrase T_{j+1} = Z_{σ+j} is problematic because it is self-referential.

Example: In our example, the phrase T_5 = Z_8 = ada is self-referential.
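
The decoder sketch below, a companion to the encoder sketch above, shows the delayed insertion and the handling of the self-referential case.

```python
def lzw_decode(codes, alphabet):
    """Decode the index sequence produced by lzw_encode."""
    dictionary = {k + 1: s for k, s in enumerate(sorted(alphabet))}
    output, previous = [], None
    for k in codes:
        if k in dictionary:
            phrase = dictionary[k]
        else:
            # Self-referential case: the referenced phrase is the previous
            # phrase extended by its own first symbol.
            phrase = previous + previous[0]
        if previous is not None:
            # Delayed insertion of the previous phrase's extension.
            dictionary[len(dictionary) + 1] = previous + phrase[0]
        output.append(phrase)
        previous = phrase
    return "".join(output)

print(lzw_decode([2, 1, 4, 6, 8, 5, 1, 2], "abcd"))   # badadadabaab
```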

8 LZ77

The original LZ77 algorithm works as follows:
- A phrase T_j starting at position i is encoded as a triple of the form ⟨distance, length, symbol⟩. A triple ⟨d, l, s⟩ means that T_j = T[i..i+l] = T[i−d..i−d+l) s. In other words, the string T[i..i+l) of length l has another occurrence d positions earlier in the text.
- The values d and l should satisfy d ∈ [1..d_max] and l ∈ [0..l_max]. In other words, the earlier occurrence should be no longer than l_max and should start within a window T[i−d_max..i−1].
- The algorithm searches the window for the longest possible match under the above constraints, i.e., it tries to maximize l.
- The triple is encoded with fixed-length codes using ⌈log d_max⌉ + ⌈log(l_max + 1)⌉ + ⌈log σ⌉ bits.

The decoder is really simple and fast, as it can just copy each phrase from the already decoded part of the text. Even the self-referential case l > d, where the strings T[i..i+l) and T[i−d..i−d+l) overlap, just requires the copying to happen in left-to-right order.
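
A straightforward (and deliberately slow) encoder sketch with exhaustive window search; it reproduces the parsing of the example on the next slide. Breaking ties towards the smallest distance and reserving the last text symbol for the explicit literal are my own choices.

```python
def lz77_encode(text, d_max, l_max):
    """Greedy LZ77 parsing into (distance, length, symbol) triples."""
    triples, i, n = [], 0, len(text)
    while i < n:
        best_d, best_l = 1, 0
        for d in range(1, min(d_max, i) + 1):
            l = 0
            # The earlier occurrence may run past position i (l > d),
            # which is why text[i - d + l] is compared symbol by symbol.
            while l < l_max and i + l < n - 1 and text[i + l] == text[i - d + l]:
                l += 1
            if l > best_l:
                best_d, best_l = d, l
        triples.append((best_d, best_l, text[i + best_l]))
        i += best_l + 1
    return triples

print(lz77_encode("badadadabaab", 8, 8))
# [(1, 0, 'b'), (1, 0, 'a'), (1, 0, 'd'), (2, 5, 'b'), (2, 1, 'a'), (1, 0, 'b')]
```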

9 Example

Let Σ = {a, b, c, d}, T = badadadabaab and d_max = l_max = 8.

  phrase     b        a        d        adadab   aa       b
  encoding   ⟨1,0,b⟩  ⟨1,0,a⟩  ⟨1,0,d⟩  ⟨2,5,b⟩  ⟨2,1,a⟩  ⟨1,0,b⟩

Later variants have improved the encoding of the phrases:
- Avoid coding the extra symbol every time by replacing the triples with two types of codes, ⟨length, distance⟩ and ⟨symbol⟩ (with some indicator to specify the type of the code).
- Use variable-length codes instead of fixed-length codes. Variations range from ad hoc schemes to semiadaptive Huffman and even tANS coding.
- The most advanced compressors can further have a complex model to estimate the probability of each bit in the encoding and then use arithmetic coding to encode them.

Many popular compression programs, such as zip, gzip and 7zip, use these types of LZ77 variants.
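
The corresponding decoder sketch copies symbol by symbol in left-to-right order, so the self-referential case l > d described on the previous slide works without any special handling.

```python
def lz77_decode(triples):
    """Reconstruct the text from (distance, length, symbol) triples."""
    out = []
    for d, l, s in triples:
        start = len(out) - d
        for k in range(l):            # may read symbols appended in this loop
            out.append(out[start + k])
        out.append(s)
    return "".join(out)

triples = [(1, 0, 'b'), (1, 0, 'a'), (1, 0, 'd'),
           (2, 5, 'b'), (2, 1, 'a'), (1, 0, 'b')]
print(lz77_decode(triples))           # badadadabaab
```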

10 LZ77 can also be optimized for encoding speed by replacing the exhaustive search of the window with an efficient data structure. Many different data structures, including binary trees, hash tables and suffix trees, have been used for this purpose. Fast searching enables larger window sizes or even unbounded distances and lengths.

Increasing the window size can lead to longer phrases and thus better compression. On the other hand, the compression can suffer from the longer codes needed for larger values.
- With the fixed-length codes of the original LZ77, this is another reason to use a small window.
- With variable-length codes, a small upper bound is not important if a smaller distance has a shorter code. Then the longest possible phrase is not always the optimal choice. A recent algorithm by Ferragina, Nitto and Venturini (SODA 2009) solves this optimization problem efficiently for many encoding schemes, i.e., it finds the parsing that minimizes the total length of the final encoding.

11 LZFG

As a final example of LZ-type compression methods, let us briefly look at LZFG, which is a kind of hybrid of the LZ77 and LZ78 algorithms:
- LZFG is like LZ77 but with the restriction that the earlier occurrence of each phrase has to begin at a previous phrase boundary. There is no restriction on the end of the phrase.
- Each phrase is encoded as a ⟨length, distance⟩ pair, but the distance now points to a separate array recording the positions of the phrase boundaries. In this sense, LZFG is an LZ78-type method.

Using a large or unbounded window size is easier with LZFG than with LZ77, because the distance values are smaller and the data structures for finding phrases are simpler.

Example: Let T = badadadabaab. Assume two types of codes, ⟨length, distance⟩ and ⟨symbol⟩, and no length or distance limits.

  phrase     b    a    d    adada   ba     a    b
  encoding   ⟨b⟩  ⟨a⟩  ⟨d⟩  ⟨5,2⟩   ⟨2,4⟩  ⟨a⟩  ⟨b⟩
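
A rough LZFG-style encoder sketch based on the description above. The record format with ('sym', c) and ('copy', length, distance), the rule of emitting a copy only when the match has length at least 2, and the interpretation of the distance as the number of phrase boundaries counted backwards are my own illustrative assumptions; with them the sketch reproduces the example parsing.

```python
def lzfg_encode(text):
    """Greedy LZFG-style parsing: copied phrases must start at an earlier
    phrase boundary; returns ('sym', c) and ('copy', length, distance) records."""
    boundaries, output = [], []       # starting positions of previous phrases
    i, n = 0, len(text)
    while i < n:
        best_l, best_b = 0, 0
        for b, pos in enumerate(boundaries):
            l = 0
            while i + l < n and text[pos + l] == text[i + l]:
                l += 1                # pos < i, so self-referential overlap is fine
            if l > best_l:
                best_l, best_b = l, b
        boundaries.append(i)
        if best_l >= 2:               # emit a copy; distance = boundaries back
            output.append(('copy', best_l, len(boundaries) - 1 - best_b))
            i += best_l
        else:                         # match too short to pay off: emit a symbol
            output.append(('sym', text[i]))
            i += 1
    return output

print(lzfg_encode("badadadabaab"))
# [('sym', 'b'), ('sym', 'a'), ('sym', 'd'),
#  ('copy', 5, 2), ('copy', 2, 4), ('sym', 'a'), ('sym', 'b')]
```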

12 An important attribute of Lempel-Ziv compression methods is the size of their effective dictionary, i.e., the number of possible distinct phrases. For a text of length n with a parsing of size z, the effective dictionary sizes are bounded by:

  LZ78              (z + 1)σ
  LZW               σ + z
  LZ77 (original)   d_max (l_max + 1) σ
  LZ77 (variant)    n² + σ
  LZFG              zn + σ

In general, the effective dictionary size of LZ78-type algorithms grows slowly with n, while LZ77-type algorithms can have a much faster growth rate.

A larger dictionary usually leads to longer and thus fewer phrases. This does not necessarily mean better compression, because the code sizes increase too, but fewer phrases can speed up decoding. A faster dictionary growth rate can improve compression significantly on some highly compressible texts (exercise).

13 Grammar Compression

A special type of semiadaptive dictionary compression is grammar compression, which represents a text as a context-free grammar.

Example: T = a rose is a rose is a rose

  S → ABB
  A → a␣rose
  B → ␣is␣A        (␣ denotes the space symbol)

The grammar should generate exactly one string. Such a grammar is called a straight-line grammar because of the following properties:
- There are no branches, i.e., each non-terminal is the left-hand side of only one rule. Otherwise, multiple strings could be generated.
- There are no loops, i.e., no cyclic dependencies between non-terminals. Otherwise, infinite strings could be generated.

A straight-line grammar in Chomsky normal form is called a straight-line program.
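
A tiny sketch of expanding a straight-line grammar, using the example grammar above; the rules-as-dict representation and the use of multi-character terminal strings are my own conventions.

```python
def expand(symbol, rules):
    """Recursively expand a symbol of a straight-line grammar into its string."""
    if symbol not in rules:                      # terminal (string of symbols)
        return symbol
    return "".join(expand(s, rules) for s in rules[symbol])

rules = {
    "S": ["A", "B", "B"],
    "A": ["a rose"],
    "B": [" is ", "A"],
}
print(expand("S", rules))    # a rose is a rose is a rose
```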

14 The size of a grammar is the total length of the right-hand sides. The smallest grammar problem, computing the smallest straight-line grammar that generates a given string, is NP-hard.

Nevertheless, there are many algorithms for constructing small grammars for a text, for example:
- LZ78 parsing is easily transformed into a grammar with one rule for each phrase.
- The best provable approximation ratio O(log(n/g)), where g is the size of the smallest grammar, has been achieved by algorithms that transform an LZ77 parsing into a grammar.
- Greedy algorithms add one rule at a time as long as they can find a rule that reduces the size of the grammar. The right-hand side of each new rule is chosen greedily by some criterion, for example:
  - the longest substring with at least two occurrences,
  - the most frequent substring of length at least two,
  - the string that produces the biggest reduction in size.

15 Re-Pair

Re-Pair is a greedy grammar compression algorithm that operates as follows:
1. Find the pair of symbols XY that is the most frequent in the text T. If no pair occurs twice in T, stop.
2. Create a new non-terminal Z and add the rule Z → XY to the grammar. Replace all occurrences of XY in T by Z. Go to step 1.

The whole process can be performed in linear time using suitable data structures. The details are omitted, but the key observation is that, if n_XY is the number of occurrences of the most frequent pair XY in a given step, then the replacement reduces the size of the grammar by n_XY − 2. Thus we can spend O(n_XY) time to perform the step.

Re-Pair often achieves the smallest grammar among practical algorithms.
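
A compact Re-Pair sketch. It rescans the whole sequence in every round (quadratic), whereas the linear-time version mentioned above needs occurrence lists and a priority queue; the pair counting here also ignores the subtlety of overlapping occurrences. Representing new non-terminals as integers 0, 1, 2, ... is my own choice, not from the lecture.

```python
from collections import Counter

def repair(text):
    """Greedy Re-Pair: returns (rules, final sequence).

    rules[z] = (X, Y) means non-terminal z expands to the pair XY."""
    seq, rules = list(text), []
    while len(seq) >= 2:
        pairs = Counter(zip(seq, seq[1:]))
        (x, y), count = pairs.most_common(1)[0]
        if count < 2:                           # no pair occurs twice: stop
            break
        rules.append((x, y))
        z = len(rules) - 1                      # the new non-terminal
        out, i = [], 0
        while i < len(seq):                     # replace all occurrences of XY
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (x, y):
                out.append(z)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return rules, seq

def expand_symbol(sym, rules):
    """Expand a symbol back to text (integers are non-terminals)."""
    if isinstance(sym, int):
        x, y = rules[sym]
        return expand_symbol(x, rules) + expand_symbol(y, rules)
    return sym

T = "singing do wah diddy diddy dum diddy do"
rules, seq = repair(T)
print("".join(expand_symbol(s, rules) for s in seq) == T)   # True
```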

16 Example

T = singing do wah diddy diddy dum diddy do (␣ below denotes the space symbol)

  rule added   text after replacement
               singing do wah diddy diddy dum diddy do
  A → ␣d       singingAo wahAiddyAiddyAumAiddyAo
  B → dd       singingAo wahAiByAiByAumAiByAo
  C → Ai       singingAo wahCByCByAumCByAo
  D → By       singingAo wahCDCDAumCDAo
  E → CD       singingAo wahEEAumEAo
  F → in       sFgFgAo wahEEAumEAo
  G → Ao       sFgFgG wahEEAumEG
  H → Fg       sHHG wahEEAumEG

Here is a simple encoding of the final result:
- The number r of rules and the length z of the final text in γ code.
- The right-hand sides of the rules, using 2⌈log(σ + i − 1)⌉-bit fixed-length codes to encode the ith rule.
- The compressed text using ⌈log(σ + r)⌉-bit fixed-length codes.

Better compression can be achieved with a more sophisticated encoding.

17 A common feature of most dictionary compression algorithms is the asymmetry of encoding and decoding:
- The encoder needs to do a lot of work in choosing the phrases or rules.
- The decoder only needs to replace each phrase, so it is often simple and fast.
- LZ77-type decoders are particularly simple and fast, as they maintain no dictionary other than the already decoded part of the text. LZ78-type and grammar-based decoders need some extra effort in constructing and using the dictionary.

The best possible compression may require a complex model and advanced entropy coding to encode the phrases, which can make the decoder dramatically slower. Thus there is a tradeoff between speed and compression ratio, and some compressors are optimized for speed while others are optimized for maximum compression.
