
EE-575 INFORMATION THEORY - SEM 092

Project Report on the Lempel-Ziv Compression Technique

Department of Electrical Engineering
Prepared by: Mohammed Akber Ali
Student ID # g
King Fahd University of Petroleum & Minerals
Dhahran, Saudi Arabia

Contents

1. Introduction
2. Dictionary coding
3. Lempel Ziv coding
4. The coding process
5. The decoding process
6. Flowchart for the coding process
7. Flowchart for the decoding process
8. Example problem solved theoretically (encoding)
9. Example problem solved theoretically (decoding)
10. Exercise problem solved theoretically (encoding)
11. Exercise problem solved theoretically (decoding)
12. Advantages, Disadvantages & Applications
13. Results
14. Conclusion
15. References

INTRODUCTION:

Data compression seeks to reduce the number of bits used to store or transmit information, and it encompasses a wide variety of software and hardware compression techniques. Data compression consists of taking a stream of symbols and transforming them into codes; for effective compression, the resulting stream of codes is smaller than the original stream of symbols. For example, Huffman coding is a type of coding in which the actual output of the encoder is determined by a set of probabilities. Its drawbacks are that it uses an integral number of bits per symbol and that the symbol probabilities must be known in advance.

Well-known lossless compression techniques include:

Run-length coding: Replace strings of repeated symbols with a count and a single copy of the symbol. Example: aaaaabbbbbbccccc -> 5a6b5c (a short MATLAB sketch of this idea is given at the end of this section).

Statistical techniques:

Huffman coding: Replace fixed-length codes (such as ASCII) by variable-length codes, assigning shorter codewords to the more frequently occurring symbols and thus decreasing the overall length of the data. When using variable-length codewords, it is desirable to create a (uniquely decipherable) prefix code, avoiding the need for a separator to determine codeword boundaries. Huffman coding creates such a code.

Arithmetic coding: Code the message as a whole using a floating point number in an interval from zero to one.

PPM (prediction by partial matching): Analyze the data and predict the probability of a character in a given context. Usually, arithmetic coding is used for encoding the data. PPM techniques yield the best results among statistical compression techniques.

The Lempel Ziv algorithms belong to yet another category of lossless compression techniques known as dictionary coders. The problem of requiring a statistical model is solved by using an adaptive dictionary, which is discussed below.
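The run-length replacement described above is simple enough to sketch in a few lines of MATLAB. This is only an illustrative sketch; the function name rle_encode is introduced here and is not part of the report's attached code.

```matlab
function out = rle_encode(s)
% Run-length encode a character vector, e.g. 'aaaaabbbbbbccccc' -> '5a6b5c'.
out = '';
i = 1;
while i <= numel(s)
    run = 1;                                  % length of the current run
    while i + run <= numel(s) && s(i + run) == s(i)
        run = run + 1;
    end
    out = [out, sprintf('%d%c', run, s(i))];  % append "<count><symbol>"
    i = i + run;                              % jump to the start of the next run
end
end
```

For example, rle_encode('aaaaabbbbbbccccc') returns '5a6b5c'.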

DICTIONARY CODING

Dictionary codes are compression codes that dynamically construct their own coding and decoding tables on the fly by looking at the data stream itself. Because of this capability, it is not necessary to know the symbol probabilities beforehand. These codes take advantage of the fact that, quite often, certain strings of symbols recur, so a single code word can be assigned to an entire string. Dictionary coding techniques rely on the observation that there are correlations between parts of the data (recurring patterns). The basic idea is to replace those repetitions by (shorter) references to a "dictionary" containing the original.

(i) Static Dictionary

The simplest forms of dictionary coding use a static dictionary. Such a dictionary may contain frequently occurring phrases of arbitrary length, digrams (two-letter combinations) or n-grams. This kind of dictionary can easily be built on top of an existing coding such as ASCII by using previously unused codewords or by extending the length of the codewords to accommodate the dictionary entries. A static dictionary achieves little compression for most data sources. The dictionary can be completely unsuitable for compressing particular data, thus resulting in an increased message size (caused by the longer codewords needed for the dictionary).

(ii) Semi-Adaptive Dictionary

The aforementioned problems can be avoided by using a semi-adaptive encoder. This class of encoders creates a dictionary custom-tailored for the message to be compressed. Unfortunately, this makes it necessary to transmit/store the dictionary together with the data. This method also usually requires two passes over the data, one to build the dictionary and another to compress the data. A question arising with the use of this technique is how to create an optimal dictionary for a given message. It has been shown that this problem is NP-complete (related to the vertex cover problem). Fortunately, there exist heuristic algorithms for finding near-optimal dictionaries.

(iii) Adaptive Dictionary

The Lempel Ziv algorithms belong to this third category of dictionary coders. The dictionary is built in a single pass, while the data is being encoded at the same time. As we will see, it is not necessary to explicitly transmit/store the dictionary because the decoder can build up the dictionary in the same way as the encoder while decompressing the data.

LEMPEL-ZIV CODING:

History: In 1983 Sperry filed a patent for an algorithm developed by Terry Welch, an employee at the Sperry Research Center. This algorithm is Welch's variation on a data compression technique first proposed by Jakob Ziv and Abraham Lempel in 1978; Welch's technique is both simpler and faster. He published an article in the June 1984 issue of IEEE Computer Magazine describing the technique, which became very popular and was widely adopted.

LZ compression is a form of substitution compression. In this form of compression, a specific, unique string of characters is replaced with a reference to that phrase, which is maintained in a dictionary. The data compresses because the reference to the repeated phrase is much smaller than the phrase itself. While LZ compression is very fast, it is best suited for files that contain repetitive data: text files and monochrome graphic images are ideal for LZW compression. Compressed files that do not contain repetitive data will actually grow in size because of the LZW data dictionary. LZ compression today is in the public domain and freely available for use by anyone: the U.S. patent expired in 2003, and the European, Canadian and Japanese patents expired in 2004.

A Linked-List LZ algorithm: Following the book by Richard B. Wells, we use the algorithm given in the text, which is a mild modification of the actual LZW algorithm. The algorithm begins by defining the structure of the dictionary. Each entry in the dictionary is given an address m. Each entry consists of an ordered pair <n, a_i>, where n is a pointer to another location in the dictionary and a_i is a symbol drawn from the source alphabet. These ordered pairs in the dictionary are said to make up a linked list. The pointer variables n also serve as the transmitted code words. Because the total number of dictionary entries exceeds the number of symbols, M, in the source alphabet, each transmitted code word actually contains more bits than it would take to represent the alphabet A alone. However, most of the code words represent strings of source symbols, and in a long message it is more economical to encode these strings than it is to encode the individual symbols.
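One possible MATLAB representation of this linked-list dictionary (an assumption made for the sketches in this report, not the report's attached code) stores the entry at address k-1 in row k of a cell array as the ordered pair {pointer n, symbol a}:

```matlab
% Build the initial dictionary for an example alphabet A = {'a','b','c'}.
alphabet = {'a', 'b', 'c'};              % M = 3 source symbols
dict = {0, ''};                          % address 0: the null entry <0, null>
for i = 1:numel(alphabet)
    dict(end+1, :) = {0, alphabet{i}};   % addresses 1..M all point to null
end
disp(dict)                               % shows the four initial entries (addresses 0 to 3)
```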

The Coding Process:

A dictionary is initialized to contain the single-character strings corresponding to all possible input characters (and nothing else, except the clear and stop codes if they are being used). The algorithm works by scanning through the input string for successively longer substrings until it finds one that is not in the dictionary. When such a string is found, the index of the string less its last character (i.e., the longest substring that is in the dictionary) is retrieved from the dictionary and sent to the output, and the new string (including the last character) is added to the dictionary with the next available code. The last input character is then used as the starting point for the next scan. In this way, successively longer strings are registered in the dictionary and made available for subsequent encoding as single output values. The algorithm works best on data with repeated patterns, so the initial part of a message sees little compression; as the message grows, however, the compression ratio tends asymptotically towards the maximum.

The LZ algorithm uses this principle with the added twist that the strings can be of variable length. The algorithm is initialized by constructing the first M+1 entries in the dictionary as follows:

Address     Dictionary Entry
0           <0, null>
1           <0, a_0>
...         ...
m           <0, a_(m-1)>
...         ...
M           <0, a_(M-1)>

The 0-address entry in the dictionary is a null symbol, which helps the decoder know where strings end. The pointers n in these first M+1 entries are all zero; they point to the null entry at address 0. The initialization also sets the pointer variable n = 0 and the address pointer m = M+1. The address pointer m points to the next blank location in the dictionary. After the initialization, the encoder iteratively executes the following steps:

1. Fetch the next source symbol a.
2. If the ordered pair <n, a> is already in the dictionary, then
       n = dictionary address of entry <n, a>;
   else
       transmit n,
       create the new dictionary entry <n, a> at dictionary address m,
       m = m + 1,
       n = dictionary address of entry <0, a>.
3. Return to step 1.

If <n, a> is already in the dictionary in step 2, the encoder is processing a string of symbols that has occurred at least once previously. Setting the next value of n to this address extends the linked list along which the string of symbols can be traced. If <n, a> is not already in the dictionary in step 2, the encoder is encountering a new string that was not processed previously. It transmits the code symbol n, which lets the receiver know the dictionary address of the last source symbol in the previous string. Whenever the encoder transmits a code symbol, it also creates a new dictionary entry. The encoder's dictionary-building and code-symbol transmission process can be implemented as a MATLAB program (a sketch is given after the decoding overview below).

The Decoding Process:

The decoding algorithm works by reading a value from the encoded input and outputting the corresponding string from the initialized dictionary. At the same time it obtains the next value from the input, and adds to the dictionary the concatenation of the string just output and the first character of the string obtained by decoding the next input value. The decoder then proceeds to the next input value (which was already read in as the "next value" in the previous pass) and repeats the process until there is no more input, at which point the final input value is decoded without any further additions to the dictionary. In this way the decoder builds up a dictionary which is identical to the one used by the encoder, and uses it to decode subsequent input values. Thus the full dictionary does not need to be sent with the encoded data; the initial dictionary containing the single-character strings is sufficient (and is typically defined beforehand within the encoder and decoder rather than being explicitly sent with the encoded data).
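A minimal MATLAB sketch of the three-step encoder loop above is given below. It follows the description in this report but is not the report's attached GUI code; the function names lz_encode and find_entry are introduced here purely for illustration.

```matlab
function [codewords, dict] = lz_encode(source, alphabet)
% LZ_ENCODE  Linked-list LZ encoder sketch.
%   source   - cell array of source symbols, e.g. num2cell('bccacb...')
%   alphabet - cell array of the M alphabet symbols, e.g. {'a','b','c'}
%   dict     - cell array; row k holds the entry at address k-1 as {n, a}

% Initialize the dictionary with the null entry and the M root symbols.
dict = {0, ''};                              % address 0: <0, null>
for i = 1:numel(alphabet)
    dict(end+1, :) = {0, alphabet{i}};       % addresses 1..M: <0, a_i>
end

n = 0;                                       % pointer variable
codewords = [];

for k = 1:numel(source)
    a = source{k};
    addr = find_entry(dict, n, a);
    if addr >= 0
        n = addr;                            % <n,a> already known: extend the string
    else
        codewords(end+1) = n;                % transmit the pointer n
        dict(end+1, :) = {n, a};             % new entry <n,a> at the next free address
        n = find_entry(dict, 0, a);          % restart from the root symbol a
    end
end
codewords(end+1) = n;                        % flush the final string at end of message

end

function addr = find_entry(dict, n, a)
% Return the dictionary address of entry <n,a>, or -1 if it is not present.
addr = -1;
for m = 1:size(dict, 1)
    if isequal(dict{m, 1}, n) && strcmp(dict{m, 2}, a)
        addr = m - 1;                        % addresses are zero-based
        return;
    end
end
end
```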

The decoder at the receiver must be able to construct an identical dictionary based on the code symbols received. The decoder performs the following decoding iterations:

1. Reception of any code word means that a new dictionary entry must be constructed.
2. The pointer n for this new dictionary entry is the same as the received code word n.
3. The source symbol a for this entry is not yet known, since it is the root symbol of the next string (which has not yet been transmitted by the encoder).

If the address of this next dictionary entry is m, we see that the decoder can only construct a partial entry <n, ?>, since it must await the next received code word to find the root symbol a for this entry. It can, however, fill in the missing symbol in its previous dictionary entry at address m-1. It can also decode the source-symbol string associated with the received code word n. This decoding process can likewise be realized in MATLAB code (a sketch follows).
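A matching MATLAB decoder sketch is given below; again, it illustrates the description above and is not the report's attached GUI code (lz_decode and trace_back are names introduced here). Note that a received code word can refer to the entry whose root symbol is still unknown (as happens during the long run of c's in exercise 1.5.1 below); in that case the string is its own prefix string followed by that prefix's root symbol, and the sketch handles this explicitly.

```matlab
function [decoded, dict] = lz_decode(codewords, alphabet)
% LZ_DECODE  Linked-list LZ decoder sketch matching lz_encode.
%   decoded - cell array of decoded source symbols

% The decoder builds the same initial dictionary as the encoder.
dict = {0, ''};                              % address 0: <0, null>
for i = 1:numel(alphabet)
    dict(end+1, :) = {0, alphabet{i}};       % addresses 1..M: <0, a_i>
end

decoded = {};
pending = -1;                                % address of the partial entry <n, ?>

for k = 1:numel(codewords)
    n = codewords(k);
    if n == pending
        % The code word refers to the entry created in the previous step,
        % whose symbol is still unknown: its string is its prefix string
        % followed by that prefix's own root symbol.
        s = trace_back(dict, dict{n+1, 1});
        s = [s, s(1)];
    else
        s = trace_back(dict, n);             % recover the string for code word n
    end
    if pending >= 0
        dict{pending+1, 2} = s{1};           % fill in the previous entry's root symbol
    end
    decoded = [decoded, s];
    dict(end+1, :) = {n, ''};                % new partial entry <n, ?>
    pending = size(dict, 1) - 1;
end

end

function s = trace_back(dict, n)
% Follow the pointers back to the null entry and return the string
% (a cell array of symbols, in source order) stored at address n.
s = {};
while n ~= 0
    s = [dict(n+1, 2), s];                   % prepend the symbol at address n
    n = dict{n+1, 1};
end
end
```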

Flow chart for the LEMPEL ZIV Encoder:

1. Input: the sequence to be coded; S = size of the input sequence.
2. Initialize the dictionary, the address pointer (Pm), the pointer variable (Pn = 0) and the other variables.
3. Enter a while loop that considers each symbol of the input sequence one by one; set the flag ak = 1.
4. In a for loop over i = 0 to the length of the dictionary, match the present symbol against the dictionary entries:
   - if the present input symbol equals a dictionary entry, set Pn to the address pointer of that entry;
   - else update Pn to the dictionary address, set the flag ak = 1 and break out of the for loop.
5. If ak == 1, record a new dictionary entry, transmit Pn (the pointer variable) and record it in the output array.
6. Using a for loop, find the root entry for the new dictionary entry, update Pn to its address pointer, and increment the while-loop variable to fetch the next symbol.
7. Output: display the dictionary and the transmitted sequence.

Flow chart for the LEMPEL ZIV Decoder:

1. Input: the received sequence to be decoded; S = size of the input sequence.
2. Initialize the dictionary, the address pointer (Pm), the pointer variable (Pn = 0) and the other variables.
3. A for loop considers each symbol of the received sequence one by one.
4. In a for loop over i = 0 to the length of the dictionary, match the present received symbol against the dictionary entries:
   - if the present received symbol equals a dictionary entry with Pn = 0, it is a root entry;
   - else update the symbol pointer and record the dictionary element as a decoded symbol and as a new entry.
5. Record the pointer variable and treat it as an address pointer each time (using a while loop) until the root element is reached; keep track of all elements encountered in this process and update the decoded-symbol list in reverse order. Also record the root element, which is used to update the previous partial dictionary entry.
6. Update the partial dictionary entry for the previous symbol and create a new partial dictionary entry for the current symbol; then fetch the next symbol.
7. If there is no next symbol, display the decoded sequence.

In the worked example, a binary information source emits a sequence of symbols. The sequential encoding procedure was tabulated step by step along with the encoder's dictionary as it is constructed: for each source symbol the table records the present n, the present m, the transmitted code word (if any), the next n and the new dictionary entry (if any). Given that A = {0, 1}, we initialize the dictionary as shown with addresses 0 to 2, and the initial values of n and m are n = 0 and m = 3. The dictionary built by the encoder's operation on this source is:

Dictionary Address    Dictionary Entry
0                     <0, null>
1                     <0, 0>
2                     <0, 1>
3                     <2, 1>
4                     <2, 0>
5                     <1, 0>
6                     <5, 1>
7                     <4, 1>
8                     <3, 0>
9                     <6, 0>
10                    <1, 1>
11                    <3, 1>
12                    <4, 0>
13                    <6, 1>
14                    (no entry yet)

Each transmitted code word is the pointer field n of the new dictionary entry created at that step.

The decoding process for this example can be seen explicitly with the help of the table below. The decoder begins by constructing the same first three entries as the encoder; it can do this because the source alphabet is known a priori by the decoder. The decoder is initialized with the address of the next dictionary entry set to 3. For each received code word the decoder creates a new dictionary entry whose pointer is the received code word, traces the code word back through the linked list to recover the coded symbols, and fills in the root symbol of the previous partial entry:

Dictionary Address    Dictionary Entry    Tracing back the received code word
0                     <0, null>           -
1                     <0, 0>              -
2                     <0, 1>              -
3                     <2, 1>              <0,1>
4                     <2, 0>              <0,1>
5                     <1, 0>              <0,0>
6                     <5, 1>              <1,0> -- <0,0>
7                     <4, 1>              <2,0> -- <0,1>
8                     <3, 0>              <2,1> -- <0,1>
9                     <6, 0>              <5,1> -- <1,0> -- <0,0>
10                    <1, 1>              <0,0>
11                    <3, 1>              <2,1> -- <0,1>
12                    <4, 0>              <2,0> -- <0,1>
13                    <6, 1>              <5,1> -- <1,0> -- <0,0>
14                    (partial entry)     -

Therefore the decoded sequence reproduces the original source sequence, and the dictionary constructed from the received code words is identical to the encoder's dictionary above.

In exercise problem 1.5.1, a discrete memoryless source with A = {a, b, c} emits the string bccacbcccccccccccaccca. The sequential encoding procedure is shown in the following table along with the encoder's dictionary as it is constructed. Given that A = {a, b, c}, we initialize the dictionary as shown with addresses 0 to 3; the initial values of n and m are n = 0 and m = 4. The encoder's operation on this source is as follows:

Source Symbol   Present n   Present m   Transmit   Next n   New Dictionary Entry
b               0           4           -          2        -
c               2           4           2          3        <2,c> at address 4
c               3           5           3          3        <3,c> at address 5
a               3           6           3          1        <3,a> at address 6
c               1           7           1          3        <1,c> at address 7
b               3           8           3          2        <3,b> at address 8
c               2           9           -          4        -
c               4           9           4          3        <4,c> at address 9
c               3           10          -          5        -
c               5           10          5          3        <5,c> at address 10
c               3           11          -          5        -
c               5           11          -          10       -
c               10          11          10         3        <10,c> at address 11
c               3           12          -          5        -
c               5           12          -          10       -
c               10          12          -          11       -
c               11          12          11         3        <11,c> at address 12
a               3           13          -          6        -
c               6           13          6          3        <6,c> at address 13
c               3           14          -          5        -
c               5           14          -          10       -
a               10          14          10         1        <10,a> at address 14
(end)           1           15          1          -        (no entry yet at 15)

The transmitted code word sequence is therefore 2, 3, 3, 1, 3, 4, 5, 10, 11, 6, 10, 1, and the resulting dictionary is:

Dictionary Address    Dictionary Entry
0                     <0, null>
1                     <0, a>
2                     <0, b>
3                     <0, c>
4                     <2, c>
5                     <3, c>
6                     <3, a>
7                     <1, c>
8                     <3, b>
9                     <4, c>
10                    <5, c>
11                    <10, c>
12                    <11, c>
13                    <6, c>
14                    <10, a>
15                    (no entry yet)

The decoding process for problem 1.5.1 can be seen explicitly with the help of the table below. The decoder begins by constructing the same first four entries as the encoder; it can do this because the source alphabet is known a priori by the decoder. The decoder is initialized with the address of the next dictionary entry set to 4.

Received   Dictionary Address   Dictionary Entry   Tracing back                        Symbols Decoded
-          0                    <0, null>          -                                   -
-          1                    <0, a>             -                                   -
-          2                    <0, b>             -                                   -
-          3                    <0, c>             -                                   -
2          4                    <2, c>             <0,b>                               b
3          5                    <3, c>             <0,c>                               c
3          6                    <3, a>             <0,c>                               c
1          7                    <1, c>             <0,a>                               a
3          8                    <3, b>             <0,c>                               c
4          9                    <4, c>             <2,c> -- <0,b>                      b, c
5          10                   <5, c>             <3,c> -- <0,c>                      c, c
10         11                   <10, c>            <5,c> -- <3,c> -- <0,c>             c, c, c
11         12                   <11, c>            <10,c> -- <5,c> -- <3,c> -- <0,c>   c, c, c, c
6          13                   <6, c>             <3,a> -- <0,c>                      c, a
10         14                   <10, a>            <5,c> -- <3,c> -- <0,c>             c, c, c
1          15                   (partial entry)    <0,a>                               a

Therefore the sequence decoded is bccacbcccccccccccaccca and the dictionary constructed from the received code words is identical to the encoder's dictionary above.
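Under the assumption that the hypothetical lz_encode and lz_decode sketches from the earlier sections are on the MATLAB path, exercise 1.5.1 can be reproduced as follows:

```matlab
% Reproduce exercise 1.5.1 with the illustrative sketches defined earlier.
alphabet = {'a', 'b', 'c'};
source   = num2cell('bccacbcccccccccccaccca');   % char vector -> cell array of symbols

codewords = lz_encode(source, alphabet);
disp(codewords)          % expected: 2 3 3 1 3 4 5 10 11 6 10 1

decoded = lz_decode(codewords, alphabet);
disp([decoded{:}])       % expected: bccacbcccccccccccaccca
```

Note that the code words 10 and 11 each arrive immediately after the corresponding dictionary entry has been created, so the decoder's handling of a still-incomplete entry is exercised by this input.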

Advantages of the LZ compression technique:

The LZ algorithm uses an adaptive, universal coding scheme with a single-pass transmission and no need to transmit or store the dictionary: the dictionary is created on the fly, and decompression recreates the code word dictionary, so it does not need to be passed. LZ compression works best for files containing lots of repetitive data; this is often the case with text and monochrome images. Files that are compressed but do not contain any repetitive information at all can even grow bigger. LZ compression is simple and fast and gives good compression.

Disadvantages of the LZ compression technique:

The LZ compression technique substitutes references to a dictionary for the detected repeated patterns. Unfortunately, the larger the dictionary, the greater the number of bits necessary for the references. The optimal size of the dictionary also varies for different types of data: the more variable the data, the smaller the optimal size of the dictionary, so a single dictionary size does not provide an optimum compression ratio for every source. Also, LZ is a fairly old compression technique; all recent computer systems have the horsepower to use more efficient algorithms.

Applications of the LZ compression technique:

When it was introduced, LZ compression provided the best compression ratio among all well-known methods available at that time. It became the first widely used universal data compression method on computers. A large English text file can typically be compressed via LZ to about half its original size. LZ was used in the program compress, which became a more or less standard utility in Unix systems circa 1986. It has since disappeared from many distributions, for both legal and technical reasons, but as of 2008 at least FreeBSD includes both compress and uncompress as part of the distribution. Several other popular compression utilities also used LZ or closely related methods. LZW became very widely used when it became part of the GIF image format in 1987. It may also (optionally) be used in TIFF and PDF files. (Although LZ compression is available in Adobe Acrobat software, Acrobat by default uses the DEFLATE algorithm for most text and color-table-based image data in PDF files.)

RESULTS:

LZ encoder outputs from the GUI:


LZ decoder outputs from the GUI:


Conclusion:

It is somewhat difficult to characterize the results of any data compression technique, because the level of compression achieved varies quite a bit depending on several factors. LZ compression excels when confronted with data streams that contain any type of repeated strings; because of this, it does extremely well when compressing English text, where compression levels of 50% or better should be expected. In the results, the code was tested on the examples and the exercise problems (including 1.5.1) presented in this report. The code attached along with this report was written and tested in MATLAB, and was successfully compiled and executed. The code consists of coding and decoding routines for binary sources (0, 1) as well as other discrete memoryless sources (e.g., a, b, c). The code can be extended to a discrete source that emits more than three symbols by assigning proper ASCII values to each symbol and extending the dictionary accordingly. The code provides a graphical user interface (GUI) output that makes it easy for the user to give any input and obtain the corresponding output.

References:

[1] Richard B. Wells, Applied Coding and Information Theory for Engineers (textbook).
[2]
[3]
[4]
[5]
[6] Christina Zeeh, The Lempel Ziv Algorithm, Seminar "Famous Algorithms", January 16.
