
2. Entropy Coding

In the section on Information Theory, the information system was modeled as the generation-transmission-user triplet, as depicted in fig-1.1, to emphasize the information aspect of the system. Let us break the system up in more detail in order to serve our purpose of understanding and designing actual systems and algorithms.

Figure-2.1. Information system with encoder/decoders.

The information transmission channel shown in fig-2.1 can be any data transfer or storage system: a twisted pair cable between computer systems, a fiber optic cable between two cities, a model of the atmosphere through which satellite radio waves propagate, or a magnetic disk used to store data. Their common property is that they are, more or less, susceptible to external disruptions, commonly modeled as additive noise. These disruptions on the signal, light or magnetic field result in erroneous data and consequently incorrect information on the user side of the system. Shannon's second theorem states, in summary, that one can achieve any desired reasonable performance against the noise by using more resources in terms of time and bandwidth. Since this subject is outside this document's scope, here we only mention that the channel encoder-decoder pair is designed to achieve this goal.

The goal of the source encoder-decoder pair is to minimize the data flow required for the corresponding information transfer. The source coding process tries to find a code which maximizes the coding efficiency we have seen in the Information Theory section, thus reducing the average code length of the output alphabet. The minimum achievable code length is the entropy of the information source itself; hence, source coding is usually called entropy coding. It is interesting to note that the source encoder actually removes redundancy from the data while the channel encoder inserts some redundancy.

Of the common entropy coding techniques two classes can be identified:

- Statistical techniques, where the source probabilities must be available beforehand;
- Dictionary based techniques, in which there is no such requirement.

In the first, an optimal ensemble (B, v) is created using the original ensemble (A, z). This requires the probability distribution z to be known at the beginning. If the entire data sequence to be encoded is known beforehand, the distribution can be calculated from it; otherwise the statistics must be calculated on a sample of the data at hand. In the latter case there is always a chance that the sample is not a very good representative of the entire set, which results in a poorer code.

Recall example 1.6, where four symbols were coded with codes of different lengths. Such a code is generated by the simplest statistical technique: Shannon-Fano.
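Two quantities recur throughout this section: the entropy of the source and the average code length of a candidate code. The short Python sketch below computes both; the distribution and code lengths used in the demo are illustrative values, not the ones of example 1.6.

```python
import math

def entropy(probs):
    """H(z) = -sum p*log2(p): the lower bound on the average code length."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def average_code_length(probs, lengths):
    """L_avg = sum p_i * l_i for a code whose word lengths are l_i."""
    return sum(p * l for p, l in zip(probs, lengths))

if __name__ == "__main__":
    z = [0.49, 0.26, 0.13, 0.12]            # illustrative four-symbol source
    print(entropy(z))                        # entropy in bits/symbol
    print(average_code_length(z, [1, 2, 3, 3]))
```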

Shannon-Fano Coding

In this technique, the source symbols and their probabilities are first sorted and listed in order of decreasing probability, as shown in fig-2.2a with the values and symbols from example 1.6. The list is divided into two parts such that the sums of the probabilities in each part are as close to each other as possible. At this first step these sums are 0.49 and 0.51; ideally both would be 0.5 for a better code. The upper and lower parts are then assigned the bit values 0 and 1 respectively, as shown in fig-2.2a.

Figure-2.2. a) Sorted list and first bit assignments. b) 2nd and 3rd steps.

Continuing this procedure until only one symbol is left in every part, each symbol gets assigned a unique bit sequence, as shown in fig-2.2b. The rightmost binary number in each row is the bit sequence assigned to the corresponding symbol. Notice that the length of the bit sequence assigned to a symbol grows as the probability of the symbol decreases. The Shannon-Fano technique has the advantage of simplicity, and can be performed in place. The average code length is 1.81 [bits/symbol], as calculated in example 1.6, whereas the entropy is 1.76 [bits/symbol]. Although not optimal, it is easy to see that H(v) ≤ L_avg < H(v) + 1.

Example 2.1: Find the Shannon-Fano code for the probability set v = [ … ]^T. (Notice that the coding process does not require the symbols themselves but only their probabilities. Actual symbols may also be binary blocks of either fixed or variable length.) The steps of the bit assignments are shown in fig-2.3.

Figure-2.3. Solution steps of example 2.1.
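A minimal sketch of the Shannon-Fano procedure just described is given below; the function and variable names are my own, and the demo distribution is illustrative rather than the (partially lost) set of example 2.1.

```python
def shannon_fano(probs):
    """probs: list of (symbol, probability) pairs. Returns {symbol: code string}."""
    items = sorted(probs, key=lambda sp: sp[1], reverse=True)
    codes = {sym: "" for sym, _ in items}

    def split(group):
        if len(group) <= 1:
            return
        total = sum(p for _, p in group)
        running, best_i, best_diff = 0.0, 0, float("inf")
        for i in range(len(group) - 1):       # candidate split after position i
            running += group[i][1]
            diff = abs(2 * running - total)   # |upper sum - lower sum|
            if diff < best_diff:
                best_diff, best_i = diff, i
        upper, lower = group[:best_i + 1], group[best_i + 1:]
        for sym, _ in upper:
            codes[sym] += "0"                 # upper part gets a 0
        for sym, _ in lower:
            codes[sym] += "1"                 # lower part gets a 1
        split(upper)
        split(lower)

    split(items)
    return codes

if __name__ == "__main__":
    v = [("a1", 0.49), ("a2", 0.26), ("a3", 0.13), ("a4", 0.12)]
    print(shannon_fano(v))   # {'a1': '0', 'a2': '10', 'a3': '110', 'a4': '111'}
```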

Notice that, in fig-2.3, at the second step either of two split points for the lower part would give the same distance from the ideal split point; the one marked with the dashed line is chosen in the example. The average code length is calculated to be L_avg = 3.19, which is larger than the entropy H(v). Although H(v) ≤ L_avg < H(v) + 1 is satisfied once again, we see that the code did not achieve the optimum, which is the entropy of the source itself.

Obviously, generating the block codes does not by itself compress the data. For that, the input data must be expressed in terms of the new codes.

Example 2.2: Let us encode a stream of decimal digits using the Shannon-Fano code just generated in the previous example, where each probability value corresponds to a decimal symbol in the alphabet {0,1,2,3,4,5,6,7,8,9}. (Just a note here: if straight binary coding (or BCD) were used, we would need 4 bits per symbol, although 4 bits could represent 16 distinct symbols, indicating a clear inefficiency.) We simply replace each symbol in the stream with the corresponding binary code in the last column of fig-2.3; sub-streams are separated with a "." for clarity. Such a separation is not actually required in practice since the code is uniquely decodable, that is, it is readily possible to identify the sub-streams in a continuous binary stream provided that we have the block code alphabet.

The average code length of the output stream is L_avg = 105/32 ≈ 3.28 [bits/symbol]. This result is better than 4-bit BCD but not as good as the L_avg calculated from the codes and their probabilities in the previous example (3.19), and it certainly differs noticeably from the calculated entropy. So, what went wrong? The answer is the statistics. The code in the previous example was found using the probabilities given in that example. Had we had a stream which strictly conformed to the given statistics, we would have obtained exactly the L_avg we expect. The poorer the agreement of the input stream with the statistics, the poorer the compression.

As mentioned above, decoding the binary stream is straightforward: just collect bits from the stream until a block code in the alphabet is seen, and replace that sub-stream with the corresponding symbol/decimal digit.

Huffman Coding

Huffman coding (1952) is known as the legend of entropy coding. In this technique, the symbol probabilities are again listed in order of non-increasing probability, as shown in fig-2.4. The ordering does not really affect the operation or performance of the technique, but it makes the technique easier to understand, since it allows easy recognition of the symbols with the lowest probabilities. Starting with the lowest, at each step the two symbols with the lowest probabilities in the list are combined and an imaginary block symbol is created with a probability equal to the sum of theirs.
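Example 2.2 amounts to a table lookup in one direction and a prefix-matching scan in the other. The sketch below illustrates both steps with an assumed prefix-free code table; it is not the actual code of fig-2.3, which is not reproduced in this transcription.

```python
# Illustrative prefix-free code table for the digits 0-9 (NOT the code of fig-2.3).
code = {"0": "00", "1": "01", "2": "100", "3": "101", "4": "1100",
        "5": "1101", "6": "1110", "7": "11110", "8": "111110", "9": "111111"}

def encode(stream):
    """Replace every decimal digit with its code word."""
    return "".join(code[sym] for sym in stream)

def decode(bits):
    """Collect bits until a code word of the alphabet is seen, then emit its symbol."""
    inverse = {cw: sym for sym, cw in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:        # unambiguous because the code is prefix-free
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

if __name__ == "__main__":
    s = "0420130"
    assert decode(encode(s)) == s
```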

Continuing this process until we have only one block symbol, representing all symbols and carrying a probability of 1, a binary tree called the Huffman tree is created.

Example 2.3: Let us build the binary Huffman tree for the probability set given in the previous example. As usual, the symbols themselves are not used, only their probabilities. Figure-2.4 illustrates the creation of the tree starting from the lowest-probability elements. The re-ordering at each step is omitted; instead, combined pairs are indicated by lines in the figure.

Figure-2.4. Huffman tree for example 2.3.

The number of bits each symbol should be represented by can be determined from the Huffman tree: the number of nodes passed on the path from the symbol's node to the root, inclusive, equals the code length of that symbol. With that provision, many different techniques can be employed to assign actual bit patterns to the symbols. In one well known technique, starting from the root of the tree, the upper branch receives a 0 and the lower branch receives a 1 at each node. For example, at the node with probability 1.00 the branches 0.42 and 0.58 receive the values 0 and 1 respectively. Since the most significant bits (prefixes) are carried from the root towards the leaves, this process generates a minimum-redundancy prefix code. One may choose the opposite assignment convention, or not follow any fixed convention at all. Assigning bit values using the first technique mentioned, the bit assignments shown in fig-2.5 are obtained for our example. Inverting each bit value (replacing 0s with 1s and vice versa) would generate an inverted code with exactly the same characteristics.

Once again the code generated is uniquely decodable and instantaneous; that is, the decoder is able to determine the last bit of the current symbol as soon as that bit is received. For this example, the average symbol length of the Huffman code is the same as that found in the Shannon-Fano case, L_avg = 3.19, since the assigned bit-lengths of the individual symbols happen to be the same. Figure-2.6, on the other hand, demonstrates a distribution where Huffman's technique creates an optimal code but Shannon-Fano does not.
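The merge-two-lowest-probabilities procedure maps directly onto a priority queue. The sketch below builds the Huffman tree and reads the codes off it with the 0-upper/1-lower convention described above; the demo distribution is illustrative and the helper names are my own.

```python
import heapq
from itertools import count

def huffman(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> code string."""
    tick = count()                            # tie-breaker so nodes are never compared
    heap = [(p, next(tick), sym) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)        # two lowest probabilities
        p2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(tick), (a, b)))   # imaginary block symbol
    root = heap[0][2]

    codes = {}
    def walk(node, prefix=""):
        if isinstance(node, tuple):           # internal node: descend both branches
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"
    walk(root)
    return codes

if __name__ == "__main__":
    z = {"a": 0.4, "b": 0.2, "c": 0.2, "d": 0.1, "e": 0.1}   # illustrative distribution
    print(huffman(z))
```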

Figure-2.5. Huffman tree bit value assignments for example 2.3.

Figure-2.6. An example where Huffman is a bit better than Shannon-Fano (Shannon-Fano: L_avg = 2.29, Huffman: L_avg = 2.28).

Unlike the Shannon-Fano technique, which is not guaranteed to generate an optimal code, Huffman's technique is guaranteed to generate a minimum-redundancy code. This means that no code with an average code length shorter than Huffman's exists. However, the complexity of the algorithm, especially for large alphabets, raised a need for truncations of the algorithm, with some penalty on the average code length.

Truncated Huffman Code

With truncation, a tradeoff is made between the cost of calculating the optimal code and the cost of transmitting/storing the extra bits of the suboptimal code. Usually the 2^c symbols with the lowest probabilities in the ordered set are selected and, in their place, an imaginary block symbol with a probability equal to the sum of the probabilities of these 2^c symbols is used. The Huffman code is then found as usual. The code corresponding to the block symbol is used as a prefix, appended to the left of the 2^c unique sub-codes representing the individual symbols.

Example 2.4: Let us find a truncated Huffman code for the symbols whose probabilities are given in example 2.3. Although the number of symbols to be contained in the block symbol can be selected arbitrarily, it is intuitive to make it a power of 2 for efficient sub-coding.

In the case where the 4 symbols with the lowest probabilities are selected, 2 bits are needed to represent each of them, excluding the prefix bits. Figure-2.7 shows the Huffman tree and the bit assignments. The sum of the probabilities of the 4 lowest values is shown in bold at the bottom of the list. Those probabilities are {0.08, 0.07, 0.05, 0.02}, and their sub-codes are {00, 01, 10, 11}. The final codes for the symbols contained in the block symbol, obtained by appending each sub-code to the prefix 11, are {1100, 1101, 1110, 1111}. The entropy of the modified source is H(u) = …, and the average code length, calculated after replacing the block symbol with the individual 4-bit codes, is L_avg = …; this value is very close to that of the full Huffman code. Also, the truncated tree has 6 nodes whereas the full tree had 9 nodes, indicating roughly 30% savings in calculation complexity. The complexity itself is outside this document's scope. The full block symbol alphabet is given in fig-2.8.

Figure-2.7. Truncated Huffman tree for example 2.4.

Figure-2.8. Full conversion table for the truncated Huffman code (ex. 2.4).

The statistical data compression methods we have seen, Shannon-Fano and Huffman, assign variable length bit streams to symbols according to the symbols' probabilities. If the probability of a symbol is high, it is assigned fewer bits. Conversely, if the symbol probability is low, it is assigned a bit stream probably longer than the average, possibly a very long bit stream in the case of a large input alphabet with diverse probabilities. Huffman proved that the Shannon-Fano method is not guaranteed to generate optimal codes, whereas Huffman's minimum-redundancy codes are optimal among the codes which assign an integral number of bits to symbols. A Huffman code will, for instance, assign 1 bit to a symbol with a probability of 0.5. The problem is that it would also assign 1 bit to a symbol with a probability of 0.9; it is not possible to assign 0.1 bit, say, to a symbol. Although theoretically much better performance is possible (via block codes), Huffman's code can only be the best among the sub-optimal techniques in its class.
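The truncation step itself is mechanical once a Huffman coder is available. Reusing the huffman() sketch given earlier, one possible rendering of the procedure of example 2.4 looks like this; the probability values in the demo are placeholders, not the actual set of example 2.3.

```python
def truncated_huffman(probs, c):
    """Group the 2^c lowest-probability symbols into one block symbol,
    Huffman-code the reduced alphabet, then append fixed c-bit sub-codes."""
    n = 2 ** c
    ordered = sorted(probs.items(), key=lambda sp: sp[1])
    blocked, rest = ordered[:n], dict(ordered[n:])
    rest["BLOCK"] = sum(p for _, p in blocked)       # imaginary block symbol
    codes = huffman(rest)                            # huffman() from the earlier sketch
    prefix = codes.pop("BLOCK")
    for i, (sym, _) in enumerate(blocked):
        codes[sym] = prefix + format(i, "0{}b".format(c))   # prefix + c-bit sub-code
    return codes

if __name__ == "__main__":
    z = {"s1": 0.30, "s2": 0.25, "s3": 0.23, "s4": 0.08,
         "s5": 0.07, "s6": 0.05, "s7": 0.02}
    print(truncated_huffman(z, 2))    # the four rarest symbols share one prefix
```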

Here we assume that the data stream to be compressed is as long as the stream over which the statistics were calculated and has similar statistical characteristics.

Another negative that can be chalked up against Huffman's technique is the complexity of the algorithm. Even with additional shortcuts such as the truncation just discussed, the algorithm is still a big resource consumer. Yet it may not, for some probability distributions, achieve the optimal performance, the optimum being the entropy itself. In the following sections, two techniques addressing the two problems mentioned above are discussed.

Arithmetic Coding

Data compression algorithms, considering their use in computers, accept one or more data files as input and, after processing, output another data file, presumably compressed; that is, the output is expected to be shorter than the input. Here, the output means all data needed to reconstruct the original data/files, including the lookup tables used for conversion. In statistical techniques the processing involves, not surprisingly, the calculation of statistics: the symbol probabilities. The symbols are generally bytes or bits, since the compression operations are done in computers and computers use bytes and bits. There is, as yet, no method to determine the symbol definition(s) yielding minimum entropy together with a minimum alphabet. Shannon's theorem states that the larger the symbols, the better the compression. This alone does not solve the symbol selection problem, since one might select the entire file as a single symbol and obtain an entropy of 0, while still being left with the task of transmitting the alphabet: the entire file.

Stated more generally: can one create a code which, on the average, represents the data with fewer bits than the entropy? The answer is no, at least in the domain of symbols over which the entropy is calculated. This answer inherently states that one can change the message domain to obtain more efficient symbols. A typical example of this is run-length coding for bi-level images, in which the symbols or messages are the numbers of same-valued pixels in a run instead of the pixel values themselves. We shall discuss this technique in the following sections.

The techniques discussed previously (Shannon-Fano and Huffman) are fixed-to-variable-length: they assign variable length codes to given fixed length symbols (after the statistics, of course). Ziv-Lempel's technique, which we shall discuss in the coming sections, can be considered as doing the opposite: it assigns fixed length codes to variable length input symbols.

Given the weaknesses of the two statistical compression techniques discussed, it is no surprise that new techniques are continuously being sought and actually found. Arithmetic coding has been a strong candidate to overcome the deficiency caused by assigning an integer number of bits to symbols. Although the mathematics had been known for decades, it was not possible to implement it: arithmetic coding uses, in theory, floating point numbers of almost unlimited precision, which is difficult if not impossible to implement. When it was successfully implemented by I. H. Witten and his co-workers in 1987, it became clear that no actual floating point arithmetic is needed to represent these numbers. The most important property of arithmetic coding is the inherent assignment of a non-integer number of bits to each symbol, hence a considerable leap towards the entropy compared to Huffman.

Arithmetic coding initially allocates, within the semi-closed range [0, 1), semi-closed sub-ranges to the input symbols according to their probabilities. It is not required to sort the symbols in any particular way, as long as the same order is used in both the encoder and the decoder. The leftmost vertical scale in fig-2.9 shows an example range assignment for the characters of the string S = ABRACADABRA. The alphabet of the source is {A, B, C, D, R} and the corresponding probability set is z = [5/11 2/11 1/11 1/11 2/11]^T. The first symbol of the alphabet, A, owns the range [0, 5/11), B owns the range [5/11, 7/11), and so on.

The first symbol taken from the string is A, and it corresponds to the probability range [0, 5/11). This range is now scaled to cover the entire [0, 1) range; the scaling operation is shown in fig-2.9 with two lines. The second symbol from the string S is B, shown in bold on the second vertical scale, and the scaling operation is applied again. The rest of the symbols are treated similarly, and the operation continues until the last symbol in the string is processed. Obviously the range marked on the last vertical scale represents the probability of the string S = ABRACADABRA being produced by the given alphabet. It is marked as P(S) and equals the product of the symbol probabilities in the string:

$P(S) = \prod_{i=1}^{11} P(S_i)$    (2.1)

As a byproduct we have found a range, by scaling the last range back to the original, leftmost scale. It can be proven that any number within that range represents the string S. (It is also worth noting that the range [0, 1) represents all strings that can be generated from this source.) Instead of back-scaling, the range could also be obtained by updating the scale at each step. The algorithm is illustrated by the pseudo-code given in fig-2.10 and the final range is given in fig-2.11.

Figure-2.9. Expansion of ranges in arithmetic coding.

Running the algorithm for our example outputs the final range as L = … and H = … with 16 digit precision; any number between these two values actually represents the input string and will do. Inspecting the algorithm and the intermediate values of L, H and the difference between them, it is seen that L and H get closer and closer as each new symbol is processed, as depicted in fig-2.12.

It is clear that the precision required to carry these numbers grows without regard to the precision limits imposed by computer number standards.

    L = 0.0
    H = 1.0
    Start_Loop
        R = H - L
        H = L + R * H_of_i_th_symbol
        L = L + R * L_of_i_th_symbol
    Loop_Until_the_Last_Symbol_is_Processed
    Output something_between_L_and_H

Figure-2.10. Pseudo code for arithmetic coding.

Figure-2.11. The intermediate values of L, H and H-L at each step (one row per symbol of S = ABRACADABRA).

Figure-2.12. Approach of the H and L values to the final range.
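As a quick check of the procedure in fig-2.10, here is a minimal Python sketch of the encoder. The function names are my own; exact fractions are used instead of floating point, which sidesteps (rather than solves) the precision problem discussed in the text.

```python
from fractions import Fraction

def build_ranges(probs):
    """probs: list of (symbol, probability). Returns {symbol: (low, high)}."""
    ranges, low = {}, Fraction(0)
    for sym, p in probs:
        ranges[sym] = (low, low + p)
        low += p
    return ranges

def arithmetic_encode(message, ranges):
    """Shrink [low, high) around the message, exactly as in fig-2.10."""
    low, high = Fraction(0), Fraction(1)
    for sym in message:
        r = high - low
        sym_low, sym_high = ranges[sym]
        high = low + r * sym_high
        low = low + r * sym_low
    return low, high          # any number in [low, high) represents the message

if __name__ == "__main__":
    p = Fraction(1, 11)
    model = [("A", 5 * p), ("B", 2 * p), ("C", p), ("D", p), ("R", 2 * p)]
    rng = build_ranges(model)
    L, H = arithmetic_encode("ABRACADABRA", rng)
    print(float(L), float(H))
```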

The decoding process is, not surprisingly, the opposite of the encoding, provided that the alphabet, the probability set and the number of symbols encoded are known beforehand. The pseudo-code algorithm for the decoder is shown in fig-2.13.

    X = encoded_number
    Start_Loop
        Find_range_enclosing_X
        Output the_symbol_of_the_range_found
        R = H_of_symbol_found - L_of_symbol_found
        X = (X - L_of_symbol_found) / R
    Loop_Until_the_Last_Symbol_is_Output

Figure-2.13. Algorithm of the decoding process.

Figure-2.14 shows the progress of the decoding process for our example when the low value of the range found in the encoding is taken as the input to the decoder.

Figure-2.14. Intermediate values (X, L, H and the output symbol) of the decoding process.

Recalling that any number between the high and low values of the range could be used, one would obtain the same string using the high value as the input to the decoder. The algorithms require many more floating point operations than the previously discussed techniques. But this is not the only thing that makes arithmetic coding difficult: the coding and decoding processes are straightforward, but they require the use of impractically high precision floating point numbers. Even for strings as short as about 20 symbols, standard double precision numbers underflow.
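A matching decoder sketch, continuing the encoder sketch above (it reuses build_ranges, arithmetic_encode and the same assumed model), follows the steps of fig-2.13 directly:

```python
from fractions import Fraction

def arithmetic_decode(x, ranges, n_symbols):
    """x: a number inside the final range; n_symbols must be known beforehand."""
    out = []
    for _ in range(n_symbols):
        for sym, (low, high) in ranges.items():
            if low <= x < high:                 # find the range enclosing X
                out.append(sym)
                x = (x - low) / (high - low)    # rescale X back into [0, 1)
                break
    return "".join(out)

if __name__ == "__main__":
    p = Fraction(1, 11)
    model = [("A", 5 * p), ("B", 2 * p), ("C", p), ("D", p), ("R", 2 * p)]
    rng = build_ranges(model)                   # from the encoder sketch above
    L, H = arithmetic_encode("ABRACADABRA", rng)
    assert arithmetic_decode(L, rng, 11) == "ABRACADABRA"
```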
