1 Yes, it is nonsingular. The two codewords are different. Yes, it is UD. Given a sequence of codewords, first isolate the occurrences of 01 (i.e., find all the ones) and then parse the rest into 0's. Remark: This code is suffix-free; no codeword is a suffix of any other codeword. To see this, use backward decoding: for any received sequence/string, we work backward from the end and look for the reversed codewords. Since the codewords satisfy the suffix-free condition, the reversed codewords satisfy the prefix-free condition, and therefore we can uniquely decode the reversed sequence/string. No, the code is not prefix-free. The first codeword (0) is a prefix of the second codeword (01). Reminder: a prefix-free code is the same as a prefix code. The first name is more accurate in terms of meaning; the second is the name that has been traditionally used. Remark: Observe that we can specify whether a code is nonsingular, uniquely decodable, or prefix-free even when there is no information about the source alphabet or the pmf of the DMS symbols.
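
As a concrete illustration of the parsing rule described above (every 1 must terminate the codeword 01, so a one-symbol lookahead suffices), here is a minimal MATLAB sketch; the string s is an assumed example, not part of the original solution.

% Decoding the code {0, 01}; assumes s is a valid sequence of codewords.
s = '0010100';                 % assumed example string of concatenated codewords
decoded = {};                  % list of parsed codewords
i = 1;
while i <= length(s)
    if i < length(s) && s(i) == '0' && s(i+1) == '1'
        decoded{end+1} = '01'; i = i + 2;
    else
        decoded{end+1} = '0';  i = i + 1;
    end
end
decoded                        % unique parse: {'0','01','01','0','0'}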

2 Code tree: From the code tree, all the codewords are located at leaf nodes of the tree. So, yes, the code is prefix-free. Yes, any prefix-free code is uniquely decodable (UD). This is one of many reasons we consider prefix-free codes.

3

4 Each entropy value can be calculated directly from the given row vectors. Here, we use a MATLAB script (entropy2.m) to do this task; a sketch of such a helper is given below. See 2.44 in the lecture notes.
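
The actual entropy2.m from the course is not reproduced in this transcription; the following is a minimal sketch of what such a helper might look like, assuming it takes a pmf row vector and returns the entropy in bits.

function H = entropy2(p)
% ENTROPY2  Entropy (in bits) of a pmf given as a row vector.
% Sketch only; the course-provided entropy2.m may differ.
p = p(p > 0);              % drop zero entries, using the convention 0*log2(0) = 0
H = -sum(p .* log2(p));
end
% Example: entropy2([0.5 0.25 0.25]) returns 1.5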

5 (a) The code {00,01,10,110} can be shortened to {00,01,10,11}. The prefix-free property is preserved, so the shortened code is still UD. Regardless of the probability associated with each source symbol (as long as the last symbol has non-zero probability), the shortened code will have a smaller expected length. Because the Huffman code is optimal, the original code can't be Huffman. (b) The code {01,10} can be shortened to {0,1}. The prefix-free property is preserved, so the shortened code is still UD. Regardless of the probability associated with each source symbol, the shortened code will have a smaller expected length. Because the Huffman code is optimal, the original code can't be Huffman. (c) The code {0,01} is not prefix-free. Any Huffman code must be prefix-free. Therefore, it cannot be a Huffman code. Alternatively, one can also use the same reasoning as in parts (a) and (b): the code {0,01} can be shortened to {0,1}. The new code is prefix-free and hence still UD. Regardless of the probability associated with each source symbol, the shortened code will have a smaller expected length. Because the Huffman code is optimal, the original code can't be Huffman.

6 Following the hint, we create a Huffman code from the suggested probability values. (This column is added for comparison.) Note that the answer for this question is not unique. You may check that this condition is here to guarantee that x = 2 and x = 3 are grouped first.
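
For checking a hand-built Huffman code by machine, MATLAB's Communications Toolbox provides huffmandict; the probabilities below are assumed placeholders, not the ones from the homework.

p = [0.4 0.3 0.2 0.1];                  % assumed example pmf
[dict, avglen] = huffmandict(1:4, p);   % each row of dict pairs a symbol with its codeword (binary vector)
avglen                                   % expected codeword length in bits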

7

8 Method 1. Method 2: Directly read the transition (conditional) probabilities from the channel diagram. To get the matrix P, we simply scale each row of matrix Q by the corresponding p(x). Method 1: Method 2:
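
In MATLAB, the row-scaling step can be written in one line; the values of p and Q below are assumed examples, used only to illustrate the operation.

p = [0.2 0.8];                  % input pmf p(x) (assumed example)
Q = [0.7 0.3; 0.1 0.9];         % transition matrix, rows are p(y|x) (assumed example)
P = diag(p)*Q                   % joint pmf matrix: P(x,y) = p(x)*Q(y|x)
sum(P(:))                       % sanity check: should equal 1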

9 In some of these parts, we will also try to derive the answer in a general form. To get the matrix P, we simply scale each row of matrix Q by the corresponding p(x). First, we find the Q matrix:

10 The convention for our class is that these numbers are ordered in the same way that they are specified in the support. We can get the P matrix by scaling each row of the Q matrix using the corresponding input probability p(x). Method 1: Method 2:

11 For the naive decoder, look at each column of P and select (circle) the element whose corresponding x value is the same as the y in that column. For the DIY decoder, look at each column of P and select the element whose corresponding x value is the same as x(y) in the decoder table: change all 2's to 1, and change all 1's and 3's to 0.
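
The "change all 2's to 1, and all 1's and 3's to 0" rule is just a table lookup; here is a small sketch, with an assumed received sequence.

xhat_table = [0 1 0];           % decoder table from above: xhat for y = 1, 2, 3
y = [2 1 3 3 2];                % assumed example sequence of received symbols
xhat = xhat_table(y)            % apply the DIY decoder: returns [1 0 0 0 1]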

12 We want to come up with some simple example. Therefore, we shall start by considering a Bernoulli RV X, which has only two possible values. Because there are only two possible values, the pairing in the Huffman coding process must be between these two values. Therefore, Huffman coding (without extension) always needs 1 bit per symbol. You may recall that in Section 2.4, this expression is called the binary entropy function. The plot of this function is shown in the lecture notes; it is sketched below. Also note that we don't want to have p = 0 or p = 1 because they would make our RV X degenerate (deterministic). For a degenerate RV, we don't have to waste any bit to convey its value. (The value is already pre-determined.) For example, if p = 1, we know that P[X = 1] = 1 and hence X = 1 all the time. There is no uncertainty. Anyone can guess the value of X with 100% accuracy by simply guessing the value 1 every time. Alternatively, we may think of the situation here as sending the empty string to the receiver.
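
The sketch of the binary entropy function can be reproduced with a few MATLAB lines; the plot range avoids the endpoints, where 0*log2(0) is taken as 0.

p = linspace(0.001, 0.999, 999);
h = -p.*log2(p) - (1-p).*log2(1-p);    % binary entropy function h(p)
plot(p, h); grid on
xlabel('p'); ylabel('h(p)')            % maximum of 1 bit at p = 0.5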

13 We first need to list all the probabilities for the (extended) source string (vector/block). Idea 1: For example, when n = 4, we put these values together in a pmf vector using a for loop. Idea 2: Use MATLAB's "kron" command. (a) (b)
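
Both ideas can be combined in a short MATLAB snippet; the single-symbol pmf pX and the block length n below are assumed example values.

pX = [0.9 0.1];          % pmf of one source symbol (assumed example)
n  = 4;                  % block length
pXn = 1;
for k = 1:n
    pXn = kron(pXn, pX); % after the loop, pXn is the pmf of all 2^n source strings of length n
end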

14 From Q2c of HW2: from the P matrix, we select (circle) the maximum in each column and find the corresponding x value. Alternatively, we know that the MAP decoder must be optimal. Therefore, looking at the answers in Q2e of HW2, we see that the decoder that minimizes the error probability is (the first row), together with the corresponding error probability. From Q2c of HW2: from the Q matrix, we select the maximum in each column and take the corresponding x value of the selected element in each column. Alternatively, once we have determined the ML decoder, we see that, in this problem, it is exactly the same as the MAP decoder above, and therefore the error probability must be the same.
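
The "circle the maximum in each column" step is easy to automate. In the sketch below, P and Q are assumed to already hold the joint and transition matrices in the orientation used in this class (one row per x, one column per y), and x_support is an assumed name for the list of input values.

[~, iMAP] = max(P, [], 1);          % row index of the circled entry in each column of P
[~, iML ] = max(Q, [], 1);          % same for the Q matrix (ML decoder)
xhat_MAP = x_support(iMAP);         % MAP decoder table (x_support is an assumed name)
xhat_ML  = x_support(iML);          % ML decoder table
P_error_MAP = 1 - sum(max(P,[],1))  % 1 minus the sum of the circled P entries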

15 From Q3b of HW2: from the P matrix, we select (circle) the maximum in each column and find the corresponding x value. Alternatively, we know that the MAP decoder must be optimal. Therefore, looking at the answers in Q3d of HW2, we see that the decoder that minimizes the error probability is (the third row), together with the corresponding error probability. From Q3b of HW2: from the Q matrix, we select the maximum in each column and take the corresponding x value of the selected element in each column. Alternatively, once we have determined the ML decoder, looking at the answers in Q3d of HW2, we see that it corresponds to (the first row). From Q4a of HW2: for the MAP detector, look at each column of P and select the element with the maximum value. The decoder table can be read from the selected (circled) values simply by finding the corresponding x value. For the ML decoder, look at each column of Q and select (circle) the element with the maximum value. From Q4a of HW2: for the ML decoder, the elements selected in the P matrix are simply those corresponding to the positions selected in the Q matrix. After the fact, we observe that the ML decoder here is the same as the naive decoder in Q4.c.vi of HW2. Therefore, it should have the same decoding error probability (of 0.5) as the naive decoder as well.

16 Codebook: Usually, for block coding, this should be a row vector because we work with k info-bits at a time, treating the block as a row vector of length k. However, for the repetition code, we have k = 1. Recall that the code rate of a repetition code is 1/n, where n is the number of times that each info-bit is repeated. In class, we have shown that for a repetition code over a BSC with p < 0.5, the ML decoder is the same as a majority-voting decoder. Note that the real decoding table should have 32 rows to account for all possible 5-bit y. However, because only the number of 0s (or the number of 1s) is important, we can shorten the decoding table into the one given below. The error probability is found using the same technique discussed in ECS315 [see the lecture notes from last semester (Ch. 6, pp. 65-67)]. "Decoding Table":
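
For the rate-1/5 repetition code, the majority-vote error probability follows the same binomial computation as in ECS315; the crossover probability below is an assumed example value.

n = 5;  p = 0.1;                            % n repetitions over a BSC(p); p is assumed
P_error = sum(binopdf(ceil(n/2):n, n, p))   % error iff 3 or more of the 5 bits are flipped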

17 Given an observed channel output, we can find the most likely codeword that was transmitted by using the MAP decoder. However, recall that (1) when the codewords are equally likely, the ML decoder is the same as the MAP decoder, and (2) when the crossover probability p of the BSC is < 0.5, the ML decoder is the same as the minimum-distance decoder. Therefore, we can avoid complicated calculation and use the minimum-distance decoder here. From the table, we see that the most likely transmitted codeword is the one at the smallest Hamming distance from the observed output.
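
A minimal minimum-distance decoding sketch follows; the codebook and the received vector are assumed examples, not the ones in the homework.

C = [0 0 0 0 0; 1 1 1 1 1];               % assumed codebook, one codeword per row
y = [1 0 1 1 0];                          % assumed received vector
d = sum(C ~= repmat(y, size(C,1), 1), 2)  % Hamming distance from y to each codeword
[~, imin] = min(d);
xhat = C(imin, :)                         % minimum-distance (= ML when p < 0.5) guess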

18 The given expression for the joint pmf can be expressed using the joint pmf matrix as shown. The sum of all elements in the P matrix should be 1. Recall that two random variables X and Y are independent if and only if the joint pmf factors as the product of the marginals: p(x, y) = p(x) q(y) for all x and y.
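
In matrix form, independence means the joint pmf matrix is the outer product of its marginals; a sketch of the check, with an assumed example P, is:

P = [0.12 0.28; 0.18 0.42];      % assumed joint pmf matrix (rows: x, columns: y)
pX = sum(P, 2);                  % marginal pmf of X (sum across y)
pY = sum(P, 1);                  % marginal pmf of Y (sum across x)
isIndependent = all(all(abs(P - pX*pY) < 1e-12))   % true iff P(x,y) = pX(x)*pY(y) everywhere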

19 These are the results of knowing that X and Y are independent. In class, we have shown that the MAP detector for block coding over the BSC (with crossover probability p) is given by the expression shown. We have also shown how it simplifies when the block coding is done simply by using a repetition code. If this is set to be 0, we get exactly the same detector as before. Because we have exactly the same detector, the probability of decoding error should also be the same.

20 Additional Comments

sum(binopdf(2:5,5,0.4))
1-sum(binopdf(0:1,5,0.4))
sum(binopdf(0:1,5,1-0.4))

The "any" or "don't care" or "does not matter" in the table means that the MAP decoder can choose to guess 1 or 0 without changing the overall error-rate performance. However, some decoders may issue a "decoding error" warning and avoid making a guess. (We will not discuss this type of decoder.)

21 This is the same answer as majority voting.

22 When p = 0.4, the conditional probabilities are the same and the two cases are equally likely. Majority voting would always guess 0 because the number of 0s in the received vector is greater than the number of 1s. This would agree with our answer here when p > 0.4. [See the additional remark at the end.] In general, for transmission over the BSC, we have:

23 Additional Comments for Q8b,c: It should be noted that parts (b.iii) and (c.iii) are almost the same as part (a). All of them consider MAP decoding. However, parts (b.iii) and (c.iii) consider only one case (one row). Alternative solution for Q: From Q5.c,

24

25 (This page tabulates p, Q, q, and P, the naive decoder x(y) and its error probability, and the MAP decoder x_MAP(y) and its error probability.)

26 (This page tabulates the ML decoder x_ML(y) and its error probability.)

27 (ii) Note that this is a noisy channel with non-overlapping outputs. (Only one non-zero element in each column of Q.)

28

29 The given DMC belongs to a family of channels called the binary erasure channel (BEC). The general form of the channel diagram and the corresponding Q matrix are shown below; the middle output corresponds to the case where the bit is "erased." One may say that the uniform pmf is "obvious" from the "symmetry" in the transition probabilities from the inputs.
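
A hedged sketch of the general BEC transition matrix, with the erasure probability denoted here by alpha (an assumed symbol): the two rows correspond to the inputs 0 and 1, and the middle column to the erasure output.

alpha = 0.1;                     % assumed example erasure probability
Q = [1-alpha  alpha  0;
     0        alpha  1-alpha]    % rows: inputs 0,1; columns: outputs 0, erasure, 1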

30 Alternatively, we can solve this part by trying to apply properties of entropy (the same way that we solve the examples in class) and avoid using calculus.

31

32

33
close all; clear all;

%% 1.a
Q = [1/3 2/3; 2/3 1/3];
[ps C] = capacity_blahut(Q)

%% 1.b
Q = [0 1/5 4/5 0 0; 2/3 0 0 1/3 0; 0 0 0 0 1];
[ps C] = capacity_blahut(Q)

%% 1.c
Q = [1/3 2/3; 1/3 2/3];
[ps C] = capacity_blahut(Q)

%% 2.c
Q = [1 0 0 0; 0 1 0 0; 0 0 1 0];
[ps C] = capacity_blahut(Q)

%% 3.a
Q = [1/3 2/3 0; 0 2/3 1/3];
[ps C] = capacity_blahut(Q)

%% 3.b
Q = [1 0; 1/2 1/2; 1/2 1/2];
[ps C] = capacity_blahut(Q)

%% 4
Q = [1 0; 0 1; 1/3 2/3];
[ps C] = capacity_blahut(Q)

>> Capacity_HW_blahut
(Output: the optimizing input pmf ps and the capacity C for each part; in particular, C = 0 for part 1.c, where the two rows of Q are identical.)
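
The course-provided capacity_blahut.m is not reproduced in this transcription. For completeness, here is a minimal sketch of a Blahut-Arimoto capacity routine with the same calling convention; the real script may differ in details such as the stopping rule and the optional tolerance argument assumed here.

function [ps, C] = capacity_blahut(Q, tol)
% Blahut-Arimoto iteration for the capacity (in bits) of a DMC whose
% transition matrix Q has one row per input x and one column per output y.
% Sketch only; not the course-provided implementation.
if nargin < 2, tol = 1e-9; end
nx = size(Q, 1);
p  = ones(1, nx)/nx;                    % start from the uniform input pmf
for iter = 1:100000
    q = p*Q;                            % output pmf induced by p
    T = Q .* log2(Q ./ (ones(nx,1)*q)); % integrand of D( Q(.|x) || q )
    T(Q == 0) = 0;                      % convention 0*log2(0) = 0
    D  = sum(T, 2).';                   % D(x) = D( Q(.|x) || q ), 1-by-nx
    IL = log2(p * (2.^D).');            % lower bound on capacity
    IU = max(D);                        % upper bound on capacity
    if IU - IL < tol, break; end
    p = p .* 2.^D;  p = p/sum(p);       % multiplicative update of the input pmf
end
ps = p;  C = IL;
end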
