Linear Block Codes. Allen B. MacKenzie. Notes for February 4, 9, & 11, 2015.


This handout covers our in-class study of Chapter 3 of your textbook. We'll introduce some notation and then discuss the generator and parity check matrices of linear block codes. Then we'll talk about syndrome decoding, the weight distribution of a code, and the performance of a code. We'll end by learning about erasure decoding and code modifications. In this chapter, we'll return to the example code of Chapter 1, the Hamming code, and we'll learn a bit more about it.

Some Definitions

An (n, k) block code, C, over an alphabet, A, of q symbols consists of q^k vectors of length n, which are called codewords. The number n is the length of the code, and k is the dimension of the code. The rate of the code is R = k/n. Also associated with the code is an encoder, which is a bijective mapping of k-tuples from A^k onto codewords. For a given code, C, there are many possible encoders. Generally, the performance of the code, at least in terms of the symbol error rate, does not depend on this mapping. In order to use the code, though, the encoder must be specified.

An (n, k) block code, C, over an alphabet, A, of q symbols is said to be a q-ary linear block code if and only if A is a finite field (A = F_q) and C is a k-dimensional vector subspace of the vector space of all n-tuples, F_q^n. An immediate consequence of this definition is that if C is a linear block code, then every linear combination of codewords is also a codeword. Another immediate consequence is that the zero word, 0, must be a codeword.

The Hamming weight of a codeword c is the number of non-zero elements of c, denoted wt(c). Note that the Hamming weight of c is the Hamming distance between c and the zero vector, 0. That is, wt(c) = d_H(c, 0). The minimum weight of a code, C, is the minimum weight taken over all non-zero codewords. That is:

    w_min = \min_{c \in C, c \neq 0} wt(c).
For a linear code C, it is easy to show that the minimum distance of the code is equal to the minimum weight. How? Recall that

    d_min = \min_{c_i, c_j \in C, c_i \neq c_j} d_H(c_i, c_j).

Now, let c_i and c_j be two codewords that achieve this minimum, i.e., d_H(c_i, c_j) = d_min. Then c_k = c_i - c_j must also be a codeword (and must be non-zero), because it is a linear combination of two codewords. Now wt(c_k) = d_min. Thus, if d_min is the minimum distance of the code, then we can find a nonzero codeword with weight d_min. But there can't be any nonzero codeword with a smaller weight. (If there were, the Hamming distance between that codeword and the zero codeword would be smaller than the minimum distance of the code, a contradiction.)

The Generator Matrix

Let C be a q-ary (n, k) linear block code. Then, by definition, C is a k-dimensional vector subspace of F_q^n. Thus, there exists a basis for this code, {g_0, g_1, ..., g_{k-1}}, containing exactly k elements such that every codeword can be written as a linear combination of these vectors,

    c = m_0 g_0 + m_1 g_1 + ... + m_{k-1} g_{k-1},

where each m_i is in F_q. Thus, a basis for C can be used to create a bijection between messages, m, which are k-tuples from F_q, and codewords. That is, a basis specifies not only the code, but also an encoding function. Since the basis of the code is not unique, we could get the same code from a different basis, but this would produce a different encoding function.

If we place the basis elements as the rows of a matrix, then we can take the linear combinations by doing matrix multiplication. Let G be a matrix whose rows are the vectors {g_0, g_1, ..., g_{k-1}}. Then G is a k x n generator matrix for the code. Furthermore, if m is a row vector from F_q^k, then the encoding operation described above can be written as c = mG. Again, though, the basis is not unique, so the generator matrix G is not unique. We could switch to a different generator matrix G' which represents the same code but a different encoding operation. If G is a generator matrix of the code, then we can obtain other generator matrices of the code through elementary row operations on G in F_q.
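In practice the encoding operation c = mG is a one-line matrix multiplication. A minimal sketch over F_2, using one common (hypothetical) choice of generator matrix for the (7, 4) Hamming code:

```python
import numpy as np

# A generator matrix for the (7,4) binary Hamming code in the
# form G = [P | I_4] (one common choice; any basis of the code works).
G = np.array([
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
])

def encode(m, G):
    """Encode a message row vector m as c = mG over F_2."""
    return (np.array(m) @ G) % 2

# Every codeword is a linear combination of the rows of G.
c = encode([1, 0, 1, 1], G)
print(c)   # [1 0 0 1 0 1 1]
```

A different basis of the same code would give the same codeword set but a different message-to-codeword mapping.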
You might recall from linear algebra that the elementary row operations are:

Row switching. A row in the matrix can be switched with another row.

Row multiplication. A row in the matrix can be multiplied by a nonzero constant from F_q.

Row addition. A row in the matrix can be replaced by the sum of that row and a multiple of another row.

There is one other manipulation that we sometimes perform on generator matrices: switching columns. If two columns of a generator matrix are interchanged, then that will swap the corresponding

positions of the codewords. But it will not change the minimum distance of the code. Two linear codes that are the same except for a permutation of the components of the code are said to be equivalent codes. If G and G' are generator matrices of two equivalent codes, C and C', then G and G' are related by elementary row operations and column permutations.

Transforming between generators of a code, or between a generator for one code and a generator for an equivalent code, is desirable because it allows us to create encoders that are systematic. An encoder is said to be systematic if the message symbols m_0, m_1, ..., m_{k-1} can be found explicitly and unchanged in the codeword. That is, there exist coordinates i_0, i_1, ..., i_{k-1} such that c_{i_0} = m_0, c_{i_1} = m_1, ..., c_{i_{k-1}} = m_{k-1}. (Usually the coordinates are sequential, i.e., i_0, i_0 + 1, ..., i_0 + k - 1. Note also that this definition is not specific to linear codes; it applies to all block codes.) Being systematic is a property of the encoder, not a property of the code.

For a linear block code, a k x n generator matrix will produce a systematic encoding if the generator matrix contains each column of the k x k identity matrix. (The columns can be in any positions, in any order.) Often, we will write a systematic generator in the form G = [P I_k], where P is a k x (n-k) matrix that generates parity symbols and I_k is the k x k identity matrix. In this case, we have c = m[P I_k] = [mP m]. That is, the message symbols will appear as the last k coordinates of the codeword.

Given any generator matrix, G, we can perform elementary row operations to do Gaussian elimination and produce a systematic generator matrix, G', for the code. If we want G' to be in the form [P I_k], then some column switches are often required to get the matrix into this form. Recall that these column switches will produce an equivalent code, but will change the codeword set (by switching some codeword coordinates).

What have we gained by limiting ourselves to q-ary linear block codes, rather than allowing arbitrary q-ary (n, k) block codes?
Representing a linear block code requires only k vectors of length n or, alternately, a k x n matrix. Representing an arbitrary (n, k) block code would require a complete list of all q^k codewords.

Given a linear block code, we can implement the encoding operation easily as matrix multiplication.

Given a linear block code, we can construct a systematic encoder for the code through elementary row operations on the generator matrix. If desired, we can convert this to an equivalent code in a standard form. Using a systematic encoder further simplifies the encoding process, and will simplify the decoding process as well.
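The Gaussian elimination step described above can be sketched for binary codes as follows. This routine is an illustration, not a production implementation; it assumes G has full row rank and performs column swaps (yielding an equivalent code) only when forced to:

```python
import numpy as np

def systematic_form(G):
    """Row-reduce a full-rank binary generator matrix into G' = [P | I_k].
    Returns (G', column permutation applied). Column swaps, if any,
    produce an equivalent (not identical) code."""
    G = np.array(G) % 2
    k, n = G.shape
    perm = list(range(n))
    for i in range(k):
        col = n - k + i                  # target column for pivot i
        pivots = [r for r in range(i, k) if G[r, col] == 1]
        if not pivots:
            # no pivot available: swap in a column that has one
            for c in range(n):
                if any(G[r, c] for r in range(i, k)):
                    G[:, [col, c]] = G[:, [c, col]]
                    perm[col], perm[c] = perm[c], perm[col]
                    break
            pivots = [r for r in range(i, k) if G[r, col] == 1]
        G[[i, pivots[0]]] = G[[pivots[0], i]]   # row switch
        for r in range(k):                      # row addition clears the column
            if r != i and G[r, col] == 1:
                G[r] = (G[r] + G[i]) % 2

    return G, perm

# Scramble the basis of the (7,4) Hamming generator used earlier:
# still the same code, but no identity submatrix on the right.
G0 = np.array([[1,1,0,1,0,0,0],[0,1,1,0,1,0,0],[1,1,1,0,0,1,0],[1,0,1,0,0,0,1]])
G_scrambled = G0.copy()
G_scrambled[1] = (G_scrambled[1] + G_scrambled[0]) % 2
G_scrambled[3] = (G_scrambled[3] + G_scrambled[2]) % 2
Gs, perm = systematic_form(G_scrambled)
print(Gs)
```

Because the scrambled matrix spans the same code, the reduction here succeeds without any column swaps.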

The Parity Check Matrix

Since a linear code is a vector subspace, there must exist a dual space to the code. The dual space to a q-ary (n, k) linear block code C is the (n, n-k) dual code, denoted C^perp. Since C^perp is an (n-k)-dimensional vector space, we can find a basis for C^perp, {h_0, h_1, ..., h_{n-k-1}}. We can then form a matrix H using these basis vectors as rows. Such a matrix is called a parity check matrix for the code.

A generator matrix, G, and a parity check matrix, H, for a code C satisfy the property G H^T = 0. (Note that 0 here is a k x (n-k) zero matrix.) Furthermore, as we saw in Chapter 1, a vector c in F_q^n is a codeword of C if and only if c H^T = 0. When G is a systematic generator matrix in the form G = [P I_k], then a parity check matrix can be written down immediately: H = [I_{n-k} -P^T] (in the binary case the minus sign is immaterial, so H = [I_{n-k} P^T]). Thus, given a generator matrix, G, we can find a generator matrix for an equivalent code in the form G' = [P I_k] through elementary row operations and column switches. This equivalent code will then have a parity check matrix given by H' = [I_{n-k} -P^T]. If it is important that we find a parity check matrix for the original code, then we can reverse the set of column switches that were done on G' to obtain H, a parity check matrix for the original code.

The parity check matrix, H, provides another way to find the minimum distance of a linear block code, C: the minimum distance d_min of C is equal to the smallest number of columns of H that are linearly dependent. The parity check matrix can also be used to decode the code. We saw an example of this, for Hamming codes, in Chapter 1, and we will revisit it shortly for arbitrary linear block codes. But first, let's examine two fundamental bounds on the performance of block codes.

Two Fundamental Bounds

The observation above about the relationship between a parity check matrix and the minimum distance leads to a first bound on the minimum distance of a code.
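The relations H = [I_{n-k} | P^T] and G H^T = 0 from the parity check discussion are easy to verify numerically. A small sketch for the binary case, reusing the hypothetical P for the (7, 4) Hamming code from the earlier examples:

```python
import numpy as np

# Systematic generator G = [P | I_4] for the (7,4) Hamming code
# (same hypothetical choice of P as in the earlier sketches).
P = np.array([[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]])
G = np.hstack([P, np.eye(4, dtype=int)])
H = np.hstack([np.eye(3, dtype=int), P.T])   # H = [I_{n-k} | P^T]

# Every row of G is orthogonal to every row of H over F_2.
print((G @ H.T) % 2)   # the 4 x 3 zero matrix
```

Over F_2 the product works out to P + P = 0; for general F_q the same check would need the -P^T form.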
The minimum distance of an (n, k) linear block code is bounded by

    d_min <= n - k + 1.

This is called the Singleton bound. It can be seen as follows. Given an (n, k) linear block code, we have an (n-k) x n parity check matrix H that has n-k linearly independent rows. Thus, H has rank n-k. Since the row rank and the column rank of a matrix are the same, any collection of n-k+1 columns of H must be linearly dependent. Thus d_min <= n - k + 1. (The Singleton bound also applies to arbitrary, nonlinear codes, although a different proof is required.)

A code that satisfies the Singleton bound with equality is called a maximum distance separable (MDS) code. Among the most important families of linear block codes, the Reed-Solomon codes are

MDS codes.

We now return to our geometric interpretation of decoding, which we used to derive the error-correcting power of a code in Chapter 1. Specifically, we said that if a code is guaranteed to correct any t errors, then it must be the case that Hamming spheres of radius t around each codeword in the code do not overlap. We can count the number of points in a Hamming sphere of radius t for a code of length n over an alphabet of q symbols:

    V_q(n, t) = \sum_{j=0}^{t} \binom{n}{j} (q-1)^j.

Now, we have q^k codewords in an (n, k) code; thus there are q^k spheres of radius t for a t-error-correcting code, containing a total of q^k V_q(n, t) points. But there are only q^n total n-tuples. Thus we have:

    q^k V_q(n, t) <= q^n
    q^{n-k} >= V_q(n, t)
    n - k >= \log_q V_q(n, t).

This is called the Hamming bound. A code that satisfies the Hamming bound with equality is called a perfect code. The quantity on the left side of the Hamming bound, n - k, is essentially the number of parity symbols of the code. This is sometimes called the redundancy of the code and denoted r.

The set of perfect codes is quite limited. In fact, the entire set of perfect codes is known:

the set of all n-tuples, with d_min = 1 and t = 0,

odd-length binary repetition codes,

binary Hamming codes (which are linear) and other non-linear codes with equivalent parameters,

the binary (23, 12) Golay code G_23, with d_min = 7 and t = 3, and

the ternary (i.e., over F_3) (11, 6) Golay code G_11, with d_min = 5.

(It may be surprising to you that the perfect codes are not necessarily particularly good codes. This is because "perfect" is simply a description of how well the Hamming spheres are packed together and often does not translate into suitability for practical applications. Your book describes the Golay codes in Chapter 8; they are of some theoretical importance, but have seen little practical application, and we will not study them in this course.)

Hard-Input Error Correction

We have already seen that if C is a q-ary linear block code with parity check matrix H, then it is easy to use C for error detection.
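The Hamming bound, and the perfect codes listed above, can be checked numerically with the sphere-counting formula for V_q(n, t):

```python
from math import comb

def V(q, n, t):
    """Number of points in a Hamming sphere of radius t in F_q^n."""
    return sum(comb(n, j) * (q - 1) ** j for j in range(t + 1))

# The binary (7,4) Hamming code (t = 1) meets the bound with equality,
# so it is a perfect code: 2^4 * V_2(7,1) = 2^7.
print(2**4 * V(2, 7, 1), 2**7)

# The binary (23,12) Golay code (t = 3) is also perfect.
print(2**12 * V(2, 23, 3), 2**23)

# ...as is the ternary (11,6) Golay code (t = 2).
print(3**6 * V(3, 11, 2), 3**11)
```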
Given a received vector r, we compute r H^T. If this is the zero vector, 0,

then r is a codeword and no error is detected. Otherwise, an error is detected.

At this point, your book spends quite a lot of space on the construction and use of the standard array. I briefly summarize this approach. The standard array is a table that contains all possible q-ary words of length n. The first row contains all codewords in C, with the first entry in the first row being the all-zero codeword, 0. From this point, additional rows are added to the table as follows:

1. Select an error vector, e, of minimum weight from among all n-tuples that do not yet appear in the table. Enter it as the first entry in a new row.

2. Complete the new row by making each entry in the new row the sum of the chosen error vector e and the codeword c at the top of the corresponding column.

3. If there are n-tuples that do not yet appear in the table, go to step 1.

It is possible to show that every possible n-tuple will appear in the table exactly once. Moreover, each row of the table forms a coset of the form e + C. (Remember that C is a vector space, and therefore C forms a group under addition.)

The standard array can be used for decoding in a very simple manner. For any received vector r, locate the vector in the table. Decode r to the codeword c that appears at the top of the column in which r is found. Because of the manner in which the standard array is constructed, this codeword will minimize d_H(r, c). The standard array decoder is a complete decoder, because every n-tuple appears in the array. Unless the code is perfect, though, there will be some received words r for which the codeword that minimizes d_H(r, c) is not unique.

If one wishes to use the standard array technique to construct a bounded distance decoder that can decode t errors, then one should stop constructing the standard array after all rows corresponding to error patterns of weight t have been added.
In this case, if the received vector r is found in the partial array, then decoding proceeds as before. If it is not found, then a decoder failure is declared.

The standard array decoder suffers from major problems:

For codes of reasonable length, the size of the standard array is huge. Recall that the standard array contains all q^n vectors in F_q^n. This is too many to represent in memory for many reasonable choices of q and n.

Decoding requires searching the entire table to match r. This is not an efficient operation.

A first step in reducing memory size and decoding complexity is syndrome decoding. As we noted in Chapter 1, for any received word r = c + e, the syndrome given by s = r H^T is a function of only e and not c (because c H^T = 0). It follows that every entry in a row of the standard array will have the same syndrome. Thus, instead of constructing a standard array, we can construct a syndrome table. This table will still have q^{n-k} rows. (For modern codes, this may still be a very large number. For instance, a (256, 200) binary code, not particularly long by modern standards, would have 2^56 entries in its syndrome table.) Each row will consist of an error pattern (of length n) and a syndrome (of length n-k). Decoding then consists of the following process:

1. Compute the syndrome of the received word, s = r H^T.

2. Look up the syndrome in the syndrome table to find the associated error pattern, e.

3. Output c = r - e as the codeword.

As for standard arrays, the syndrome table can be constructed to support either a complete or a bounded distance decoder.

The Hamming Codes and the Weight Distributions of Codes

As we noted in Chapter 1, for any integer m >= 2, a binary Hamming code is a (2^m - 1, 2^m - m - 1) binary code with d_min = 3 which may be defined by its parity check matrix, which will contain all non-zero binary m-tuples as columns. Based on what we now know about linear block codes, we can see that it will be convenient to write H so that the m x m identity matrix appears as the first m columns of H, so that H = [I_m P^T]. Of course, when we do this, we can immediately write down a systematic generator matrix G = [P I_k], where k = 2^m - m - 1. Note that this actually tells us something about P: if constructed in this manner, the rows of P will consist of all binary m-tuples with weight 2 or higher.
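The syndrome table is tiny for a single-error-correcting Hamming code: just the zero pattern plus the n weight-1 error patterns. A decoding sketch for the (7, 4) code, with the same hypothetical P as in the earlier examples:

```python
import numpy as np

# H = [I_3 | P^T]: its columns are all seven non-zero binary 3-tuples.
P = np.array([[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]])
H = np.hstack([np.eye(3, dtype=int), P.T])

# Syndrome table: map each correctable error pattern's syndrome to it.
table = {}
for e in [np.zeros(7, dtype=int)] + [np.eye(7, dtype=int)[i] for i in range(7)]:
    s = tuple((e @ H.T) % 2)
    table[s] = e

def decode(r):
    """Correct up to one bit error: look up the syndrome, subtract e."""
    s = tuple((np.array(r) @ H.T) % 2)
    return (np.array(r) - table[s]) % 2

c = np.array([1, 0, 0, 1, 0, 1, 1])   # a codeword (from the encoding sketch)
r = c.copy()
r[5] ^= 1                             # flip one bit
print(decode(r))                      # recovers c
```

Because the syndrome of a weight-1 error is just the corresponding column of H, and all columns of H are distinct and non-zero, every single-bit error maps to a unique table entry.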
Because of the form of H, it is easy to see that for any m, any two columns will be linearly independent and that we can pick a set of 3 columns that is linearly dependent. Thus, we have d_min = 3 for all Hamming codes.

Let C be a q-ary (n, k) linear block code. Let A_i denote the number of codewords of weight i in C. Then {A_0, A_1, ..., A_n} is called the weight distribution of the code. The weight distribution is often represented as a polynomial,

    A(z) = A_0 + A_1 z + A_2 z^2 + ... + A_n z^n.

This polynomial, A(z), is called the weight enumerator of the code.
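For a small code, the weight distribution can simply be computed by enumerating all q^k codewords. A sketch for the (7, 4) Hamming code (same hypothetical generator as in the earlier examples):

```python
from itertools import product
import numpy as np

G = np.array([
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
])

# Enumerate all 2^4 messages, encode each, and tally codeword weights.
A = [0] * 8
for m in product([0, 1], repeat=4):
    c = (np.array(m) @ G) % 2
    A[int(c.sum())] += 1

print(A)   # [1, 0, 0, 7, 7, 0, 0, 1]
```

The result A(z) = 1 + 7z^3 + 7z^4 + z^7 holds for any (7, 4) Hamming code, not just this choice of basis.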

The weight distribution is important in finding probabilities of error for the code. The weight distribution of a code can be difficult to characterize. Sometimes, it is easier to characterize the weight distribution of the dual code. In this case, the MacWilliams identity can be used to obtain the weight distribution of the code. Let C be a q-ary (n, k) linear block code with weight enumerator A(z), and let B(z) be the weight enumerator of C^perp. Then

    B(z) = q^{-k} (1 + (q-1)z)^n A( (1-z) / (1 + (q-1)z) ).

This is the MacWilliams identity. It is slightly more useful if we turn it around and write

    A(z) = q^{-(n-k)} (1 + (q-1)z)^n B( (1-z) / (1 + (q-1)z) ).

The dual code of a (2^m - 1, 2^m - m - 1) Hamming code is a (2^m - 1, m) code called a simplex code. Because it is the dual of the Hamming code, a matrix containing all non-zero binary m-tuples as columns can be used as a generator matrix for the simplex code. In general, it can be shown that all nonzero codewords of this dual code have weight 2^{m-1}, and every pair of codewords is at a distance 2^{m-1} apart. Thus, we can immediately write down the weight enumerator for the dual code:

    B(z) = 1 + (2^m - 1) z^{2^{m-1}}.

We can then apply the MacWilliams identity to find the weight enumerator for a Hamming code (using n = 2^m - 1, so that 2^m = n + 1 and 2^{m-1} = (n+1)/2):

    A(z) = 2^{-m} (1+z)^n B( (1-z)/(1+z) )
         = 2^{-m} (1+z)^n [ 1 + (2^m - 1) (1-z)^{2^{m-1}} / (1+z)^{2^{m-1}} ]
         = 1/(n+1) [ (1+z)^n + n (1-z)^{(n+1)/2} (1+z)^{(n-1)/2} ]
         = 1/(n+1) [ (1+z)^n + n (1-z) [(1-z)(1+z)]^{(n-1)/2} ]
         = 1/(n+1) [ (1+z)^n + n (1-z)(1-z^2)^{(n-1)/2} ].

Performance of Linear Codes

One of the things that I like about your textbook is that it does not defer discussion of code performance. But, in order to discuss code

performance, we need some notation that we will use for the remainder of the semester. Although your book does not make this clear initially, some of these probabilities are useful when discussing the performance of an error detector, and some are useful when discussing the performance of error correction. I separate them for clarity.

Performance of Error Detection

For an error detector, we have two significant word error probabilities:

P_d(E) is the probability of detected codeword error. That is, the probability that one or more errors occur in a codeword and are detected.

P_u(E) is the probability of undetected codeword error. That is, the probability that one or more errors occur in a codeword and are not detected.

Your book also defines two bit error probabilities associated with error detection, P_db and P_ub. I think these are not so important given the way that error detection is usually applied in practice. In practice, error detection is often applied to packets: the entire packet is the codeword. If a detected codeword error occurs, then the packet is retransmitted. Undetected codeword errors are important, because they represent a corrupted packet that is accepted as correct. But the probability of an undetected codeword error can be driven to extremely low levels by modern error detection systems, so the resulting bit error probabilities are usually not the key figure of merit.

For now, let us assume that we are using a binary linear block code over a BSC with crossover probability p. An undetected error will occur only if the received word r is a codeword. Now r = c + e. But since C is a binary linear block code, it is closed under vector addition. Thus r is a codeword if and only if e is a codeword. Thus, the probability of an undetected error is the probability that e is a nonzero codeword. Computing this requires that we know the code's weight distribution.
Once we know this, though, it is straightforward:

    P_u(E) = \sum_{j=d_min}^{n} A_j p^j (1-p)^{n-j}.

The probability of a detected codeword error is then simply the probability that one or more bit errors occurs, minus the probability

that the error is undetected:

    P_d(E) = \sum_{j=1}^{n} \binom{n}{j} p^j (1-p)^{n-j} - P_u(E) = 1 - (1-p)^n - P_u(E).

If the weight distribution is unavailable, though, then we cannot compute these probabilities exactly. We can bound them, though:

    P_u(E) <= \sum_{j=d_min}^{n} \binom{n}{j} p^j (1-p)^{n-j}
    P_d(E) <= \sum_{j=1}^{n} \binom{n}{j} p^j (1-p)^{n-j} = 1 - (1-p)^n.

Performance of Error Correction

Again, we need to define some probabilities that we will use for the remainder of the semester when discussing the performance of codes.

P(E) is the probability of decoder error. That is, it is the probability that the codeword at the output of the decoder is not the same as the codeword at the input of the encoder.

P(F) is the probability of decoder failure. That is, it is the probability that the decoder is unable to return a codeword. Note that for a complete decoder we will have P(F) = 0, but P(F) > 0 for a bounded distance decoder.

P_b(E), or P_b, is the probability of bit error, also known as the bit error rate. It is the probability that the decoded message bits are not the same as the encoded message bits. Note that a decoder error may cause from 1 to k bit errors in the output, so computing P_b exactly requires knowledge of the encoder.

If we are using standard array decoding, then the probability of correcting an error is the probability that the specific error pattern appears in the first column of the standard array; these error patterns are called the coset leaders. (Remember that each row of the standard array forms a coset of the code.) Note that all errors of weight less than or equal to floor((d_min - 1)/2) will be represented as coset leaders, but, if we have a complete standard array, then there will typically be some additional error patterns as well. (This is not the case for Hamming codes, though, because they are perfect codes.) Computing the probability of decoder error precisely, then, requires knowing the weight distribution of the coset leaders, which we denote {alpha_0, alpha_1, ..., alpha_n}.
If this distribution is known and we have a complete standard array, then the probability of a decoder error is the probability that an error occurs that is not a coset leader:

    P(E) = 1 - \sum_{j=0}^{n} \alpha_j p^j (1-p)^{n-j}.
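These error-detection and standard-array formulas are straightforward to evaluate. A sketch for the (7, 4) Hamming code on a BSC; the weight distribution (1, 0, 0, 7, 7, 0, 0, 1) and coset-leader counts alpha_0 = 1, alpha_1 = 7 follow from the code being perfect with d_min = 3:

```python
n, k, p = 7, 4, 0.01
A = [1, 0, 0, 7, 7, 0, 0, 1]   # weight distribution of the (7,4) code

# Undetected-error probability: the error pattern is a nonzero codeword.
P_u = sum(A[j] * p**j * (1 - p)**(n - j) for j in range(1, n + 1))

# Detected-error probability: some error occurs, but not an undetected one.
P_d = 1 - (1 - p)**n - P_u

# Decoder-error probability for a complete (standard array) decoder:
# for this perfect code the coset leaders are the 8 patterns of weight <= 1.
alpha = [1, 7]
P_E = 1 - sum(alpha[j] * p**j * (1 - p)**(n - j) for j in range(2))

print(P_u, P_d, P_E)
```

At p = 0.01 the undetected-error probability is orders of magnitude below the detected-error probability, which is the behavior an ARQ system relies on.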

Most hard-decision decoders for modern error control codes, though, are bounded distance decoders. For a bounded distance decoder, all errors of weight less than or equal to floor((d_min - 1)/2) will be correctly decoded. Errors of greater weight can either cause decoder error or decoder failure. Let P_l^j be the probability that a received word r is exactly Hamming distance l from a particular codeword of weight j. Then one can show that:

    P_l^j = \sum_{r=0}^{l} \binom{j}{l-r} \binom{n-j}{r} p^{j-l+2r} (1-p)^{n-j+l-2r}.

In that case, we can compute that for a binary (n, k) code with weight distribution {A_j}, the probability of decoding error for a bounded distance decoder is

    P(E) = \sum_{j=d_min}^{n} A_j \sum_{l=0}^{\lfloor (d_min-1)/2 \rfloor} P_l^j.

The probability of decoder failure for the bounded distance decoder is the probability that the received word does not fall into any of the decoding spheres. That is, it is the probability that we have neither correct decoding nor a decoder error:

    P(F) = 1 - \sum_{j=0}^{\lfloor (d_min-1)/2 \rfloor} \binom{n}{j} p^j (1-p)^{n-j} - P(E).

As noted previously, a decoder error can cause from 1 to k bit errors at the output of the decoder. Thus, it is easy to put (relatively loose) bounds on P_b:

    (1/k) P(E) <= P_b <= P(E).

Sadly, finding exact expressions for P_b, the bit error rate, requires that we know the relationship between the weights of the message words and the weights of the corresponding codewords. This can be simple to determine computationally for small codes, but is often intractable for large codes. If this information is known, then it can be summarized by {beta_0, beta_1, ..., beta_n}, where beta_j is the total weight of all message words associated with codewords of weight j. In this case, we can show that

    P_b(E) = (1/k) \sum_{j=d_min}^{n} beta_j \sum_{l=0}^{\lfloor (d_min-1)/2 \rfloor} P_l^j.
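The bounded distance formulas above can be evaluated directly. A sketch for the (7, 4) Hamming code with t = 1; because the code is perfect, the computed P(F) should come out to zero:

```python
from math import comb

n, k, p, t = 7, 4, 0.01, 1
A = [1, 0, 0, 7, 7, 0, 0, 1]
d_min = 3

def P_jl(j, l):
    """Probability that r lies exactly at distance l from a weight-j codeword."""
    return sum(comb(j, l - r) * comb(n - j, r)
               * p**(j - l + 2 * r) * (1 - p)**(n - j + l - 2 * r)
               for r in range(l + 1))

# Decoder error: r falls in the decoding sphere of some wrong codeword.
P_E = sum(A[j] * sum(P_jl(j, l) for l in range(t + 1))
          for j in range(d_min, n + 1))

# Decoder failure: r falls in no sphere (zero for a perfect code).
P_F = 1 - sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1)) - P_E

print(P_E, P_F)
```

The same P_E as in the complete-decoder sketch falls out here, as it must for a perfect code, where the bounded distance decoder and the complete decoder coincide.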

Performance of Soft-Input Decoding

Although we haven't discussed the implementation of a soft-input decoder in this chapter, we have previously seen that soft-input decoders often offer better performance than hard-input decoders. (In fact, it is easy to prove that a soft-input decoder can always perform at least as well as a hard-input decoder.) So, it is worth assessing how well such a decoder might perform.

Suppose the codewords of an (n, k) code C with minimum distance d_min are modulated using BPSK with energy E_c = R E_b per coded bit and transmitted through an AWGN channel with noise variance sigma^2 = N_0/2. The transmitted vector s is a point in n-dimensional space. An exercise in Chapter 1 shows that the Euclidean distance between two BPSK-modulated codewords is d_E = 2 sqrt(E_c d_H). Suppose that there are an average of K codewords at distance d_min from a codeword. Then the probability that the received word r is mistaken for one of these neighboring codewords is approximately given by the union bound:

    P(E) ~ K Q( d_{E,min} / (2 sigma) ) = K Q( sqrt( 2 R d_min E_b / N_0 ) ).

Neglecting the constant factor K, then, we see that we achieve comparable performance between the uncoded and coded systems with SNR of E_b/N_0 for the uncoded system and R d_min E_b/N_0 for the coded system. This quantity R d_min is sometimes known as the asymptotic coding gain of a code. It tends to apply only at large SNR (which is why it is called asymptotic), and it presumes soft decoding. Nevertheless, it can be used for a quick back-of-envelope comparison of two codes.

Erasure Decoding

So far, we have assumed, at least for hard-input systems, that errors flip bits. That is, an error occurs when a 0 becomes a 1 or a 1 becomes a 0. In some systems, we can have another kind of error, known as an erasure. When an erasure occurs, the bit is somehow lost and the 0 or 1 becomes a non-symbol, sometimes represented as an e. (The e of course stands for erasure, not to be confused with error.) Note that an erasure is better than an error.
When an error occurs, you don't know where it occurred. When an erasure occurs, you don't know what the bit was, but at least you know where the problem occurred. The mathematics bears this out: we will see that we can correct twice as many erasures as errors.

There are a few different ways that erasures can occur. They can be declared by conventional receivers when a received signal is ambiguous. (For example, if a received BPSK symbol is too close to 0, then we could declare an erasure.) This allows a kind of partial soft-input decoding that is simpler to implement than full soft-input decoding. Or, if coding is implemented in a packet network in a way that allows codewords to

span multiple packets (this could be done by interleaving codeword symbols, as discussed in your book), then when a packet is lost the symbols that it contained can be declared to be erasures.

Suppose that f erasures have occurred in a codeword, and consider the code in which the corresponding symbols are deleted. This code will still have minimum distance at least d_min - f. Thus, if no errors occurred, we will be able to fill in the erasures provided that d_min - f >= 1, that is, provided that f <= d_min - 1. Thus, a code that can correct floor((d_min - 1)/2) errors can correct (or "fill in") up to d_min - 1 erasures when no errors occur. Moreover, if f erasures have occurred, then we should still be able to correct floor((d_min - f - 1)/2) errors in the remaining symbols. The easiest way to put all of these facts together is to say: if there are e errors and f erasures, they can be corrected provided that 2e + f < d_min.

Implementing erasure decoding is actually very easy for binary codes. Here is a procedure that works with any decoding algorithm:

1. Place zeros in all the erased coordinates and decode. Call the resulting codeword c_0.

2. Place ones in all the erased coordinates and decode. Call the resulting codeword c_1.

3. Output whichever of c_0 and c_1 is closest to the received word r.

This procedure works because if there are f erasures, then one of the two substitutions will create no more than floor(f/2) additional errors. Thus, if 2e + f < d_min, then one of the substitutions will have no more than e + floor(f/2) errors and can be corrected. Note that if the decoder being used is a bounded distance decoder, then one of the two decoding attempts may result in decoder failure. But they will not both fail provided that 2e + f < d_min.

Modifying Linear Codes

Often, we may be unable to design a code with exactly the desired properties. Or, it may be convenient to design a single code with associated encoders and decoders and then use several different modifications of the code in practice.
A code is extended by adding an additional parity symbol. Thus, an (n, k) code becomes an (n+1, k) code. Often, if the parity symbol is chosen carefully, the minimum distance will also increase by 1.

A code is punctured by deleting one of its parity symbols. A punctured (n, k) code will become an (n-1, k) code, and the minimum distance will be reduced by at most 1. (The minimum distance will be reduced by one if one of the minimum-distance codewords is punctured in a nonzero position.) Puncturing is probably the most common alteration made to a code. A low-rate code can be designed for the most severe anticipated channel conditions. Then, when channel conditions are less severe, that low-rate code can be punctured (often repeatedly) to obtain a higher rate code (with weaker error correction capabilities). Note that puncturing a code is the opposite of extending a code.

A code is expurgated by deleting some of its codewords. If the number of codewords is reduced by a factor of q (for a q-ary linear block code), then an (n, k) code can become an (n, k-1) code. The resulting code may or may not still be linear. Expurgating a code may increase the minimum distance.

A code is augmented by adding new codewords. The new code may or may not be linear, and the minimum distance may decrease. It is possible that an (n, k) code could become an (n, k+1) code, if the number of codewords is increased by a factor of q. Note that augmenting is the opposite of expurgating.

A code is lengthened by adding a message symbol. Thus, an (n, k) linear block code can become an (n+1, k+1) linear block code.

A code is shortened by deleting a message symbol, turning an (n, k) linear block code into an (n-1, k-1) linear block code. Shortening is the opposite of lengthening.

There is a further connection between the various code modifications that is best illustrated by Figure 3.2 in the textbook. Namely, the various modifications can be combined in different ways that may produce equivalent results. For example, lengthening a code (adding a message symbol) and then puncturing the code (removing a parity symbol) is potentially equivalent to augmenting the code (adding codewords).
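Puncturing is easy to experiment with numerically. A sketch, reusing the hypothetical (7, 4) Hamming generator from the earlier examples: deleting one parity coordinate gives a (6, 4) code, and a brute-force search shows its minimum distance drops from 3 to 2.

```python
from itertools import product
import numpy as np

G = np.array([
    [1, 1, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1],
])

def min_distance(G):
    """Minimum distance = minimum weight over all nonzero codewords."""
    k = G.shape[0]
    weights = [int(((np.array(m) @ G) % 2).sum())
               for m in product([0, 1], repeat=k) if any(m)]
    return min(weights)

G_punct = np.delete(G, 0, axis=1)   # puncture: delete the first parity column
print(min_distance(G), min_distance(G_punct))   # 3 2
```

Here the distance drops because some minimum-distance codeword is nonzero in the punctured position, exactly the condition noted above.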


More information

ELEC 691X/498X Broadcast Signal Transmission Winter 2018

ELEC 691X/498X Broadcast Signal Transmission Winter 2018 ELEC 691X/498X Broadcast Signal Transmission Winter 2018 Instructor: DR. Reza Soleymani, Office: EV 5.125, Telephone: 848 2424 ext.: 4103. Office Hours: Wednesday, Thursday, 14:00 15:00 Slide 1 In this

More information

Worst-case running time for RANDOMIZED-SELECT

Worst-case running time for RANDOMIZED-SELECT Worst-case running time for RANDOMIZED-SELECT is ), even to nd the minimum The algorithm has a linear expected running time, though, and because it is randomized, no particular input elicits the worst-case

More information