LDPC Codes - a brief Tutorial


Bernhard M. J. Leiner, Stud. ID: 53418L
bleiner@gmail.com
April 8, 2005

1 Introduction

Low-density parity-check (LDPC) codes are a class of linear block codes. The name comes from the characteristic of their parity-check matrix, which contains only a few 1s in comparison to the amount of 0s. Their main advantage is that they provide a performance which is very close to the channel capacity for many different channels, together with linear-time algorithms for decoding. Furthermore, they are suited for implementations that make heavy use of parallelism.

They were first introduced by Gallager in his PhD thesis in 1960. But due to the computational effort of implementing encoder and decoder for such codes, and due to the introduction of Reed-Solomon codes, they were mostly ignored until about ten years ago.

1.1 Representations for LDPC codes

Basically there are two different possibilities to represent LDPC codes. Like all linear block codes, they can be described via matrices. The second possibility is a graphical representation.

Matrix Representation

Let's look at an example of a low-density parity-check matrix first. The matrix defined in equation (1) is a parity-check matrix of dimension m x n = 4 x 8 for an (8, 4) code:

        [ 0 1 0 1 1 0 0 1 ]
    H = [ 1 1 1 0 0 1 0 0 ]        (1)
        [ 0 0 1 0 0 1 1 1 ]
        [ 1 0 0 1 1 0 1 0 ]

We can now define two numbers describing this matrix: w_r, the number of 1s in each row, and w_c, the number of 1s in each column. For a matrix to be called low-density, the two conditions w_c << m and w_r << n must be satisfied. For that to hold, the parity-check matrix has to be very large, so the example matrix can't really be called low-density.

Graphical Representation

Tanner introduced an effective graphical representation for LDPC codes in 1981. Not only do these graphs provide a complete representation of the code, they also help to describe the decoding algorithm, as explained later on in this tutorial.

Tanner graphs are bipartite graphs. That means that the nodes of the graph are separated into two distinct sets, and edges only connect nodes of different types. The two types of nodes in a Tanner graph are called variable nodes (v-nodes) and check nodes (c-nodes).

Figure 1 shows the Tanner graph corresponding to the parity-check matrix in equation (1): check nodes f_0, ..., f_3 on top, variable nodes c_0, ..., c_7 below. The marked path c_2 -> f_1 -> c_5 -> f_2 -> c_2 is an example of a short cycle; such cycles should usually be avoided, since they are bad for decoding performance.

The creation of such a graph is rather straightforward: it consists of m check nodes (the number of parity bits) and n variable nodes (the number of bits in a codeword). Check node f_i is connected to variable node c_j if the element h_ij of H is a 1.
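The correspondence between H and the Tanner graph is mechanical enough to write down in a few lines. The sketch below (Python; the helper name tanner_graph is mine, not from the tutorial) builds the two adjacency lists for the example matrix of equation (1):

```python
# Parity-check matrix H from equation (1): m = 4 check nodes, n = 8 variable nodes.
H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]

def tanner_graph(H):
    """Connect check node f_i and variable node c_j whenever h_ij = 1."""
    m, n = len(H), len(H[0])
    f_nb = [[j for j in range(n) if H[i][j]] for i in range(m)]  # neighbours of f_i
    c_nb = [[i for i in range(m) if H[i][j]] for j in range(n)]  # neighbours of c_j
    return f_nb, c_nb

f_nb, c_nb = tanner_graph(H)
print(f_nb[1])            # [0, 1, 2, 5]: f_1 checks c_0 + c_1 + c_2 + c_5 = 0 (mod 2)
print(c_nb[2], c_nb[5])   # [1, 2] [1, 2]: c_2 and c_5 share two check nodes
```

That c_2 and c_5 share the two check nodes f_1 and f_2 is precisely the length-4 cycle marked in figure 1.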

1.2 Regular and irregular LDPC codes

An LDPC code is called regular if w_c is constant for every column and w_r = w_c * (n/m) is also constant for every row. The example matrix from equation (1) is regular with w_c = 2 and w_r = 4. The regularity of this code can also be seen in the graphical representation: every v-node has the same number of incident edges, and likewise every c-node. If H is low-density but the number of 1s per row or column isn't constant, the code is called an irregular LDPC code.

1.3 Constructing LDPC codes

Several different algorithms exist to construct suitable LDPC codes. Gallager himself introduced one. Furthermore, MacKay proposed one to semi-randomly generate sparse parity-check matrices. This is quite interesting, since it indicates that constructing well-performing LDPC codes is not a hard problem; in fact, completely randomly chosen codes are good with high probability. The problem that arises is that the encoding complexity of such codes is usually rather high.

2 Performance & Complexity

Before describing decoding algorithms in section 3, I would like to explain why all this effort is needed. The ability of LDPC codes to perform near the Shannon limit (1) of a channel exists only for large block lengths. For example, there have been simulations that perform within 0.04 dB of the Shannon limit at a bit error rate of 10^-6 with a block length of 10^7. An interesting fact is that those high-performance codes are irregular.

The large block length also results in large parity-check and generator matrices. The complexity of multiplying a message with a matrix depends on the number of 1s in the matrix. If we put the sparse matrix H into the form [P^T I] via Gaussian elimination, the generator matrix G can be calculated as G = [I P]. The sub-matrix P is generally not sparse, so the encoding complexity will be quite high.

(1) Shannon limit: Shannon proved that reliable communication over an unreliable channel is only possible with code rates not exceeding a certain limit, the channel capacity.
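The regularity definition of section 1.2 is easy to verify mechanically for the example code; a minimal sketch (Python, variable names my own):

```python
H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]
m, n = len(H), len(H[0])

row_weights = [sum(row) for row in H]                              # w_r per row
col_weights = [sum(H[i][j] for i in range(m)) for j in range(n)]   # w_c per column

# Regular: all row weights equal and all column weights equal.
regular = len(set(row_weights)) == 1 and len(set(col_weights)) == 1
w_r, w_c = row_weights[0], col_weights[0]
print(regular, w_c, w_r)   # True 2 4
assert w_r * m == w_c * n  # the relation w_r = w_c * (n / m)
```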

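The relation between G = [I P] and H = [P^T I] from section 2 holds for any binary sub-matrix P, since G H^T = P + P = 0 over GF(2). The example matrix of equation (1) is not in this systematic form, so the sketch below (Python) uses a small hypothetical P, chosen only for illustration, to build both matrices and encode a message:

```python
# Hypothetical k x m sub-matrix P (k = 4 message bits, m = 4 parity bits).
P = [
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
]
k = len(P)
m = len(P[0])
I_k = [[int(i == j) for j in range(k)] for i in range(k)]
I_m = [[int(i == j) for j in range(m)] for i in range(m)]

G = [I_k[i] + P[i] for i in range(k)]                         # G = [I | P]
H = [[P[i][j] for i in range(k)] + I_m[j] for j in range(m)]  # H = [P^T | I]

# G * H^T must be the all-zero matrix over GF(2).
GHt = [[sum(g * h for g, h in zip(grow, hrow)) % 2 for hrow in H] for grow in G]
assert all(v == 0 for row in GHt for v in row)

def encode(u, G):
    """Codeword c = u * G (mod 2); systematic: c[:k] equals the message u."""
    return [sum(u[i] * G[i][t] for i in range(len(u))) % 2 for t in range(len(G[0]))]

c = encode([1, 0, 1, 1], G)
print(c)  # [1, 0, 1, 1, 0, 1, 0, 0] -- message bits followed by parity bits
```

With a sparse H brought into this form the same recipe applies, but P is generally dense, which is exactly the encoding-complexity problem noted above.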
Since the complexity grows in O(n^2), even sparse matrices don't result in good performance if the block length gets very large. So iterative decoding (and encoding) algorithms are used. These algorithms perform local calculations and pass the local results around via messages; this step is typically repeated several times. The term "local calculations" already indicates that a divide-and-conquer strategy is realized, which separates a complex problem into manageable sub-problems. A sparse parity-check matrix helps these algorithms in several ways. First, it keeps the local calculations simple, and it also reduces the complexity of combining the sub-problems by reducing the number of messages needed to exchange all the information. Furthermore, it was observed that iterative decoding algorithms of sparse codes perform very close to the optimal maximum-likelihood decoder.

3 Decoding LDPC codes

The algorithm used to decode LDPC codes was discovered independently several times, and as a matter of fact comes under different names. The most common ones are the belief propagation algorithm (BPA), the message passing algorithm (MPA) and the sum-product algorithm (SPA).

In order to explain this algorithm, a very simple variant which works with hard decisions will be introduced first. Later on, the algorithm will be extended to work with soft decisions, which generally leads to better decoding results. Only binary symmetric channels (BSCs) will be considered.

3.1 Hard-decision decoding

The algorithm will be explained using the example code already introduced in equation (1) and figure 1. An error-free received codeword would be e.g. c = [1 0 0 1 0 1 0 1]. Let's suppose that we have a BSC and receive this codeword with one error: bit c_1 is flipped to 1.

1. (c_i -> f_j) In the first step, every v-node c_i sends a message to its c-nodes f_j (always two in our example) containing the bit it believes to be the correct one for itself. At this stage, the only information a v-node c_i has is the corresponding received i-th bit of c, namely y_i. That means, for example, that c_0 sends a message containing 1 to f_1 and f_3, node c_1 sends messages containing y_1 (1) to f_0 and f_1, and so on.

2. (f_j -> c_i) In the second step, every check node f_j calculates a response to every connected variable node. The response message contains the bit that f_j believes to be the correct one for this v-node c_i, assuming that the other v-nodes connected to f_j are correct. In other words: in the example, every c-node f_j is connected to 4 v-nodes, so a c-node looks at the messages received from three v-nodes and calculates the bit that the fourth v-node should have in order to fulfill the parity-check equation. Table 1 gives an overview of this step.

   This might also be the point at which the decoding algorithm terminates, namely if all check equations are fulfilled. We will see later that the whole algorithm contains a loop, so another possibility to stop is a threshold on the number of iterations.

   c-node | received                        | sent
   f_0    | c_1: 1, c_3: 1, c_4: 0, c_7: 1  | 0 -> c_1, 0 -> c_3, 1 -> c_4, 0 -> c_7
   f_1    | c_0: 1, c_1: 1, c_2: 0, c_5: 1  | 0 -> c_0, 0 -> c_1, 1 -> c_2, 0 -> c_5
   f_2    | c_2: 0, c_5: 1, c_6: 0, c_7: 1  | 0 -> c_2, 1 -> c_5, 0 -> c_6, 1 -> c_7
   f_3    | c_0: 1, c_3: 1, c_4: 0, c_6: 0  | 1 -> c_0, 1 -> c_3, 0 -> c_4, 0 -> c_6

   Table 1: Overview of the messages received and sent by the c-nodes in step 2 of the message passing algorithm.

3. (c_i -> f_j) Next phase: the v-nodes receive the messages from the check nodes and use this additional information to decide whether their originally received bit is OK. A simple way to do this is a majority vote. Coming back to our example, each v-node now has three sources of information concerning its bit: the originally received bit and two suggestions from the check nodes. Table 2 illustrates this step. Now the v-nodes can send another message with their (hard) decision for the correct value to the check nodes.

   v-node | y_i | messages from check nodes | decision
   c_0    |  1  | f_1: 0, f_3: 1            | 1
   c_1    |  1  | f_0: 0, f_1: 0            | 0
   c_2    |  0  | f_1: 1, f_2: 0            | 0
   c_3    |  1  | f_0: 0, f_3: 1            | 1
   c_4    |  0  | f_0: 1, f_3: 0            | 0
   c_5    |  1  | f_1: 0, f_2: 1            | 1
   c_6    |  0  | f_2: 0, f_3: 0            | 0
   c_7    |  1  | f_0: 0, f_2: 1            | 1

   Table 2: Step 3 of the described decoding algorithm. The v-nodes use the answer messages from the c-nodes to perform a majority vote on the bit value.

4. Go to step 2.

In our example, the second execution of step 2 would terminate the decoding process, since c_1 has voted for 0 in the last step. This corrects the transmission error, and all check equations are now satisfied.

3.2 Soft-decision decoding

The above description of hard-decision decoding was mainly for educational purposes, to give an overview of the idea. Soft-decision decoding of LDPC codes, which is based on the concept of belief propagation, yields better decoding performance and is therefore the preferred method. The underlying idea is exactly the same as in the hard-decision case. Before presenting the algorithm, let's introduce some notation:

- P_i = Pr(c_i = 1 | y_i).
- q_ij is a message sent by variable node c_i to check node f_j. Every message contains the pair q_ij(0) and q_ij(1), which stands for the amount of belief that y_i is a 0 or a 1.
- r_ji is a message sent by check node f_j to variable node c_i. Again there are r_ji(0) and r_ji(1), which indicate the (current) amount of belief that y_i is a 0 or a 1.

The step numbers in the following description correspond to the hard-decision case.

1. (c_i -> f_j) All variable nodes send their q_ij messages. Since no other information is available at this step, q_ij(1) = P_i and q_ij(0) = 1 - P_i.
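The hard-decision procedure of section 3.1 is compact enough to implement directly. The sketch below (Python; structure and names are mine) runs steps 1-4 with the termination check and reproduces the worked example:

```python
H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]

def decode_hard(H, y, max_iter=10):
    m, n = len(H), len(H[0])
    f_nb = [[j for j in range(n) if H[i][j]] for i in range(m)]  # neighbours of f_i
    c_nb = [[i for i in range(m) if H[i][j]] for j in range(n)]  # neighbours of c_j
    bits = list(y)  # step 1: every v-node reports its received bit
    for _ in range(max_iter):
        # Terminate as soon as all parity-check equations are fulfilled.
        if all(sum(bits[j] for j in f_nb[i]) % 2 == 0 for i in range(m)):
            break
        # Step 2: f_i answers each neighbour c_j with the bit that would
        # satisfy the check, assuming the other neighbours are correct.
        sug = {(i, j): sum(bits[t] for t in f_nb[i] if t != j) % 2
               for i in range(m) for j in f_nb[i]}
        # Step 3: majority vote among y_j and the c-node suggestions.
        bits = [int(2 * (y[j] + sum(sug[(i, j)] for i in c_nb[j]))
                    > 1 + len(c_nb[j]))
                for j in range(n)]
    return bits

y = [1, 1, 0, 1, 0, 1, 0, 1]   # the codeword from the text with c_1 flipped
print(decode_hard(H, y))       # [1, 0, 0, 1, 0, 1, 0, 1]
```

On the error-free codeword the loop exits immediately; with the flipped bit, the first majority vote reproduces table 2 and the second syndrome check succeeds.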

Figure 2: a) illustrates the calculation of r_ji(b), b) the calculation of q_ij(b).

2. (f_j -> c_i) The check nodes calculate their response messages r_ji:

       r_ji(0) = 1/2 + 1/2 * prod_{i' in V_j\i} (1 - 2*q_{i'j}(1))     (3)

   and

       r_ji(1) = 1 - r_ji(0)                                           (4)

   So they calculate the probability that there is an even number of 1s among the variable nodes except c_i (this is exactly what V_j\i means). This probability is equal to the probability r_ji(0) that c_i is a 0. This step and the information used to calculate the responses are illustrated in figure 2.

   Equation (3) uses the following result from Gallager: for a sequence of M independent binary digits a_i, each with probability p_i of a_i = 1, the probability that the whole sequence contains an even number of 1s is

       1/2 + 1/2 * prod_{i=1}^{M} (1 - 2*p_i)                          (2)

3. (c_i -> f_j) The variable nodes update their response messages to the check nodes according to the following equations:

       q_ij(0) = K_ij * (1 - P_i) * prod_{j' in C_i\j} r_{j'i}(0)      (5)

       q_ij(1) = K_ij * P_i * prod_{j' in C_i\j} r_{j'i}(1)            (6)

   whereby the constants K_ij are chosen so as to ensure that q_ij(0) + q_ij(1) = 1. Analogously to V_j\i, C_i\j means all check nodes connected to c_i except f_j. Again, figure 2 illustrates the calculation in this step.

   At this point the v-nodes also update their current estimate ĉ_i of their variable c_i. This is done by calculating the probabilities for 0 and 1 and voting for the bigger one:

       Q_i(0) = K_i * (1 - P_i) * prod_{j in C_i} r_ji(0)              (7)

       Q_i(1) = K_i * P_i * prod_{j in C_i} r_ji(1)                    (8)

   These are quite similar to the equations used to compute q_ij(b), but now the information from every c-node is used. The estimate is

       ĉ_i = 1 if Q_i(1) > Q_i(0), else 0                              (9)

   If the current estimated codeword fulfills the parity-check equations, the algorithm terminates. Otherwise, termination is ensured through a maximum number of iterations.

4. Go to step 2.

The explained soft-decision decoding algorithm is a very simple variant, suited for BSC channels, and could be modified for performance improvements. Besides performance issues, there are numerical stability problems due to the many multiplications of probabilities: the results come very close to zero for large block lengths. To prevent this, it is possible to change into the log-domain and perform additions instead of multiplications. The result is a more stable algorithm that even has performance advantages, since additions are less costly.

4 Encoding

The sharp-eyed reader will have noticed that the above decoding algorithm does in fact only error correction. This would be enough for a traditional systematic block code, since the code word would consist of the message bits and some parity-check bits. So far it was nowhere mentioned that it's possible to directly see the original message bits in an LDPC-encoded message. Luckily it is.

Encoding LDPC codes is roughly done like this: choose certain variable nodes to place the message bits on, and in a second step calculate the missing values of the other nodes. An obvious solution would be to solve the parity-check equations. This would involve operations over the whole parity-check matrix, and the complexity would again be quadratic in the block length. In practice, however, more clever methods are used to ensure that encoding can be done in much shorter time. Those methods either exploit the sparseness of the parity-check matrix or dictate a certain structure (3) for the Tanner graph.

(3) It was already mentioned in section 1.3 that randomly generated LDPC codes result in high encoding complexity.

5 Summary

Low-density parity-check codes have been studied a lot in the last years, and huge progress has been made in the understanding of iterative coding systems and the ability to design them. The iterative decoding approach is already used in turbo codes, but the structure of LDPC codes gives even better results. In many cases they allow a higher code rate and also a lower error floor. Furthermore, they make it possible to implement parallelizable decoders. The main disadvantages are that encoders are somewhat more complex and that the code length has to be rather long to yield good results.

For more information (there are a lot of things which haven't been mentioned in this tutorial) I can recommend the following web sites as a start:

http://www.csee.wvu.edu/wcrl/ldpc.htm
A collection of links to sites about LDPC codes and a list of papers about the topic.

http://www.inference.phy.cam.ac.uk/mackay/codesfiles.html
The homepage of MacKay. Very interesting are the Pictorial demonstration of iterative decoding
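As a closing illustration of section 3.2, here is a sketch of the probability-domain sum-product decoder for the example code on a BSC (Python; the structure and names are mine, and the crossover probability p = 0.2 is just an assumed channel parameter). It implements equations (3)-(9) with flooding scheduling:

```python
H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]

def decode_soft(H, y, p=0.2, max_iter=20):
    """Sum-product decoding on a BSC with crossover probability p."""
    m, n = len(H), len(H[0])
    V = [[i for i in range(n) if H[j][i]] for j in range(m)]  # V_j: v-nodes of f_j
    C = [[j for j in range(m) if H[j][i]] for i in range(n)]  # C_i: c-nodes of c_i
    P = [1 - p if bit else p for bit in y]                    # P_i = Pr(c_i = 1 | y_i)
    q1 = {(i, j): P[i] for i in range(n) for j in C[i]}       # step 1: q_ij(1) = P_i
    est = list(y)
    for _ in range(max_iter):
        # Step 2, equations (3)/(4): r_ji(0) = 1/2 + 1/2 * prod (1 - 2 q_i'j(1)).
        r0 = {}
        for j in range(m):
            for i in V[j]:
                prod = 1.0
                for i2 in V[j]:
                    if i2 != i:
                        prod *= 1.0 - 2.0 * q1[(i2, j)]
                r0[(j, i)] = 0.5 + 0.5 * prod
        # Equations (7)-(9): tentative hard estimate from all incoming r's.
        est = []
        for i in range(n):
            Q0, Q1 = 1.0 - P[i], P[i]
            for j in C[i]:
                Q0 *= r0[(j, i)]
                Q1 *= 1.0 - r0[(j, i)]
            est.append(1 if Q1 > Q0 else 0)
        if all(sum(est[i] for i in V[j]) % 2 == 0 for j in range(m)):
            break                                             # all checks satisfied
        # Step 3, equations (5)/(6): variable-node updates; K_ij normalizes.
        for i in range(n):
            for j in C[i]:
                q0, qq = 1.0 - P[i], P[i]
                for j2 in C[i]:
                    if j2 != j:
                        q0 *= r0[(j2, i)]
                        qq *= 1.0 - r0[(j2, i)]
                q1[(i, j)] = qq / (q0 + qq)
    return est

y = [1, 1, 0, 1, 0, 1, 0, 1]   # codeword from section 3.1 with c_1 flipped
print(decode_soft(H, y))       # [1, 0, 0, 1, 0, 1, 0, 1]
```

The log-domain variant mentioned above runs the same schedule on log-likelihood ratios, turning these products into sums (the check-node update then uses tanh/atanh), which avoids the underflow for large block lengths.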