On combining chase-2 and sum-product algorithms for LDPC codes


University of Wollongong Research Online
Faculty of Engineering and Information Sciences - Papers: Part A, 2012

On combining Chase-2 and sum-product algorithms for LDPC codes
Sheng Tong, Xidian University (sheng@uow.edu.au); Huijuan Zheng, Xi'an University of Posts and Telecommunications

Publication Details: Tong, S. & Zheng, H. (2012). On combining chase-2 and sum-product algorithms for LDPC codes. ETRI Journal, 34(4), 629-632.

Research Online is the open access institutional repository for the University of Wollongong. For further information contact the UOW Library: research-pubs@uow.edu.au

Disciplines: Engineering; Science and Technology Studies. This journal article is available at Research Online: http://ro.uow.edu.au/eispapers/956

On Combining Chase-2 and Sum-Product Algorithms for LDPC Codes

Sheng Tong and Huijuan Zheng

This letter investigates the combination of the Chase-2 and sum-product (SP) algorithms for low-density parity-check (LDPC) codes. A simple modification of the tanh rule for check node update is given, which incorporates test error patterns (TEPs) used in the Chase algorithm into SP decoding of LDPC codes. Moreover, a simple yet effective approach is proposed to construct TEPs for dealing with decoding failures with low-weight syndromes. Simulation results show that the proposed algorithm is effective in improving both the waterfall and error floor performance of LDPC codes.

Keywords: LDPC codes, Chase algorithm, sum-product (SP) algorithm.

I. Introduction

The Chase algorithm [1] is a suboptimal decoding procedure, usually used along with a hard decision decoder to improve the decoding performance at the cost of increased computational complexity. There are three variants of the Chase algorithm [1], among which the Chase-2 algorithm is the most promising, providing a good tradeoff between performance and complexity. In this letter, rather than combining the Chase-2 algorithm with a hard decision decoding algorithm, we consider the association of the Chase-2 algorithm with a soft decision decoding algorithm, namely the sum-product (SP) algorithm [2], for the decoding of low-density parity-check (LDPC) codes [3]. The purpose is to provide an approach to achieve a tradeoff between performance and complexity for LDPC decoding.

Manuscript received Dec. 3, 2011; revised Feb. 21, 2012; accepted Mar. 7, 2012. This work was supported by the 973 Program (Nos. 2012CB316100 and 2010CB328300), NSFC (Nos. 61001130, 60972046, and 61101148), and the Fundamental Research Funds for the Central Universities (Nos. K50510010026 and K50510010006). Sheng Tong (phone: +86 029 158 0925 0930, ts_xd@163.com) is with the State Key Lab. of ISN, Xidian University, Xi'an, China. Huijuan Zheng (zhj@xupt.edu.cn) is with the Department of Telecommunication, Xi'an University of Posts and Telecommunications, Xi'an, China. http://dx.doi.org/10.4218/etrij.12.0211.0510

II. Combining Chase and Sum-Product Algorithms

For an (n, k, d) binary linear code, the Chase-2 algorithm selects p least reliable positions (LRPs) in the received sequence and constructs a set of 2^p test error patterns (TEPs) whose non-zero elements are confined to the p LRPs. Each of the 2^p TEPs is then added to the hard decision of the received sequence, and the sum vector is fed into a hard decision decoder. Finally, all decoded code words, known as candidate code words, are compared, and the one with the best metric with respect to the received sequence is chosen as the final decoding output. The parameter p determines the complexity and is originally set to ⌊d/2⌋ [1]. By varying p, tradeoffs between performance and complexity can be made.

The SP algorithm is a suboptimal iterative decoding algorithm for LDPC codes, which consists of two steps: the variable node update (VNU) and the check node update (CNU). The details of the SP algorithm are provided in [2]. As the VNU is relatively simple and irrelevant to the following development, we only revisit the well-known tanh rule [4] for the CNU. Consider a check node c. Denote the set of all of its neighboring variable nodes (VNs) as N(c). The incoming log-likelihood ratio (LLR) message from a VN v ∈ N(c) to c is denoted as m_{vc}, and the outgoing LLR message from c to v is denoted as m_{cv}. Then, the tanh rule is given by

\tanh\frac{m_{cv}}{2} = \prod_{w \in N(c),\, w \neq v} \tanh\frac{m_{wc}}{2}.   (1)

When combining the Chase-2 and SP algorithms, one needs to incorporate TEPs into the SP decoding of LDPC codes. This

can easily be realized by modifying the tanh rule. To describe this modification, some notation is required. Denote as D(v) ∈ {0, 1} the hard decision of a coded bit or, equivalently, of its corresponding VN, v. The set of VNs corresponding to the non-zero elements in a TEP is denoted as V. Then, the modified tanh rule can be written as

\tanh\frac{m_{cv}}{2} = \prod_{u \in N(c) \cap V} (-1)^{D(u)} \prod_{w \in N(c) \setminus V,\, w \neq v} \tanh\frac{m_{wc}}{2}.   (2)

The reason for this modification is explained as follows. When the hard decision is made for a non-zero position, or a VN w, in a TEP, its associated LLR L_w is +∞ for D(w) = 0 or −∞ for D(w) = 1. Note that tanh(L_w/2) is 1 for L_w = +∞ or −1 for L_w = −∞, that is, tanh(L_w/2) = (−1)^{D(w)}. Equation (2) follows directly from (1) by replacing tanh(L_w/2) with (−1)^{D(w)} for w ∈ V. Thus, a TEP is naturally incorporated into the SP decoding of LDPC codes. The resulting algorithm is denoted as Chase-SP(p), where p is the number of LRPs in the Chase-2 algorithm.

Some remarks about the application of the above Chase-SP algorithm are in order. The Chase-SP algorithm can be used after an SP decoder: only when a decoding failure occurs, that is, when the former SP decoder fails to output a valid LDPC code word, is Chase-SP decoding invoked. This is reasonable since a well-designed LDPC code typically possesses no undetectable errors; once a valid LDPC code word is produced, it is almost guaranteed to be the transmitted one. For the same reason, the Chase-SP decoder stops processing the remaining TEPs once a valid code word is generated. This strategy differs from the conventional practice of the Chase algorithm, where all TEPs are required to be processed. By using these two tricks, the total complexity of the Chase-SP algorithm is greatly reduced and, in fact, is comparable to that of SP decoding in the high signal-to-noise ratio (SNR) region, as will be shown in the following simulations.
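As a concrete illustration, rules (1) and (2) can be sketched in a few lines of Python. This is our own sketch, not the authors' implementation, and all function and variable names are ours. With V empty, the function reduces to the plain tanh rule (1); neighbors in V contribute only the fixed sign (−1)^{D(u)} and receive no outgoing message, so None is returned for them.

```python
import math

def cnu_tanh(msgs_in, hard=None, in_V=None):
    """Check node update at a check c via the tanh rule.

    msgs_in: incoming LLRs m_{wc} from the neighbors of c (in order).
    hard:    hard decisions D(w) in {0, 1} for those neighbors.
    in_V:    True where the neighbor is a non-zero TEP position (w in V).
    With in_V omitted this is Eq. (1); otherwise Eq. (2).
    Returns the outgoing LLRs m_{cv}; None for neighbors inside V.
    """
    n = len(msgs_in)
    in_V = in_V or [False] * n
    hard = hard or [0] * n
    # Fixed sign contributed by the TEP positions: prod over u in V of (-1)^D(u)
    sign = 1.0
    for u in range(n):
        if in_V[u]:
            sign *= (-1.0) ** hard[u]
    t = [math.tanh(m / 2.0) for m in msgs_in]
    out = []
    for v in range(n):
        if in_V[v]:
            out.append(None)  # bit is pinned by the TEP; no message needed
            continue
        prod = sign
        for w in range(n):
            if w != v and not in_V[w]:
                prod *= t[w]
        # clip: atanh diverges at +/-1
        prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)
        out.append(2.0 * math.atanh(prod))
    return out
```

For example, with incoming LLRs [2.0, −1.0, 3.0] and no TEP, the message to the second neighbor is positive (the other two agree), while the messages to the first and third neighbors are negative, dragged down by the unreliable −1.0 input.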
To investigate the performance of Chase-SP decoders, a length-504 (3, 6) regular LDPC code is used. Assume an additive white Gaussian noise (AWGN) channel with binary phase-shift keying (BPSK) modulation. At E_b/N_0 = 2.6 dB, 2×10^5 code words are simulated using SP decoding with a maximum iteration number of 100, and 681 decoding failures are observed, among which 197 failures can be recovered by Chase-SP(3), accounting for 28.9 percent of all failures. We also observe that most of the failures recoverable by Chase-SP(3) have high-weight syndromes (say, a few dozen). Most failures with syndrome weights no greater than six (that is, at most six unsatisfied checks), accounting for 30.2 percent of the remaining ones, cannot be recovered by Chase-SP(3) decoding. This implies that LRP-based TEPs are useful for some failures with high syndrome weights but not for those with low syndrome weights. If we could devise effective TEPs for treating failures with low-weight syndromes and recover a majority of them, the error rate could be greatly reduced. The following presents a simple approach to generating effective TEPs, guided by the syndrome itself, for treating decoding failures with low-weight syndromes.

III. Syndrome-Based Test Error Patterns

Denote as W(S) the Hamming weight of a syndrome S. For a small W(S), we can construct a relatively small number of TEPs as follows. After an unsuccessful SP decoding, we obtain the hard decision vector c. Note that each 1 in S corresponds to an unsatisfied check, in which at least one participating bit is in error. Thus, for each unsatisfied check, choose one of its neighboring VNs and flip the corresponding bit in c. In this way, for a (j, k) regular LDPC code, we can obtain k^{W(S)} TEPs. Among these TEPs, there must exist a TEP that can correct W(S) errors, which is expected to be helpful in recovering all of the remaining errors.
To distinguish them from LRP-based TEPs, the constructed TEPs are referred to as syndrome-based TEPs. To reduce the number of TEPs, one can use only a few, say q, of the W(S) unsatisfied checks, leading to a total of k^q TEPs. To decide whether to use LRP-based or syndrome-based TEPs, we set a threshold, T, on W(S): when W(S) > T, LRP-based TEPs are used; otherwise, syndrome-based TEPs are adopted. The resulting algorithm is referred to as Chase-SP(p, T, q).

The reason why the Chase-SP(p) algorithm does not work well for failures with low-weight syndromes can be explained intuitively as follows. Failures with low-weight syndromes are closely related to concepts such as near code words [5] or trapping sets (TSs) [6], which dominate the error floor performance. Unless Chase-SP(p) decoding happens to select some bits in TSs, the TSs will cause decoding deadlocks. However, a TS usually involves only a few bits, so Chase-SP(p) decoding has a relatively small probability of choosing bits from TSs, especially from small ones. In contrast, with syndrome-based TEPs, the bits involved in a TS are revealed by the syndrome to some extent. By flipping a few bits in a TS, the decoding deadlock caused by the TS is expected to be broken. This is true in many instances. However, as observed in our experiments, for some stubborn TSs, which are combinations of two or more smaller TSs, the above flipping method may not work well. Even so, the syndrome-based TEP approach is effective in dealing with a large portion of TSs, resulting in improved error performance. According to this explanation, the threshold parameter, T, should be chosen according to the sizes of the dominant TSs.
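The syndrome-based TEP construction above is a straightforward Cartesian product over the unsatisfied checks, and can be sketched as follows (our sketch; all names are ours). Each selected unsatisfied check contributes one flipped neighbor, so a (j, k)-regular code yields k^{W(S)} TEPs, or k^q when only the first q unsatisfied checks are used.

```python
from itertools import product

def syndrome_based_teps(check_neighbors, syndrome, q=None):
    """Enumerate syndrome-based TEPs.

    check_neighbors[c]: list of VN indices participating in check c.
    syndrome:           0/1 list; 1 marks an unsatisfied check.
    q:                  if given, use only the first q unsatisfied checks.
    Yields each TEP as a set of VN indices whose bits get flipped.
    """
    unsat = [c for c, s in enumerate(syndrome) if s]
    if q is not None:
        unsat = unsat[:q]
    # One flipped neighbor per selected unsatisfied check.
    for choice in product(*(check_neighbors[c] for c in unsat)):
        yield set(choice)
```

For a toy example with two unsatisfied checks whose neighbor sets are {0, 1, 2} and {2, 3, 4}, this yields 3 × 3 = 9 TEPs; with q = 1, only the three TEPs {0}, {1}, and {2} are produced.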

Usually, a larger threshold is expected to offer better performance at the cost of higher decoding complexity; thus, the threshold should be carefully chosen to balance the two. To ease understanding of the proposed decoding algorithm, a flow chart of the Chase-SP(p, T, q) decoding procedure is shown in Fig. 1, where I and I_MAX are the iteration number and the maximum iteration number, respectively. From Fig. 1, we see that there are two decoding phases in Chase-SP decoding. For a given received code word, the conventional SP decoding phase runs first, and the Chase decoding phase is invoked if and only if the conventional SP decoding phase fails to output a valid code word. During the Chase decoding phase, according to the syndrome weight W(S) and the given threshold T, either syndrome-based or LRP-based TEPs are used. For any given TEP, a modified SP decoding is called, in which the CNU uses (2) instead of (1). When the two decoding phases are finished, I records the total number of SP iterations used in both phases.

For simplicity, we neglect the complexity of constructing TEPs, which is marginal compared to an SP iteration. Thus, when Chase decoding is invoked, the increase in decoding complexity over the conventional SP decoder can be roughly assessed as the number of SP iterations used in the Chase decoding phase, which is easily calculated as (I − I_MAX). In the following section, we simply use average iteration numbers to compare the complexities of different algorithms. For the Chase-SP decoding algorithm, the average iteration number is calculated as the sum of I over all received code words divided by the number of received code words.

IV. Simulation Results

The same rate-1/2, length-504 (3, 6) regular LDPC code is used in this study.
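As a complement to the flow chart of Fig. 1, the two-phase schedule (including the early stop at the first valid code word) can be sketched as follows. This is only an illustration of the control flow, not the authors' code: the SP decoder internals are injected as callables, and all names are our own.

```python
def chase_sp(rx_llrs, sp_decode, sp_decode_tep, lrp_teps, syndrome_teps, T):
    """Chase-SP(p, T, q) schedule (sketch of Fig. 1).

    sp_decode(llrs)          -> (codeword or None, iters, syndrome)
    sp_decode_tep(llrs, tep) -> same, but the CNU uses Eq. (2)
    lrp_teps(llrs), syndrome_teps(syndrome) -> iterables of TEPs
    Returns (codeword or None, I), with I the total number of SP iterations.
    """
    # Phase 1: conventional SP decoding.
    cw, iters, syndrome = sp_decode(rx_llrs)
    I = iters
    if cw is not None:
        return cw, I  # valid code word: almost surely the transmitted one
    # Phase 2: Chase decoding; TEP family chosen by syndrome weight.
    W = sum(syndrome)
    teps = lrp_teps(rx_llrs) if W > T else syndrome_teps(syndrome)
    for tep in teps:
        cw, iters, syndrome = sp_decode_tep(rx_llrs, tep)
        I += iters
        if cw is not None:
            break  # early stop, unlike the classic Chase-2 algorithm
    return cw, I
```

The returned I is exactly the quantity used above for complexity accounting: when the Chase phase is entered, the extra cost over plain SP decoding is I − I_MAX.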
[Fig. 1. Flow chart of the Chase-SP(p, T, q) decoding algorithm. S denotes the syndrome; I and I_MAX stand for the iteration number and the maximum iteration number, respectively; W(S) is the Hamming weight of S; and T is the threshold, a nonnegative integer.]

[Fig. 2. Performance (FER and BER versus E_b/N_0) of the length-504 (3, 6) regular LDPC code over the AWGN channel with BPSK modulation, for SP decoding and for Chase-SP(3, 0, -), Chase-SP(3, 6, 1), Chase-SP(3, 6, 2), and Chase-SP(7, 6, 2).]

[Fig. 3. Average iteration numbers versus E_b/N_0.]

The performance of the Chase-SP algorithm with four different parameter settings is simulated and compared with that of SP decoding. From Fig. 2, we see that the Chase-SP algorithms outperform SP decoding in both the waterfall and error floor regions. In the parameter setting (3, 0, -), the threshold, T, is set to 0; thus, syndrome-based TEPs are not used, and only LRP-based TEPs are treated. Note that the parameter q is irrelevant in this setting and thus is not

provided here. Comparing the performance curves of Chase-SP(3, 0, -) and Chase-SP(3, 6, 1), we can see that syndrome-based TEPs are especially useful in improving the error floor performance. From Fig. 2, we also see that Chase-SP(3, 6, 1) performs similarly to Chase-SP(3, 6, 2) in the waterfall region, while Chase-SP(3, 6, 2) outperforms Chase-SP(3, 6, 1) in the error floor region. This implies that increasing q in Chase-SP(p, T, q) can improve the error floor performance but not the waterfall performance. However, by increasing p from 3 to 7, we see an observable performance improvement in both the waterfall and error floor regions, which implies that p affects both. Moreover, the complexities of the algorithms are compared in terms of average iteration number, as shown in Fig. 3. We see that, for high SNRs, the increases in the average iteration number for the Chase-SP algorithms are marginal compared to the SP algorithm.

V. Conclusion

In this letter, we investigated the combination of the Chase-2 and SP algorithms. An effective approach to constructing TEPs for treating errors with low-weight syndromes was presented. Simulation results showed that the proposed Chase-SP algorithm provides a good tradeoff between performance and complexity.

References

[1] D. Chase, "A Class of Algorithms for Decoding Block Codes with Channel Measurement Information," IEEE Trans. Inf. Theory, vol. 18, no. 1, Jan. 1972, pp. 170-182.
[2] F.R. Kschischang, B.J. Frey, and H.-A. Loeliger, "Factor Graphs and the Sum-Product Algorithm," IEEE Trans. Inf. Theory, vol. 47, no. 2, Feb. 2001, pp. 498-519.
[3] R.G. Gallager, Low-Density Parity-Check Codes, PhD dissertation, MIT, Cambridge, MA, USA, July 1963.
[4] J. Hagenauer, E. Offer, and L. Papke, "Iterative Decoding of Binary Block and Convolutional Codes," IEEE Trans. Inf. Theory, vol. 42, no. 2, Mar. 1996, pp. 429-445.
[5] D. MacKay and M.S.
Postol, "Weaknesses of Margulis and Ramanujan-Margulis Low-Density Parity-Check Codes," Electronic Notes in Theoretical Computer Science, vol. 74, 2003.
[6] T.J. Richardson, "Error Floors of LDPC Codes," Proc. 41st Annual Allerton Conf. Commun., Control, and Computing, Monticello, IL, USA, Sept. 2003, pp. 1426-1435.