Channel Decoding in Wireless Communication Systems using Deep Learning
Gaurang Naik
12/11/2017
Deep Learning Course Project
Acknowledgements: Navneet Agrawal, TU Berlin
Error Control Coding
[Block diagram: Source -> Channel Encoder (Hamming/LDPC/Turbo) -> Modulator (BPSK/QPSK/QAM) -> wireless channel (noise added) -> Demodulator -> Channel Decoder -> Sink]
What is Error Control Coding?
- Redundancy is added at the transmitter in a controlled fashion
- Helps the receiver correct errors introduced by the random channel
Performance Metric: Bit Error Rate
Bit Error Rate (BER) = (# of bits received in error) / (# of total bits transmitted)
Significance: the lower the BER, the more robust the system
Performance of a wireless system is usually expressed in terms of BER versus Signal-to-Noise Ratio (SNR) curves
[Plot: BER vs. SNR for an uncoded system, a good code, and a better code, with the desired performance marked]
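The BER definition above is simple enough to state in code; a minimal sketch:

```python
import numpy as np

def bit_error_rate(tx_bits, rx_bits):
    """BER = (# of bits received in error) / (# of total bits transmitted)."""
    tx = np.asarray(tx_bits)
    rx = np.asarray(rx_bits)
    return np.mean(tx != rx)

# Example: 2 errors out of 8 transmitted bits -> BER = 0.25
ber = bit_error_rate([0, 1, 1, 0, 1, 0, 0, 1],
                     [0, 1, 0, 0, 1, 1, 0, 1])
```

Sweeping this measurement over a range of noise levels produces the BER-versus-SNR curves described above.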
Channel Coding: State of the Art
- Focus is limited to linear block codes C(n, k)
- We consider only BPSK modulation over an Additive White Gaussian Noise (AWGN) channel
Encoding process: a simple matrix multiplication in the binary field, c = mG
Decoding process: usually much more complicated
Sum-Product Algorithm: one of the best-known algorithms for decoding error control codes; it leverages the parity-check matrix H
[Figure: (7,4) Hamming code. Source: Wikipedia]
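The encoding step really is just a matrix multiplication over GF(2). A minimal sketch for the (7,4) Hamming code, assuming one common systematic choice of generator matrix (the slide's G may use a different convention):

```python
import numpy as np

# A systematic generator matrix for the (7,4) Hamming code
# (one common convention; other equivalent forms exist).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(m):
    """Encoding is a matrix multiplication in the binary field: c = m G (mod 2)."""
    return np.asarray(m) @ G % 2

# 4 message bits in, 7 codeword bits out (3 redundant parity bits added).
c = encode([1, 0, 1, 1])
```

The three extra columns of G are the controlled redundancy the receiver later exploits for error correction.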
Sum-Product Algorithm
[Tanner graph: variable nodes x1–x7 connected to check nodes f1–f3]
- A sequence of messages is exchanged between variable and check nodes
- Each message is a log-likelihood ratio (LLR): l = log( Prob(bit = 0) / Prob(bit = 1) )
Advantage: can learn on a single codeword
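The channel LLRs that initialize these messages follow directly from the BPSK/AWGN model. A minimal sketch, assuming the mapping 0 -> +1, 1 -> -1 and known noise variance sigma^2, for which the LLR reduces to 2y/sigma^2:

```python
import numpy as np

def channel_llr(y, sigma2):
    """LLR l = log(P(bit=0 | y) / P(bit=1 | y)) for BPSK (0 -> +1, 1 -> -1)
    over AWGN with noise variance sigma2; this reduces to l = 2*y / sigma2."""
    return 2.0 * np.asarray(y) / sigma2

# Positive LLR -> bit 0 is more likely; larger magnitude -> more confident.
llrs = channel_llr([0.9, -1.2, 0.1], 1.0)
```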
Sum-Product Algorithm: Known Issues
[Tanner graph with variable nodes x1–x7 and check nodes f1–f3, highlighting a trapping set]
Trapping sets
Why are trapping sets a problem? Iterative decoding can get stuck on these small subgraphs, causing decoding failures even at high SNR.
SPA: Connection to Neural Networks
Unrolling the Tanner graph over the decoding iterations yields a feed-forward structure: input LLRs -> unrolled Tanner graph -> output LLRs
SPA-NN: SPA with Neural Networks
[Build-up of the unrolled Tanner graph as a neural network]
- Input layer: channel LLRs
- Hidden layers: alternating check-node and variable-node layers, one pair per iteration (1, 2, 3)
- Output layer: Prob(bit = 0)
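One pair of hidden layers corresponds to one SPA iteration. A minimal sketch of that forward pass for the (7,4) Hamming code, with per-edge weights `w` standing in for the parameters SPA-NN learns (all-ones weights reproduce plain SPA); the parity-check matrix H below is one common convention and the dense loops are for clarity, not efficiency:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (one common convention).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def spa_iteration(llr, v2c, w=None):
    """One unrolled SPA iteration. v2c[c, v] is the variable-to-check message
    on edge (v, c); w are per-edge weights (all ones -> plain SPA)."""
    if w is None:
        w = np.ones(H.shape, dtype=float)
    m, n = H.shape
    c2v = np.zeros((m, n))
    # Check-node layer: tanh rule over all other edges of the check.
    for c in range(m):
        for v in range(n):
            if H[c, v]:
                others = [u for u in range(n) if H[c, u] and u != v]
                t = np.prod(np.tanh(v2c[c, others] / 2.0))
                c2v[c, v] = 2.0 * np.arctanh(np.clip(t, -0.999999, 0.999999))
    # Variable-node layer: channel LLR plus weighted extrinsic messages.
    new_v2c = np.zeros((m, n))
    for v in range(n):
        for c in range(m):
            if H[c, v]:
                extr = sum(w[c2, v] * c2v[c2, v] for c2 in range(m)
                           if H[c2, v] and c2 != c)
                new_v2c[c, v] = llr[v] + extr
    # Output layer: marginal LLR mapped to Prob(bit = 0) by a sigmoid.
    out_llr = llr + np.sum(w * c2v, axis=0)
    prob_bit0 = 1.0 / (1.0 + np.exp(-out_llr))
    return new_v2c, prob_bit0
```

Initializing `v2c = H * llr` and calling this function repeatedly runs the unrolled decoder; making `w` trainable is what turns SPA into SPA-NN.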
Operations: SPA vs. SPA-NN
[Comparison table of the operations performed in SPA and SPA-NN]
BER Equivalence of SPA and SPA-NN
[Plot: BER vs. SNR (dB), showing SPA and SPA-NN achieving the same BER]
Neural Network Decoder (NND)
[Unrolled decoder network with variable-node layers]
- Input: channel LLRs at the variable nodes
- Output: Prob(bit = 0) for each bit
- Training: a loss is computed between the output and the ground truth
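Since the NND outputs Prob(bit = 0) per bit and is trained against the transmitted bits, a per-bit binary cross-entropy is a natural loss. A minimal sketch under that assumption (the slides do not name the loss function used):

```python
import numpy as np

def decoder_loss(prob_bit0, true_bits, eps=1e-12):
    """Binary cross-entropy between the NND output Prob(bit = 0) and the
    ground-truth transmitted bits (a common choice; assumed here)."""
    p = np.clip(np.asarray(prob_bit0, dtype=float), eps, 1.0 - eps)
    b = np.asarray(true_bits)
    # p is Prob(bit = 0), so a true bit of 0 should push p toward 1.
    return -np.mean((1 - b) * np.log(p) + b * np.log(1 - p))

loss = decoder_loss([0.9, 0.2, 0.8], [0, 1, 0])
```

Confident, correct outputs give a loss near zero; confident, wrong outputs are penalized heavily, which drives the learned decoder weights during training.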
Dataset
- Training data was generated by sampling the noisy channel output
- New samples were generated for each epoch so that the NND can train on as many error patterns as possible
- The training samples correspond to SNR = 2,4 dB; it was observed that training at this SNR generalizes to low as well as high SNRs
- The validation and test datasets consist of samples at all SNRs
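The sampling procedure above can be sketched end to end: random messages, GF(2) encoding, BPSK mapping, AWGN at a chosen SNR, and channel LLRs as the network input. The function name and the Eb/N0-based noise-variance formula are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def make_batch(G, snr_db, batch_size, rng):
    """Sample a fresh batch of noisy channel outputs (regenerated each epoch
    so the decoder sees as many error patterns as possible)."""
    k, n = G.shape
    rate = k / n
    # Noise variance for coded BPSK with unit symbol energy,
    # interpreting snr_db as Eb/N0: sigma^2 = 1 / (2 * R * Eb/N0).
    sigma2 = 1.0 / (2.0 * rate * 10.0 ** (snr_db / 10.0))
    m = rng.integers(0, 2, size=(batch_size, k))   # random message bits
    c = m @ G % 2                                  # GF(2) encoding
    x = 1.0 - 2.0 * c                              # BPSK: 0 -> +1, 1 -> -1
    y = x + rng.normal(scale=np.sqrt(sigma2), size=x.shape)
    llr = 2.0 * y / sigma2                         # decoder input
    return llr, c                                  # features, ground truth
```

Calling this once per epoch with a fresh `rng` stream yields the continually regenerated training data described above; fixing `snr_db` at the training SNR and sweeping it for validation/test reproduces the dataset split.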
Parameters
Results: (7,4) Hamming Code
[BER vs. SNR (dB) plot] Coding gain ~1.5 dB
Results: (32,16) Polar Code
[BER vs. SNR (dB) plot] Coding gain ~1.5 dB
Summary
- Scalable learning: the NND needs to be trained on only one codeword
- Improved performance: can overcome the shortcomings of SPA due to trapping sets
- Faster decoding: for a desired BER, fewer iterations can be used
Future Work
- Analysis of the choice of different hyperparameters
- Extension to longer codes and other channel models
Thank You