Channel Performance Improvement through FF and RBF Neural Network based Equalization

Manish Mahajan 1, Deepak Pancholi 2, A.C. Tiwari 3
Research Scholar 1, Asst. Professor 2, Professor 3
Lakshmi Narain College of Technology, Indore
mahajan_manawar@yahoo.com 1, erdeepakpancholiind@gmail.com 2, Achandra0@gmail.com 3

Abstract: In wireless technology, communication systems require signal processing techniques to improve channel performance. Wireless communication cannot easily achieve error-free signal transmission, because the channel introduces distortions such as co-channel interference, adjacent channel interference and inter-symbol interference (ISI) during transmission. To improve channel performance, three techniques are commonly used: diversity, channel coding and equalization. In this paper we use a neural network based equalization technique, which is primarily aimed at reducing ISI. The equalization process may be either linear or non-linear. Severely distorting channels limit the use of linear equalizers, so non-linear equalizers are more suitable and efficient than linear ones. Neural network based equalizers are a computationally efficient alternative to conventional (non-neural) non-linear equalizers such as the decision feedback equalizer (DFE). In this work we compare the BER performance of two neural network based equalizers: a feed forward neural network (multilayer perceptron, MLP) equalizer and an RBF based equalizer. We find that the RBF based equalizer performs better than the MLP equalizer: the training process of the RBF network is faster than that of the MLP network, which may have more than three layers in its architecture, and the RBF network also has a faster convergence rate.

Keywords: RBF, FF, MLP, ISI.

1. INTRODUCTION

In the mobile radio environment, high speed data transmission is limited by channel ISI created by multipath within the band-limited, time-dispersive channel [1]. For reliable data transmission, an equalizer is therefore required at the receiving side of the communication system. Since the channel is unknown and time varying, the equalizer must be adaptive [2]. Adaptive equalizers play an important role in digital communication systems: adaptive equalization at the receiver removes the effects of ISI. In an adaptive equalizer, the current and past values of the received signal are linearly weighted by the equalizer coefficients and summed to produce the output. Fig. 1 shows a digital communication system model with an equalizer at the receiver side, where x(n) is the transmitted symbol sequence, η is additive white Gaussian noise, y(n) is the received signal sequence and x̂(n) is the output of the equalizer, an estimate of the transmitted sequence x(n).

Fig. 1 Digital communication system model

Generally, linear equalizers show inferior performance because of the random nature and time-varying property of the channel, and hence non-linear equalizers have become popular and are mostly used in applications. The artificial neural network [3] is a powerful tool that plays an important role in many applications related to industry and communication technology, such as

www.ijrcct.org Page 834
nonlinear control [4], fault detection, data processing, signal processing, image processing [5], audio signal processing [6], function approximation, adaptive channel equalization and so on. ANN based MLP equalizers [7] may have more than three layers, so they require more training time and also suffer from slow convergence. On the other hand, the ANN based RBF equalizer [3], [8]-[10] converges quickly and its training process is also fast compared to the MLP based equalizer; this comparison assumes the same response is obtained from both ANN based equalizers. Secondly, RBF networks act as local approximation networks, because the network outputs are determined by specified hidden units in certain local receptive fields, while MLP networks work globally, since the network outputs are decided by all neurons. In this work, MLP and RBF equalizers are analyzed and compared on the basis of bit error rate for different SNR values.

The rest of this paper is organized as follows: Section II gives a brief description of the artificial neural network (ANN). Section III describes the MLP network. Section IV describes the RBF network. Section V presents the simulation results and Section VI gives the conclusion.

2. ARTIFICIAL NEURAL NETWORK

An artificial neural network is defined as a parameterized computational nonlinear algorithm for data, signal and image processing. The ANN is a model of human nervous system operation that uses mathematical formulations or algorithms for its functionality or modeling, and it may be considered one of the tools for analyzing the structure-function relationship of the human brain. Artificial intelligence techniques involve the application of artificial neural networks; these techniques attempt to imitate the way a human brain works. An ANN works by creating connections between processing elements, the computer equivalent of neurons, rather than using a digital model in which all computations are based on 0s and 1s. The ANN resembles the brain in the following two aspects:

1. A learning process is adopted by the network to acquire knowledge.
2. Inter-neuron connection strengths, known as weights, are used to store the knowledge.

Capabilities of ANNs - An ANN can compute any computable function, i.e. it can do anything a normal digital computer can do. In particular, anything that can be represented as a mapping between vector spaces can be approximated to arbitrary precision by a neural network, so neural networks are used for mapping problems and for learning patterns and relationships in data.

3. MULTILAYER PERCEPTRON

The multilayer perceptron network [7] consists of several hidden layers of neurons that are capable of performing complex, nonlinear mappings between the input and output layers. Fig. 2 shows the basic unit of traditional neural networks, with N inputs and M outputs.

Fig. 2 Single neuron with N inputs and M outputs

The computations associated with the single neuron include:

i) Net computation:

net = w_0 + Σ_{n=1}^{N} w_n x_n      (1)

where n is the index of inputs and weights, from 1 to N; w_n is the weight on input x_n; and w_0 is the bias weight.

ii) Output computation:

y_m = f(net)      (2)
where y_m is the output of the neuron and f(·) is the activation function, normally chosen with a sigmoidal shape.

For more neurons interconnected together, the two basic computations (1) and (2) remain the same for each neuron; the only difference is that the inputs of a neuron may be provided either by the outputs of neurons from previous layers or by the network inputs. Weight values are the only type of parameters and can be updated by learning algorithms. Based on the error back-propagation procedure, various gradient algorithms have been developed for traditional neural network learning. First-order gradient methods are stable but very time consuming, and usually fail to converge to very small errors. Training speed and accuracy are significantly improved by applying second-order gradient methods, such as the Levenberg-Marquardt algorithm and the neuron-by-neuron algorithm.

In multilayer perceptron neural networks, arbitrarily shaped hyper-surfaces are used for separation, while in RBF networks clusters are separated by hyper-spheres [3]. The separations for the simple two-dimensional case are shown in Fig. 3.

Fig. 3 Separation results of the RBF (a) and FF (b) networks

4. RADIAL BASIS FUNCTION NETWORKS

Fig. 4 shows the general form of RBF networks, with N inputs, L hidden units and M outputs.

Fig. 4 RBF network with N inputs, L hidden units and M outputs

The basic computations in the RBF network include:

i) Input layer computation. At the input of hidden unit l, the input vector x is weighted by the input weights w_{n,l}:

s_l = [x_1 w_{1,l}, x_2 w_{2,l}, ..., x_N w_{N,l}]      (3)

where n is the index of inputs; l is the index of hidden units; x_n is the n-th input; and w_{n,l} is the input weight between input n and hidden unit l.

ii) Hidden layer computation. The output of hidden unit l is calculated by:

φ_l(s_l) = exp(-||s_l - c_l||^2 / σ_l^2)      (4)

where the activation function φ_l(·) of hidden unit l is normally chosen as a Gaussian function; c_l is the center of hidden unit l and σ_l is the width of hidden unit l.

iii) Output layer computation. The network output o_m is calculated by:

o_m = Σ_{l=1}^{L} φ_l(s_l) w_{l,m} + w_{0,m}      (5)

where m is the index of outputs; w_{l,m} is the output weight between hidden unit l and output unit m; and w_{0,m} is the bias weight of output unit m.

5. SIMULATION RESULTS
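Before presenting the results, the forward computations of Sections III and IV, equations (1)-(5), can be illustrated with a minimal sketch. The layer sizes and parameter values below are illustrative assumptions, not the trained values used in the simulations described in this section:

```python
import numpy as np

def mlp_neuron(x, w, w0):
    """Single MLP neuron of Section III:
    net = w0 + sum_n w_n * x_n  (eq. 1),  y = f(net)  (eq. 2),
    with a sigmoidal activation f."""
    net = w0 + np.dot(w, x)
    return 1.0 / (1.0 + np.exp(-net))  # sigmoid, output in (0, 1)

def rbf_forward(x, w_in, centers, widths, w_out, w0_out):
    """RBF network forward pass of Section IV.

    x       : (N,)   input vector
    w_in    : (L, N) input weights w_{n,l}
    centers : (L, N) hidden-unit centers c_l
    widths  : (L,)   hidden-unit widths sigma_l
    w_out   : (M, L) output weights w_{l,m}
    w0_out  : (M,)   output bias weights w_{0,m}
    """
    s = w_in * x                                                     # eq. (3): weighted input per hidden unit
    phi = np.exp(-np.sum((s - centers) ** 2, axis=1) / widths ** 2)  # eq. (4): Gaussian activations
    return w_out @ phi + w0_out                                      # eq. (5): linear output layer

# Illustrative call with arbitrary (untrained) parameters
x = np.array([1.0, -1.0])
y_neuron = mlp_neuron(x, w=np.array([0.5, 0.25]), w0=0.1)
o = rbf_forward(x,
                w_in=np.ones((3, 2)),
                centers=np.zeros((3, 2)),
                widths=np.ones(3),
                w_out=np.full((1, 3), 0.5),
                w0_out=np.zeros(1))
```

In the actual equalizers, the weights, centers and widths would of course be obtained by training (error back-propagation for the MLP; unsupervised selection of the hidden-layer parameters followed by output-weight training for the RBF network), as described in Sections III and IV.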
The transmitted symbol sequence x(n) is assumed to be random, i.e. 3-bit binary symbols taking values from the set {000, 001, 010, 011, 100, 101, 110, 111}. In the training phase, the parameters of the hidden layer are computed from the given data in an unsupervised learning manner. The algorithm is initialized with a random data set comprising 1000 training samples. Channel impairments are introduced in the transmitted data by a Rayleigh fading channel, while AWGN of different variances is added to the training samples to achieve the various Es/No (dB) levels. The trained network is then presented with an unknown data set consisting of 50,000 payload data samples, with the channel impairments and noise added at the various Es/No (dB) levels. The performance of the feed forward neural network and the radial basis function neural network is given in the table below:

Es/No (dB)   BER for FFNN   BER for RBF
 1           0.2853         0.1889
 2           0.2616         0.1615
 3           0.2431         0.1336
 4           0.2175         0.1090
 5           0.1896         0.0848
 6           0.1658         0.0605
 7           0.1461         0.0437
 8           0.1134         0.0271
 9           0.0880         0.0161
10           0.0673         0.0075
11           0.0492         0.0033
12           0.0317         0.0011

Table 1: BER of the FFNN and RBF equalizers at various Es/No (dB) levels

Graphical Representation

The following figures show the performance of the feed forward and RBF neural networks at various Es/No (dB) levels. It is clear from the graphs that as Es/No increases, the BER decreases continuously in both cases; the RBF network also gives improved results compared to the FF neural network.

Fig. 5 BER vs Es/No (dB) for the FFNN

Fig. 6 BER vs Es/No (dB) for the RBF neural network

6. CONCLUSION

In this paper we have compared the performance of the feed forward and radial basis function neural networks. The performance metric is the bit error rate at different noise levels; a higher bit error rate indicates poorer signal quality. The graphical analysis shows that at higher SNR levels the BER is lowest in both cases.
After comparing the FF and RBF neural networks, we can say that the RBF network gives improved results compared to the feed forward neural network.

REFERENCES

[1] D. R. Guha and S. K. Patra, "Channel Equalization for ISI Channels using RBF Network," International Conference on Industrial and Information Systems, Sri Lanka, December 2009.
[2] S. Qureshi, "Adaptive equalization," Proceedings of the IEEE, vol. 73, no. 9, pp. 1349-1387, 1985.
[3] T. Xie, H. Yu and B. Wilamowski, "Comparison between Traditional Neural Networks and Radial Basis Function Networks," IEEE International Symposium on Industrial Electronics, 2011.
[4] K. Derr and M. Manic, "Wireless based object tracking based on neural networks," ICIEA 2008, 3rd IEEE Conference on Industrial Electronics and Applications, Singapore, June 3-5, pp. 308-313, 2008.
[5] Y. J. Lee and J. Yoon, "Nonlinear Image Upsampling Method Based on Radial Basis Function Interpolation," IEEE Transactions on Image Processing, vol. 19, no. 10, pp. 2682-2692, 2010.
[6] F. Moreno, J. Alarcón, et al., "Reconfigurable Hardware Architecture of a Shape Recognition System Based on Specialized Tiny Neural Networks With Online Training," IEEE Transactions on Industrial Electronics, vol. 56, no. 8, pp. 3253-3263, 2009.
[7] A. Zerguine, A. Shafi and M. Bettayeb, "Multilayer Perceptron-Based DFE with Lattice Structure," IEEE Transactions on Neural Networks, vol. 12, no. 3, May 2001.
[8] B. Mulgrew, "Applying Radial Basis Functions," IEEE Signal Processing Magazine, vol. 13, pp. 50-65, March 1996.
[9] M. Miyake, K. Oishi and S. Yamaguchi, "Adaptive equalization of a nonlinear channel by means of Gaussian radial basis functions," Electronics and Communications in Japan, Part 3, vol. 80, no. 6, 1997.
[10] I. Cha and S. A. Kassam, "Channel equalization using adaptive complex radial basis function networks," IEEE Journal on Selected Areas in Communications, vol. 13, no. 1, pp. 122-131, January 1995.