Channel Performance Improvement through FF and RBF Neural Network based Equalization

Manish Mahajan 1, Deepak Pancholi 2, A. C. Tiwari 3
Research Scholar 1, Asst. Professor 2, Professor 3
Lakshmi Narain College of Technology, Indore
mahajan_manawar@yahoo.com 1, erdeepakpancholiind@gmail.com 2, Achandra0@gmail.com 3

Abstract: In wireless technology, communication systems require signal processing techniques to improve channel performance. Error-free transmission is difficult to achieve over a wireless link because the channel introduces distortions such as co-channel interference, adjacent channel interference, and inter-symbol interference (ISI). Three techniques are commonly used to improve channel performance: diversity, channel coding, and equalization. In this paper we use neural network based equalization, whose primary purpose is to reduce ISI. The equalization process may be either linear or nonlinear. Severely distorting channels limit the usefulness of linear equalizers, so nonlinear equalizers are more suitable and efficient. Neural network based equalizers are a computationally more efficient alternative to conventional nonlinear equalizers such as the decision feedback equalizer (DFE). In this work we compare the BER performance of two neural network based equalizers: a feed forward (FF) neural network equalizer, i.e. a multilayer perceptron (MLP), and an RBF based equalizer. We find that the RBF based equalizer performs better than the MLP equalizer: the training process of an RBF network is faster than that of an MLP network, which may have more than three layers in its architecture, and the RBF network also converges faster.

Keywords: RBF, FF, MLP, ISI.

1. INTRODUCTION

In the mobile radio environment, high-speed data transmission is limited by channel ISI created by multipath within the band-limited, time-dispersive channel [1]. Reliable data transmission therefore requires an equalizer at the receiving side of the communication system. Since the channel is unknown and time-varying, the equalizer must be adaptive [2]. Adaptive equalizers play an important role in digital communication systems: adaptive equalization at the receiver removes the effects of ISI. In an adaptive equalizer, the current and past values of the received signal are linearly weighted by the equalizer coefficients and summed to produce the output. Fig. 1 shows a digital communication system model with an equalizer at the receiver side, where x(n) is the transmitted symbol sequence, η is additive white Gaussian noise, y(n) is the received signal sequence, and x̂(n) is the output of the equalizer, an estimate of the transmitted sequence x(n).

Fig. 1: Digital communication system model

Linear equalizers generally show inferior performance because of the random nature and time-varying property of the channel; hence nonlinear equalizers have become popular and are widely used in applications. The artificial neural network [3] is a powerful tool that plays an important role in many applications in industry and communication technology, such as nonlinear control [4], fault detection, data processing, signal processing, image processing [5], audio signal processing [6], function approximation, and adaptive channel equalization.
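To make the system model of Fig. 1 concrete, here is a minimal Python sketch that generates a BPSK symbol sequence, passes it through a dispersive FIR channel, and adds AWGN at a chosen Es/No level to produce the received sequence y(n). The channel taps are hypothetical stand-ins for illustration (the experiments in Section 5 use a Rayleigh fading channel), and the noise scaling follows one common real-baseband convention:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dispersive channel taps; any FIR impulse response
# demonstrates how ISI arises, since each output mixes neighboring symbols.
h = np.array([0.407, 0.815, 0.407])

def channel(x, esno_db):
    """Pass BPSK symbols x through the FIR channel and add AWGN."""
    s = np.convolve(x, h, mode="same")      # ISI: output mixes adjacent symbols
    es = np.mean(s ** 2)                    # average received symbol energy
    n0 = es / (10 ** (esno_db / 10.0))      # noise density for the target Es/No
    noise = rng.normal(0.0, np.sqrt(n0 / 2.0), s.shape)
    return s + noise                        # the y(n) of Fig. 1

bits = rng.integers(0, 2, 1000)
x = 2.0 * bits - 1.0                        # map {0, 1} -> {-1, +1}
y = channel(x, esno_db=10)                  # received sequence with ISI and noise

Each received sample y(n) mixes several neighboring transmitted symbols; this smearing is exactly the ISI that the equalizer must undo.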

ANN based MLP equalizers [7] may have more than three layers, so they require more training time and suffer from slow convergence. ANN based RBF equalizers [3], [8]-[10], on the other hand, converge quickly and train faster than MLP based equalizers; this comparison assumes that both ANN based equalizers are trained to produce the same response. Furthermore, RBF networks act as local approximation networks, because the network outputs are determined by specific hidden units with local receptive fields, while MLP networks work globally, since the network outputs are decided by all neurons. In this work, MLP and RBF equalizers are analyzed and compared on bit error rate at different SNR values.

The rest of this paper is organized as follows: Section 2 gives a brief description of artificial neural networks (ANNs), Section 3 describes the MLP network, Section 4 describes the RBF network, Section 5 presents the simulation results, and Section 6 concludes.

2. ARTIFICIAL NEURAL NETWORK

An artificial neural network is defined as a parameterized computational nonlinear algorithm for data, signal, and image processing. The ANN is a model of the human nervous system that uses mathematical formulations or algorithms for its functionality, and it may be considered one of the tools for analyzing the structure-function relationship of the human brain. Artificial intelligence techniques involving ANNs attempt to imitate the way a human brain works: the ANN creates connections between processing elements (the computer equivalent of neurons) rather than using a digital model in which all computations are based on 0s and 1s. The ANN resembles the brain in two respects:

1. The network acquires knowledge through a learning process.
2. Inter-neuron connection strengths, known as weights, are used to store the knowledge.

Capabilities of ANNs: An ANN can compute any computable function, i.e. it can do anything a normal digital computer can do. In particular, anything that can be represented as a mapping between vector spaces can be approximated to arbitrary precision by a neural network, so neural networks are used for mapping problems and for learning patterns and relationships in data.

3. MULTILAYER PERCEPTRON

A multilayer perceptron network [7] consists of several hidden layers of neurons that are capable of performing complex, nonlinear mappings between the input and output layers. Fig. 2 shows the basic unit of traditional neural networks, with N inputs and M outputs.

Fig. 2: Single neuron with N inputs and M outputs

The computations associated with a single neuron are:

i) Net computation:

net_m = w_0 + \sum_{n=1}^{N} w_n x_n    (1)

where n is the index of inputs and weights, running from 1 to N; w_n is the weight on input x_n; and w_0 is the bias weight.

ii) Output computation:

y_m = f(net_m)    (2)

where y_m is the output of the neuron and f(·) is the activation function, normally chosen to have a sigmoidal shape.
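As an illustration, the following minimal Python sketch implements the single-neuron computation of Eqs. (1)-(2), assuming a logistic sigmoid for f and hypothetical weight values:

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def neuron_output(x, w, w0):
    """Eqs. (1)-(2): net = w0 + sum_n w_n x_n, then y = f(net)."""
    net = w0 + np.dot(w, x)      # Eq. (1): weighted sum of inputs plus bias
    return sigmoid(net)          # Eq. (2): sigmoidal activation

# Hypothetical 3-input neuron
x = np.array([0.5, -1.0, 0.25])
w = np.array([0.8, 0.2, -0.5])
y = neuron_output(x, w, w0=0.1)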

When more neurons are interconnected, the two basic computations (1) and (2) remain the same for each neuron; the only difference is that the inputs of a neuron may be provided either by the outputs of neurons in previous layers or by the network inputs. Weight values are the only type of parameter and are updated by learning algorithms. Various gradient algorithms based on the error back propagation procedure have been developed for traditional neural network learning. First-order gradient methods are stable but very time consuming, and usually fail to converge to very small errors. Training speed and accuracy are significantly improved by second-order gradient methods such as the Levenberg-Marquardt algorithm and the neuron-by-neuron algorithm.

Multilayer perceptron networks separate classes with arbitrarily shaped hypersurfaces, while RBF networks separate clusters with hyperspheres [3]. The separations for a simple two-dimensional case are shown in Fig. 3.

Fig. 3: Separation results of (a) RBF and (b) FF networks

4. RADIAL BASIS FUNCTION NETWORKS

Fig. 4 shows the general form of an RBF network, with N inputs, L hidden units, and M outputs.

Fig. 4: RBF network with N inputs, L hidden units, and M outputs

The basic computations in the RBF network are:

i) Input layer computation. At the input of hidden unit l, the input vector x is weighted by the input weights w^h_{n,l}:

s_l = [x_1 w^h_{1,l}, x_2 w^h_{2,l}, \ldots, x_N w^h_{N,l}]    (3)

where n is the index of inputs, l is the index of hidden units, x_n is the n-th input, and w^h_{n,l} is the input weight between input n and hidden unit l.

ii) Hidden layer computation. The output of hidden unit l is:

\varphi_l(s_l) = \exp\left( -\|s_l - c_l\|^2 / \sigma_l \right)    (4)

where the activation function \varphi_l(·) of hidden unit l is normally chosen as a Gaussian function, c_l is the center of hidden unit l, and \sigma_l is the width of hidden unit l.

iii) Output layer computation. The network output o_m is:

o_m = \sum_{l=1}^{L} w^o_{l,m} \varphi_l(s_l) + w^o_{0,m}    (5)

where m is the index of outputs, w^o_{l,m} is the output weight between hidden unit l and output unit m, and w^o_{0,m} is the bias weight of output unit m.
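A minimal Python sketch of the RBF forward pass of Eqs. (3)-(5) follows; the network sizes and random parameter values are hypothetical, and in practice the centers and widths would come from training:

import numpy as np

def rbf_forward(x, w_in, centers, widths, w_out, b_out):
    """Forward pass of the RBF network of Eqs. (3)-(5).

    x:       (N,)   input vector
    w_in:    (L, N) input weights w^h
    centers: (L, N) hidden-unit centers c_l
    widths:  (L,)   hidden-unit widths sigma_l
    w_out:   (M, L) output weights w^o
    b_out:   (M,)   output bias weights w^o_{0,m}
    """
    s = w_in * x                              # Eq. (3): row l is [x_n w^h_{n,l}]
    d2 = np.sum((s - centers) ** 2, axis=1)   # squared distance to each center
    phi = np.exp(-d2 / widths)                # Eq. (4): Gaussian activations
    return w_out @ phi + b_out                # Eq. (5): weighted sum plus bias

# Hypothetical network: N = 2 inputs, L = 3 hidden units, M = 1 output
rng = np.random.default_rng(1)
o = rbf_forward(rng.normal(size=2), rng.normal(size=(3, 2)),
                rng.normal(size=(3, 2)), np.ones(3),
                rng.normal(size=(1, 3)), np.zeros(1))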

5. SIMULATION RESULTS

The transmitted symbol sequence x(n) is assumed to be a random binary sequence taking values from the set {000, 001, 010, 011, 100, 101, 110, 111}. In the training phase, the parameters of the hidden layer are computed from the given data in an unsupervised manner. The algorithm is initialized with a random data set comprising 1000 training samples. Channel impairments are introduced into the transmitted data by a Rayleigh fading channel, and AWGN of different variances is added to the training samples to obtain the various EsNodb (symbol energy to noise density ratio, Es/No, in dB) levels. The trained network is then presented with an unknown data set of 50,000 payload data samples, with the channel impairments and noise added at the various EsNodb levels. The performance of the feed forward and radial basis function neural networks is summarized below:

EsNodb    BER for FFNN    BER for RBF
 1        0.2853          0.1889
 2        0.2616          0.1615
 3        0.2431          0.1336
 4        0.2175          0.1090
 5        0.1896          0.0848
 6        0.1658          0.0605
 7        0.1461          0.0437
 8        0.1134          0.0271
 9        0.0880          0.0161
10        0.0673          0.0075
11        0.0492          0.0033
12        0.0317          0.0011

Table 1: BER of the FF and RBF equalizers at various EsNodb levels

Graphical Representation

Figs. 5 and 6 show the performance of the feed forward and RBF neural networks at the various EsNodb levels. The plots make clear that as EsNodb increases, the BER decreases continuously in both cases, and that the RBF network gives better results than the FF neural network.

Fig. 5: BER vs EsNodb for the FF neural network

Fig. 6: BER vs EsNodb for the RBF neural network

6. CONCLUSION

In this paper we have compared the performance of feed forward and radial basis function neural networks. The performance metric is the bit error rate at different noise levels; a higher bit error rate indicates poorer signal quality. The graphical analysis shows that at higher SNR levels the BER is lowest in both cases. Comparing the FF and RBF neural networks, the RBF equalizer gives better results than the feed forward neural network.
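As a companion to the evaluation procedure of Section 5, here is a minimal Python sketch of how BER can be tabulated for a trained equalizer. The names mlp_equalizer and rbf_equalizer are hypothetical placeholders for trained networks, and channel refers to the sketch in Section 1:

import numpy as np

def bit_error_rate(tx_bits, equalizer, rx):
    """Compare hard decisions from an equalizer against the transmitted bits."""
    decisions = (equalizer(rx) > 0).astype(int)   # hard decision on equalizer output
    return np.mean(decisions != tx_bits)          # fraction of bits in error

# For each Es/No level, pass the noisy received samples through each trained
# equalizer (placeholders below) and tabulate BER, as in Table 1:
# for esno_db in range(1, 13):
#     y = channel(x, esno_db)                     # received sequence (Section 1 sketch)
#     ber_mlp = bit_error_rate(bits, mlp_equalizer, y)
#     ber_rbf = bit_error_rate(bits, rbf_equalizer, y)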

REFERENCES

[1] D. R. Guha and S. K. Patra, "Channel Equalization for ISI Channels Using RBF Networks," International Conference on Industrial and Information Systems, Sri Lanka, December 2009.

[2] S. Qureshi, "Adaptive equalization," Proceedings of the IEEE, vol. 73, no. 9, pp. 1349-1387, 1985.

[3] T. Xie, H. Yu, and B. Wilamowski, "Comparison between Traditional Neural Networks and Radial Basis Function Networks," IEEE International Symposium on Industrial Electronics, 2011.

[4] K. Derr and M. Manic, "Wireless based object tracking based on neural networks," 3rd IEEE Conference on Industrial Electronics and Applications (ICIEA 2008), Singapore, June 3-5, pp. 308-313, 2008.

[5] Y. J. Lee and J. Yoon, "Nonlinear Image Upsampling Method Based on Radial Basis Function Interpolation," IEEE Transactions on Image Processing, vol. 19, no. 10, pp. 2682-2692, 2010.

[6] F. Moreno, J. Alarcón, et al., "Reconfigurable Hardware Architecture of a Shape Recognition System Based on Specialized Tiny Neural Networks With Online Training," IEEE Transactions on Industrial Electronics, vol. 56, no. 8, pp. 3253-3263, 2009.

[7] A. Zerguine, A. Shafi, and M. Bettayeb, "Multilayer Perceptron-Based DFE with Lattice Structure," IEEE Transactions on Neural Networks, vol. 12, no. 3, May 2001.

[8] B. Mulgrew, "Applying Radial Basis Functions," IEEE Signal Processing Magazine, vol. 13, pp. 50-65, March 1996.

[9] M. Miyake, K. Oishi, and S. Yamaguchi, "Adaptive equalization of a nonlinear channel by means of Gaussian radial basis functions," Electronics and Communications in Japan, Part 3, vol. 80, no. 6, 1997.

[10] I. Cha and S. A. Kassam, "Channel equalization using adaptive complex radial basis function networks," IEEE Journal on Selected Areas in Communications, vol. 13, no. 1, pp. 122-131, January 1995.