A Study of Various Training Algorithms on Neural Network for Angle based Triangular Problem


Amarpal Singh, M.Tech (CS&E), Amity University, Noida, India
Piyush Saxena, M.Tech (CS&E), Amity University, Noida, India
Sangeeta Lalwani, M.Tech (CS&E), Amity University, Noida, India

ABSTRACT
This paper studies various feed-forward back-propagation neural network training algorithms and the performance of different radial basis function neural networks on an angle-based triangular problem. The training algorithms for the feed-forward back-propagation neural network comprise scaled conjugate gradient back-propagation (BP), conjugate gradient BP with Polak-Ribière updates, conjugate gradient BP with Fletcher-Reeves updates, one-step secant BP and resilient BP. The result of each training algorithm on the angle-based triangular problem is also discussed and compared.

General Terms
Artificial Neural Network (ANN), Feed-Forward Back-Propagation (FFBP), Training Algorithm.

Keywords
Feed-forward back-propagation neural network, radial basis function, generalized regression neural network.

1. INTRODUCTION
An Artificial Neural Network (ANN) learns by adjusting its weights so as to correctly categorize the training data and hence, after the testing phase, to classify unknown data. It needs a long time for training but has a high tolerance to noisy and incomplete data. Some salient features of ANNs are as follows:
- Adaptive learning
- Self-organization
- Real-time operation
- Massive parallelism
- Error acceptance by means of redundant information coding
- Learning and generalizing ability
- Distributed representation

There are many different definitions of neural networks quoted by famous researchers:

DARPA Neural Network (NN) Study: a NN is a system composed of many simple processing elements operating in parallel, whose function is determined by network structure, connection strengths, and the processing performed at the computing elements or nodes.

According to Haykin (1994): a neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It is similar to the brain in two respects [1]:
- Knowledge is acquired by the network through a training process.
- Inter-neuron connection strengths, known as synaptic weights, are used to store the knowledge.

According to Nigrin (1993): a NN is a circuit composed of a very large number of simple processing elements that are neurally based. Each element operates only on local information.

According to Zurada (1992): artificial neural systems, or neural networks, are physical cellular systems which can acquire, store and utilize experiential knowledge.

This paper examines how an artificial neural network is implemented for the angle-based triangle problem. The performance (processing speed and accuracy of results) of the training algorithms used on this problem is the research target. Fig. 1 shows the system architecture for the angle-based triangular problem.

System structural design
System structural design starts with creating the training database. After that, the neural network model (training function, design and constraints) is initialized.

Fig 1: Block diagram for the angle-based triangular problem (input: an angle of the triangle; neural network classification; output: MARE and MRE)
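The paper does not show its training matrices, so the following encoding is a hypothetical sketch: each input is one angle in degrees and the target is a class index (1 = acute, 2 = right, 3 = obtuse). All names here are illustrative assumptions.

```matlab
% Hypothetical encoding of the angle-based triangular problem.
angles = [30 45 60 89 90 91 120 150];   % sample input angles in degrees (1 x N)
labels = ones(size(angles));            % 1 = acute (default)
labels(angles == 90) = 2;               % 2 = right
labels(angles >  90) = 3;               % 3 = obtuse
P = angles;                             % training input matrix for the network
T = labels;                             % target matrix for the network
```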

Fig 2: Block diagram for the neural network (create training data; set network parameters and design; train and simulate the network)

2. LITERATURE REVIEW
In this section, various types of neural networks are discussed.

Basic models of artificial neural networks:
- Single-layer feed-forward network: when a layer of processing nodes is formed, the inputs can be connected to these nodes with various weights, resulting in a series of outputs, one per node.
- Multilayer feed-forward network: formed by the interconnection of several layers. The input layer receives the input and has no function except buffering the input signal.
- Single node with its own feedback: a simple recurrent neural network having a single neuron with feedback to itself.
- Single-layer recurrent network: a single-layer network with a feedback connection in which a processing element's output can be directed back to the processing element itself, to other processing elements, or to both.
- Multilayer recurrent network: a processing element's output can be directed back to nodes in a preceding layer, forming a multilayer recurrent network.

Fig 3: Basic models of ANN

Feed-forward Back-Propagation (FFBP) Neural Network
This neural network was trained and validated with the various feed-forward backprop training algorithms available in the Matlab Neural Network toolbox [2].

SUPPORTED TRAINING FUNCTIONS IN FFBP NEURAL NETWORK [10]
The supported training functions are as follows: trainb (batch training with weight and bias learning rules), trainbfg (BFGS quasi-Newton BP), trainbr (Bayesian regularization), trainc (cyclical order incremental update), traincgb (Powell-Beale conjugate gradient BP), traincgf (Fletcher-Reeves conjugate gradient BP), traincgp (Polak-Ribière conjugate gradient BP), traingd (gradient descent BP), traingda (gradient descent with adaptive learning rate BP), traingdm (gradient descent with momentum BP), traingdx (gradient descent with momentum and adaptive learning rate BP), trainlm (Levenberg-Marquardt BP), trainoss (one-step secant BP), trainr (random order incremental update), trainrp (resilient backpropagation, Rprop), trains (sequential order incremental update), trainscg (scaled conjugate gradient BP).

SUPPORTED LEARNING FUNCTIONS IN FFBP NEURAL NETWORK [10]
The supported learning functions are as follows: learncon (conscience bias learning), learngd (gradient descent weight/bias learning), learngdm (gradient descent with momentum weight/bias learning), learnh (Hebb weight learning), learnhd (Hebb with decay weight learning rule), learnis (instar weight learning), learnk (Kohonen weight learning), learnlv1 (LVQ1 weight learning), learnlv2 (LVQ2 weight learning), learnos (outstar weight learning), learnp (perceptron weight and bias learning), learnpn (normalized perceptron weight and bias learning), learnsom (self-organizing map weight learning), learnwh (Widrow-Hoff weight and bias learning rule).
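As an illustration of selecting among the functions just listed, a network could be configured as in the sketch below. This uses the pre-2010 toolbox syntax; the layer sizes follow the paper's stated 10 hidden neurons, while the particular training/learning function pairing is an example, not the paper's only configuration.

```matlab
% Two-layer feed-forward backprop network: 10 tansig hidden neurons,
% one purelin output, trainscg training and learngdm weight learning.
net = newff(minmax(P), [10 1], {'tansig','purelin'}, ...
            'trainscg', 'learngdm', 'mse');
net = train(net, P, T);   % train on inputs P and targets T
Y   = sim(net, P);        % simulate the trained network
```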
TRANSFER FUNCTIONS IN FFBP NEURAL NETWORK [10]
The transfer functions are as follows: compet (competitive), hardlim (hard limit), hardlims (symmetric hard limit), logsig (log sigmoid), poslin (positive linear), purelin (linear), radbas (radial basis), satlin (saturating linear), satlins (symmetric saturating linear), tansig (hyperbolic tangent sigmoid), tribas (triangular basis).

TRANSFER DERIVATIVE FUNCTIONS [10]
The transfer derivative functions are as follows: dhardlim (hard limit transfer derivative), dhardlms (symmetric hard limit transfer derivative), dlogsig (log sigmoid transfer derivative), dposlin (positive linear transfer derivative), dpurelin (linear transfer derivative), dradbas (radial basis transfer derivative), dsatlin (saturating linear transfer derivative), dsatlins (symmetric saturating linear transfer derivative), dtansig (hyperbolic tangent sigmoid transfer derivative), dtribas (triangular basis transfer derivative).

WEIGHT & BIAS INITIALIZATION FUNCTIONS [10]
The weight and bias initialization functions are as follows: initcon (conscience bias initialization), initzero (zero weight/bias initialization), midpoint (midpoint weight initialization), randnc (normalized column weight initialization), randnr (normalized row weight initialization), rands (symmetric random weight/bias initialization).

WEIGHT DERIVATIVE FUNCTIONS [10]
The weight derivative function in the toolbox is ddotprod (dot product weight derivative function).
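A small sketch comparing three of the listed transfer functions on the same inputs (standard toolbox implementations assumed):

```matlab
% Evaluate and plot a few transfer functions elementwise.
x  = -5:0.1:5;
y1 = logsig(x);     % log sigmoid: 1/(1 + exp(-x)), output in (0, 1)
y2 = tansig(x);     % hyperbolic tangent sigmoid, output in (-1, 1)
y3 = hardlim(x);    % hard limit: 0 for x < 0, 1 otherwise
plot(x, y1, x, y2, x, y3);
legend('logsig', 'tansig', 'hardlim');
```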

In this paper, fifteen training algorithms and two learning functions, namely learngd and learngdm, are used.

RBF Neural Network
RBFs are embedded in a two-layer neural network in which each hidden unit implements a radially activated function and the output units implement a weighted sum of the hidden-unit outputs. The input into an RBF network is nonlinear while the output is linear. To use an RBF network, one needs to specify the hidden-unit activation function, the number of processing units, a criterion for modeling a given task, and a training algorithm for finding the parameters of the network. After training, the RBF network can be used with data whose underlying statistics are comparable to those of the training set. RBF networks have been successfully applied to a large diversity of applications including interpolation, time-series modeling, speech recognition, etc.

Fig 4: Network topology of RBF (input layer, hidden units, output)

GRNN (Generalized Regression Neural Network)
A GRNN consists of an RBF layer with a unique linear layer, used for function approximation with an adequate number of hidden neurons. The MATLAB Neural Network Toolbox function newgrnn has been used for training the network and testing its performance against validation data. Fig 5 shows the architecture of a GRNN. It is comparable to the RBF network, but it has a somewhat different second layer.

Fig. 5: GRNN network topology

Some researchers use these neural network fundamentals to propose their own metrics for their research areas. Khoshgoftaar et al. [3] applied the concept of the NN as a tool for predicting software quality. They presented a discriminant model and a NN representation of a large telecommunications system, classifying modules as fault-prone or not fault-prone. They compared the neural network model with a nonparametric discriminant model and found that the neural network model had better predictive accuracy. Specht [4] has stated that the GRNN is a memory-based network that provides estimates of continuous variables and converges to the underlying (linear or nonlinear) regression surface. It is a one-pass learning algorithm with a highly parallel structure. Even with sparse data in a multidimensional measurement space, the algorithm provides smooth transitions from one observed value to another.

3. RESEARCH METHODOLOGY
In this section, the research methodology for the implementation of the problem is provided.

Feed-forward Back-propagation Neural Networks
Backprop implements a gradient descent search through the space of possible network weights, iteratively reducing the error E between the training example target values and the network outputs. It is guaranteed to converge only towards some local minimum. It is a training procedure which allows multilayer feed-forward neural networks to be trained.

Fig 6: Architecture of a feed-forward network

However, the major disadvantages of BP are its relatively slow convergence rate [11] and its tendency to become trapped at local minima. Many powerful optimization algorithms have been devised, most of which are based on the simple gradient descent algorithm, as explained by C.M. Bishop [12], such as conjugate gradient descent, scaled conjugate gradient descent, quasi-Newton BFGS and Levenberg-Marquardt methods.

For feed-forward networks: a continuous function can be differentiated, allowing gradient descent. Back-propagation is an example of a gradient-descent technique. It uses a sigmoid (binary or bipolar) activation function.
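Returning to the GRNN described above, a minimal sketch of the newgrnn call follows; the spread value is an assumption, since the paper does not report one.

```matlab
% One-pass GRNN design via newgrnn (old-style toolbox API); no
% iterative training is involved.
spread = 1.0;                  % smoothing parameter for the RBF layer (assumed)
net = newgrnn(P, T, spread);   % design the GRNN from inputs P and targets T
Y   = sim(net, P);             % evaluate (here on the training inputs)
```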

In multilayer networks, the activation function is usually more complex than a simple threshold function, for example 1/[1+exp(-x)] or even 2/[1+exp(-x)] - 1 to allow for inhibition, etc.

Gradient Descent
Gradient-Descent(training_examples, η)
Each training example is a pair of the form <(x_1, ..., x_n), t>, where (x_1, ..., x_n) is the vector of input values, t is the target output value, and η is the learning rate (e.g. 0.1).
  Initialize each w_i to some small random value
  Until the termination condition is met, do:
    Initialize each Δw_i to zero
    For each <(x_1, ..., x_n), t> in training_examples, do:
      Input the instance (x_1, ..., x_n) to the linear unit and compute the output o
      For each linear unit weight w_i: Δw_i = Δw_i + η (t - o) x_i
    For each linear unit weight w_i: w_i = w_i + Δw_i
(A runnable MATLAB sketch of this procedure appears at the end of this section.)

Sigmoid Activation Function

Fig. 7: Sigmoid activation function. A unit with bias input x_0 = 1 and weights w_1, ..., w_n computes net = Σ_{i=0}^{n} w_i x_i and output o = σ(net) = 1/(1 + e^(-net)).

Deriving the gradient descent rule to train one sigmoid unit:

∂E/∂w_i = -Σ_d (t_d - o_d) o_d (1 - o_d) x_{i,d}

For multilayer networks of sigmoid units, the corresponding procedure is backpropagation.

Conjugate Gradient
This is a well-accepted iterative technique for solving huge systems of linear equations [6]. In the first iteration, the conjugate gradient algorithm finds the steepest descent direction. Three types of conjugate gradient algorithms are described: scaled conjugate gradient backpropagation (SCG), conjugate gradient BP with Polak-Ribière updates (CGP) and conjugate gradient BP with Fletcher-Reeves updates (CGF). The approximate solution x_k at each conjugate gradient iteration is given by [7]:

x_k = x_{k-1} + α_k d_{k-1}

Scaled Conjugate Gradient Backpropagation (SCG)
SCG is a second-order conjugate gradient algorithm that helps minimize goal functions of several variables. Moller [7] proved its theoretical foundation: like standard backpropagation it uses first-order information (first derivatives), but it also exploits second-order information (second derivatives), which helps it find its way to a local minimum.

Conjugate Gradient Backpropagation with Fletcher-Reeves Updates (CGF)
The second version of the conjugate gradient algorithm was proposed by Fletcher and Reeves. As with the Polak-Ribière algorithm, the search direction at each iteration is d_k = -g_k + β_k d_{k-1}, where g_k is the current gradient; for the Fletcher-Reeves update, the constant β_k is computed as

β_k = (g_k^T g_k) / (g_{k-1}^T g_{k-1})

Conjugate Gradient Backpropagation with Polak-Ribière Updates (CGP)
One more version of the conjugate gradient algorithm was proposed by Polak and Ribière. The search direction at each iteration is the same as in the SCG search direction equation; however, for the Polak-Ribière update, the constant β_k is computed as

β_k = ((g_k - g_{k-1})^T g_k) / (g_{k-1}^T g_{k-1})

Quasi-Newton Algorithms (One-Step Secant Backpropagation (OSS))
An alternative way to speed up conjugate gradient optimization is Newton's method. The fundamental step of Newton's method is

x_{k+1} = x_k - A_k^{-1} g_k

where A_k is the Hessian matrix of the performance index at the current values of the weights. The one-step secant method approximates this step without computing and storing the full Hessian.

Heuristic Algorithms (Resilient Backpropagation (RP))
The purpose of the resilient backpropagation training algorithm is to eliminate the harmful effects of the magnitudes of the partial derivatives [5].

Performance Evaluation Using Feed-Forward Backpropagation Neural Networks [8]
The first examination was to compare the predictive accuracy of feed-forward neural networks trained with the various backpropagation training algorithms. The networks were trained and validated with the feed-forward backprop training algorithms available in the Matlab Neural Network toolbox [2].
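The sketch below is a minimal MATLAB rendering of the Gradient-Descent procedure given earlier in this section for a single linear unit (the delta rule); the function name and the fixed-epoch termination condition are illustrative choices.

```matlab
% Batch gradient descent (delta rule) for one linear unit.
% X: n-by-N matrix of training inputs, t: 1-by-N targets,
% eta: learning rate (e.g. 0.1), epochs: number of passes.
function w = gradient_descent(X, t, eta, epochs)
    [n, N] = size(X);
    X = [ones(1, N); X];            % prepend the bias input x0 = 1
    w = 0.01 * randn(1, n + 1);     % small random initial weights
    for epoch = 1:epochs            % termination: fixed epoch count
        dw = zeros(1, n + 1);       % initialize each delta-w_i to zero
        for k = 1:N
            o  = w * X(:, k);                      % linear unit output
            dw = dw + eta * (t(k) - o) * X(:, k)'; % accumulate eta*(t - o)*x_i
        end
        w = w + dw;                 % batch update: w_i = w_i + delta-w_i
    end
end
```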
The predictive accuracy of the training algorithms was compared using two measures.

Mean absolute relative error (MARE) [9]: this is the preferred measure used by software engineering researchers.

Mean relative error (MRE) [9]: this measure is used to estimate whether models are biased and tend to overestimate or underestimate.
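Assuming the conventional definitions from the effort-estimation literature cited as [9] (an assumption, since the paper's own formulas are not shown), with y_i the observed value, ŷ_i the network prediction and n the number of samples, the two measures would be:

```latex
\mathrm{MARE} = \frac{1}{n}\sum_{i=1}^{n}\frac{\left|\,y_i-\hat{y}_i\,\right|}{y_i},
\qquad
\mathrm{MRE}  = \frac{1}{n}\sum_{i=1}^{n}\frac{y_i-\hat{y}_i}{y_i}.
```

Because MRE keeps the sign of each error, a value far from zero indicates a systematically biased model, which matches the stated purpose of the measure.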

Radial Basis Function (RBF) Neural Network
The RBF is a classification and functional approximation neural network developed by M.J.D. Powell. The network uses the most common nonlinearities, such as sigmoidal and Gaussian kernel functions. Gaussian functions are also used in regularization networks. The Gaussian function is generally defined as φ(r) = exp(-r² / 2σ²).

Fig 8: Gaussian function

Performance evaluation using Radial Basis Function (RBF) Neural Networks [8]
The second investigation was to construct performance prediction models and compare their predictive accuracy using the different radial basis function neural networks available in the Matlab Neural Network toolbox [2]. Three radial basis function networks are available in the toolbox:
i. Exact design radial basis networks
ii. Efficient design radial basis networks
iii. Generalized regression neural networks

4. PROBLEM STATEMENT
This part of the paper explores the problem statement for the implementation of the neural network with feed-forward backpropagation and radial basis functions. In this problem, a triangle is identified based on its angle input: different types of angle-based triangles are identified using radial basis function neural networks and a feed-forward back-propagation neural network. The training data, test data and target data are given to the neural networks as input, in the form of matrices. The training data is given as the learning set for the network; the learning data can be changed according to different problem statements.

5. IMPLEMENTATION
Feed-forward backpropagation NN using Matlab
On Training Info, choose Inputs and Targets. On Training Parameters, specify:
- epochs = 1000 (learning is better with a large number of epochs and a long training duration)
- goal = 1e-15
- max_fail = 50
This will produce a learning and performance graph. The graph should decay (since the error is being minimized), similar to that in Fig 9.

Fig 9: Train network

Radial basis function neural networks using Matlab
Make sure the parameters are as follows:
- Network type = radial basis function (exact fit)
- Spread constant = 1.0
Network training: on clicking the Create button, the network is automatically trained, which concludes the network implementation phase.
Network simulation: now test the test data S on the NN Network Manager and follow the same method indicated before (as for the input P). Then perform the same steps with the radial basis function (fewer neurons) neural network and the generalized regression neural network (GRNN).
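A sketch of the parameter settings and the exact-fit RBF variant described above, continuing from the feed-forward net created earlier; the old-style toolbox API is assumed, and S stands in for the test data named in the text.

```matlab
% Training parameters quoted in the text, applied to the earlier net.
net.trainParam.epochs   = 1000;    % large epoch count
net.trainParam.goal     = 1e-15;   % error goal
net.trainParam.max_fail = 50;      % allowed validation failures
[net, tr] = train(net, P, T);      % produces the performance graph (Fig 9)

% Exact-fit radial basis network with spread constant 1.0.
rbf  = newrbe(P, T, 1.0);          % newrbe: zero-error design on P
S    = P;                          % stand-in for the test data S
Yrbf = sim(rbf, S);                % simulate on the test data
```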

6. OBSERVATIONS AND RESULTS
The observations for the different training functions are formulated as tables. All training algorithms are used with the LEARNGDM adaption learning function, the MSE performance function, number of layers = 2, number of neurons = 10 and transfer function = tansig:

Training function   Iterations   MARE     MRE
TRAINBFG                 1        0.36    -0.192
TRAINBR                  6        1.3     -0.21
TRAINCGB                 2        0.07     0.09
TRAINCGF                 3        0.14    -0.24
TRAINGD                  7        1.03     0.16
TRAINGDM                 8       35.7     -0.06
TRAINGDA                 7        0.07     0.12
TRAINGDX                 6        8.5      0.34
TRAINLM               1000       39.3      0.18
TRAINOSS                 2        0.58     0.15
TRAINRP                  6        0.3     -0.29
TRAINR                1000        0.36    -0.12
TRAINSCG               206       67.2      0.091
TRAINCGP                 3        8.59    -0.4

Table 1: Results of error prediction with different training functions

With changed parameters (adaption learning function = LEARNGD, number of neurons = 10, transfer function = purelin):

Training function   Iterations   MARE     MRE
TRAINBFG                 2        0.0018   0.2
TRAINBR                  7        0.27     0.005
TRAINCGB                27       48.05     0.028
TRAINCGF                 5       55.9     88.7
TRAINGD                 68        0.38     0.14
TRAINGDM                 6        1.18     0.14
TRAINGDA                 6        8.3      0.42
TRAINGDX                 6        3.1      0.14
TRAINLM               1000      416.4      0.28
TRAINOSS                 3        1.0     -0.38
TRAINRP                  8        0.26    -0.15
TRAINR                1000        0.14    -0.2
TRAINSCG                22        8.0      0.27
TRAINCGP                 4      124.89     0.15

Table 2: Performance of training functions after a change in parameters

RBF network                MARE    MRE
Generalized Regression     0.18    0.27
RBF (exact fit)            0.5    -0.33
RBF (fewer neurons)        0.49   -0.14

Table 3: Performance with different RBF networks

7. CONCLUSION
In this paper, the performance of RBF networks is observed: neural networks were simulated for prediction accuracy using radial basis function neural networks. The simulated network for the prediction of errors can be used to analyze the effect of changing any metric, and MARE and MRE values can be predicted while increasing or decreasing different metrics. It is concluded that the radial basis function (exact fit) network has better accuracy than any other radial basis network. The same approach can be applied to predict MARE and MRE using a feed-forward backpropagation neural network; there, the TRAINCGB (Powell-Beale conjugate gradient back-propagation) training function predicts with better accuracy than any other training function.

Several different algorithms for training the networks were evaluated: classical backpropagation with variable learning rates, scaled conjugate gradient, conjugate gradient with Polak-Ribière updates, conjugate gradient with Fletcher-Reeves updates, one-step secant and resilient backpropagation were tested as well, but were found to require too much RAM to allow PC-compatible computers to be used efficiently. All tested artificial neural networks (ANNs) were feed-forward networks in which the inputs were processed by a hidden layer of units using the so-called tan-sigmoid output function [13], which fed into a linear output layer that predicted the errors.

The conjugate gradient algorithms, in particular trainscg, appear to perform well over a wide variety of problems, particularly for networks with a large number of weights. The SCG algorithm is almost as fast as the LM algorithm on function approximation problems (faster for large networks) and almost as fast as RP on pattern recognition problems, and its performance does not degrade as quickly as RP's does when the error is reduced. The conjugate gradient algorithms also have comparatively modest memory requirements [5].

Fig 10: Graphical comparison of some training functions

8. FUTURE SCOPE
The RP function is the fastest algorithm and can be used for a wide variety of problems such as pattern recognition. Its main drawbacks are that it does not perform well on function approximation problems and that its performance degrades as the error goal is reduced; on the other hand, its memory requirements are comparatively small in contrast with the other algorithms considered. In future work, custom training algorithms can be designed on top of these neural network algorithms for better results focusing on accuracy; these may be based on fuzzy logic, artificial intelligence based algorithms, or support vector based algorithms, etc.

9. REFERENCES
[1] S. Haykin, Neural Networks: A Comprehensive Foundation, Prentice Hall, 1999.
[2] Neural Network Toolbox product description, available at: http://www.mathworks.in/products/neuralnetwork/description3.html
[3] T.M. Khoshgoftaar, E.B. Allen, J.P. Hudepohl, and S.J. Aud, "Application of neural networks to software quality modeling of a very large telecommunications system," IEEE Transactions on Neural Networks, vol. 8, pp. 902-909, 1997.
[4] D.F. Specht, "A general regression neural network," IEEE Transactions on Neural Networks, vol. 2, issue 6, pp. 568-576, 1991.
[5] Mathworks Online. Available: http://www.mathworks.com, 2009.
[6] J.R. Shewchuk, An Introduction to the Conjugate Gradient Method Without the Agonizing Pain, August 1994.
[7] M.F. Moller, "A scaled conjugate gradient algorithm for fast supervised learning," Neural Networks, 6:525-533, 1993.
[8] S.N. Sivanandam, S. Sumathi, and S.N. Deepa, Introduction to Neural Networks Using Matlab 6.0, Tata McGraw Hill, New Delhi, 2006.
[9] G. Finnie and G. Wittig, "AI tools for software development effort estimation," International Conference on Software Engineering: Education and Practice, 1996.
[10] S.N. Sivanandam and S.N. Deepa, Principles of Soft Computing, Wiley Publications.
[11] Y.H. Zweiri, J.F. Whidborne, and L.D. Seneviratne, "A three-term backpropagation algorithm," Neurocomputing, 50: 305-318, 2002.
[12] C.M. Bishop, Neural Networks for Pattern Recognition, Chapter 7, pp. 253-294, Oxford University Press, 1995.
[13] H. Demuth and M. Beale, Neural Network Toolbox, Mathworks, Natick, Massachusetts, 1998.