Performance Evaluation of a Radial Basis Function Neural Network Learning Algorithm for Function Approximation.

A. A. Khurshid, PCEA, Nagpur, India
A. P. Gokhale, VNIT, Nagpur, India

Abstract: This paper presents a performance analysis of a learning algorithm that optimizes radial basis function neural networks to approximate target functions from a set of input-output pairs. The algorithm combines a growth criterion, which adds neurons to the network until a stopping criterion is met, with a pruning strategy based on the relative contribution of each hidden unit to the overall network output. The resulting network tends towards a minimal topology with better approximation accuracy. The performance of this algorithm is compared with that of RBF networks built using operators based on singular value decomposition and orthogonal least squares on function approximation problems. For all these problems, the algorithm is shown to achieve better approximation accuracy with fewer hidden neurons.

Keywords: Artificial Neural Network, Radial Basis Functions, Function Approximation

I. Introduction

Radial basis function neural networks [1] consist of neurons which are locally tuned. An RBFNN can be regarded as a feedforward artificial neural network [2] with a single layer of hidden units whose responses are the outputs of radial basis functions, as shown in Figure 1. [Figure 1: radial basis function neural network.] An RBFNN can be described by the following equation:

F(x, Φ, W) = Σ_{f=1}^{m} w_f Φ_f(x, c_f, r_f)

where m is the number of hidden units. The output of the network F depends on the input vector x = (x_1, ..., x_d)^T, on the set of RBFs Φ = (Φ_1, ..., Φ_m), and on the weights W = (w_1, ..., w_m). The contribution of each RBF Φ_f depends on the distance from the input vector x to its centre c_f (a minimal code sketch of this forward pass appears below). The functions of particular interest in the study of RBF networks are Gaussians, multiquadrics, and inverse multiquadrics. The family of RBF networks is broad enough to uniformly approximate any continuous function on a compact set [2]. Due to their simple structure compared with multilayer perceptrons (MLPs) [3], there has been increasing research on RBFNNs and on their applicability as function approximators.

The learning process undertaken by an RBF network may be visualized as follows. The linear weights associated with the output units of the network tend to evolve on a different time scale compared to the nonlinear activation functions of the hidden units. Thus, as the hidden layer's activation functions evolve slowly in accordance with some nonlinear optimization, the output layer weights adjust themselves rapidly through a linear optimization strategy. The important point is that the different layers of an RBF network perform different tasks, so it is reasonable to separate the optimization of the hidden and output layers by using different techniques.

In the classical approach to RBF network implementation, the basis functions are usually chosen as Gaussian and the number of hidden units is fixed a priori based on some properties of the input data. The weights connecting the hidden and output units are estimated by linear least-squares methods, e.g., least mean squares (LMS) [4], [5] or recursive least squares [6]. The disadvantage of the classical approach is that it usually results in too many hidden units. There are several well-known methods to find optimum values for the weights, such as Cholesky decomposition [7] or singular value decomposition [8].
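To make this mapping concrete, the following minimal NumPy sketch (ours, not the paper's; the Gaussian choice of Φ and all identifiers are assumptions) evaluates F(x, Φ, W) for a batch of inputs:

import numpy as np

def rbf_forward(X, centers, radii, weights, bias=0.0):
    """F(x) = bias + sum_f w_f * phi_f(x) with Gaussian basis functions
    phi_f(x) = exp(-||x - c_f||^2 / r_f^2).

    X: (N, d) inputs; centers: (m, d); radii: (m,); weights: (m,).
    """
    # squared distance ||x - c_f||^2 for every (sample, centre) pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)   # (N, m)
    phi = np.exp(-d2 / radii[None, :] ** 2)                          # hidden-layer responses
    return bias + phi @ weights                                      # (N,) network outputs

# e.g., two Gaussian units on a 1-D input
F = rbf_forward(np.array([[0.0], [1.0]]),
                centers=np.array([[0.0], [1.0]]),
                radii=np.array([0.5, 0.5]),
                weights=np.array([1.0, -1.0]))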
In the literature, several algorithms to identify these parameters have been published, especially because one-stage gradient descent algorithms have stability problems when dealing with the spread parameters of the RBFs [9], [10]. One sequential learning algorithm for radial basis function (RBF) networks is the generalized growing and pruning algorithm for RBF (GGAP-RBF) [14]. It introduces the concept of significance for the hidden neurons and then uses it in the learning algorithm to realize parsimonious networks. Computing the significance of the hidden neurons involves an integration of the probability density function over the sampling range for functions other than the popular ones. Moreover, it uses a variable-width Gaussian function whose overlap factor has to be appropriately chosen to determine the overlap of the responses of the hidden neurons in the input space. During pruning, the parameters of the nearest neuron are adjusted, and if the nearest neuron becomes insignificant it is removed. Each parameter adjustment is done using the EKF algorithm.

Another work [15] proposes an algorithm that initializes the centers and the radii and is able to design better RBFNNs. The success of this algorithm resides in the idea of a generic activation function that keeps a balance between the output of the target function and the coordinates of the input vectors; its output is used to calculate the radius of each RBF, performing much better than other heuristics used for this task.

This paper is organized as follows. Section II gives a brief description of the algorithm. Section III compares the proposed algorithm with multilayer feedforward networks (MFNs) trained using other approaches. Section IV summarizes the conclusions from this study.

II. Learning Algorithm

A. Growing criterion: The learning process of this algorithm involves the allocation of new hidden neurons as well as the adaptation of network parameters. The RBF network begins with no hidden neurons. As inputs are received during training, a new hidden neuron is initiated with its centre equal to the input vector with the greatest error, until the specified mean squared error goal is met. The output layer weights are then redesigned to minimize the error. An appropriate choice of the RBF spread is required to fit a smooth function: the larger the spread, the smoother the approximation, but too large a spread means many neurons will be required to fit a fast-changing function, while too small a spread means many neurons will be required to fit a smooth function and the network may not generalize well. Hence the spread is chosen according to the mean absolute deviation of the input-output pairs.

B. Pruning criterion: This uses the basic idea of Yingwei et al. [11]. The pruning strategy removes those hidden units which make an insignificant contribution to the overall network output consecutively over a number of training observations. It uses a sliding window in the pruning criterion to identify the neurons that contribute relatively little to the network output; selection of the appropriate size for this window depends critically on the distribution of the input samples. To realize a compact RBF network, the pruning scheme checks the criterion for all hidden neurons after all training observations have been presented and learned. The criterion is as follows: for every observation, the outputs of the hidden units are first normalized with respect to the maximum output value among all hidden neurons. These normalized values are then compared with a threshold δ, and if any of them falls below this threshold for a sliding window of size M, that particular hidden neuron is removed from the network.

C. The final algorithm is summarized below:

For each observation (x_n, y_n) do
  If mse > goal
    Compute the overall network output:
      f(x_n) = α_0 + Σ_{k=1}^{K} α_k exp(−‖x_n − μ_k‖² / σ_k²),  K = number of hidden units
    Find the input vector with the greatest error, x_n^max.
    Allocate a new hidden unit K+1 with μ_{K+1} = x_n^max and
      α_{K+1} = [W b] * [A; ones(size(A))], where b = sqrt(−log(0.5))/spread
      and A is the output of the hidden layer.
  end if
end for
Check the pruning criterion for the hidden units:
  Compute the hidden unit outputs.
  Find the largest absolute hidden unit output.
  Compute the normalized output values o_r^n, r = 1, ..., K.
  If o_r^n < δ over a sliding window of size M then
    Remove the r-th hidden unit.
    Reduce the dimensionality of the network parameters.
  end if
Adjust the network parameters using RPROP.
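The sketch below (ours) is one possible reading of this loop, under simplifying assumptions: batch least squares stands in for the paper's [W b] weight computation, pruning is applied once over a window of the last M observations, and a plain least-squares refit replaces the RPROP adjustment. All identifiers are illustrative.

import numpy as np

def train_grow_prune(X, y, goal=1e-3, spread=1.0, delta=0.01, M=50, max_units=50):
    """Illustrative grow-then-prune trainer for a Gaussian RBF network."""
    N, d = X.shape
    centers, sigmas = np.empty((0, d)), np.empty(0)
    b = np.sqrt(-np.log(0.5)) / spread       # response falls to 0.5 at distance = spread

    def hidden(Xb):
        if len(sigmas) == 0:
            return np.zeros((Xb.shape[0], 0))
        d2 = ((Xb[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / sigmas[None, :] ** 2)

    def solve_weights():
        design = np.hstack([hidden(X), np.ones((N, 1))])   # append a bias column
        w, *_ = np.linalg.lstsq(design, y, rcond=None)     # linear least squares
        return w, y - design @ w

    # growing phase: add a unit at the worst-error input until mse <= goal
    while len(sigmas) < max_units:
        w, err = solve_weights()
        if np.mean(err ** 2) <= goal:
            break
        i = np.argmax(np.abs(err))                         # input with the greatest error
        centers = np.vstack([centers, X[i]])
        sigmas = np.append(sigmas, 1.0 / b)

    # pruning phase: drop units whose normalized output stays below delta
    if len(sigmas):
        A = hidden(X)
        norm = A / (np.max(np.abs(A), axis=1, keepdims=True) + 1e-12)
        keep = ~np.all(norm[-M:] < delta, axis=0)          # window of the last M observations
        centers, sigmas = centers[keep], sigmas[keep]

    w, _ = solve_weights()   # refit; the paper instead fine-tunes with RPROP
    return centers, sigmas, w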

III. Performance of the Algorithm for Function Approximation

In this section, the performance of the algorithm is reported for several function approximation problems with 1-D, 2-D, and 3-D target functions proposed in the literature.

1-D functions: The algorithm is first tested on 1-D functions used by other authors, and the results are compared with the solutions presented by those authors in terms of the error committed by the model and its complexity. The 1-D target function, originally proposed by Dickerson and Kosko [12], is defined as

dick(x) = 3x(x − 1)(x − 1.9)(x + 0.7)(x + 1.8),  x ∈ [−2.1, 2.1].

The training set was composed of 2100 examples equidistributed in the input interval [−2.1, 2.1], and the test set contained 2200 examples equidistributed in the same range (a sketch of this setup is given at the end of this subsection). Table 1 shows the approximation error reached by the algorithm. Dickerson and Kosko applied a hybrid fuzzy system with ellipsoidal rules trained by several learning methods, while Pomares proposed a fuzzy system based on a complete table of rules using triangular membership functions. Figure 2(a) shows the result of testing the system. The algorithm is also demonstrated on the function f(x) = 0.5x sin²(x) + cos²(x) [16], where it leads to a smaller network size (m = 7) and better accuracy than the above, at the cost of training time, as shown in Figure 2(b). Finally, the algorithm is evaluated on the function f1(x) = sin(2πx)/e^x [15], and the evolved networks with m = 6 and m = 8 are assessed by their approximation error (RMSE); see Figure 2(c).

[Table 1: m (number of RBFs or rules, depending on the model) and MSE on dick(x) for Dickerson & Kosko (1996), Pomares (2000), MOEA (2003, unsupervised and supervised), and the proposed approach; the numeric entries are not legible in the source.]

[Figure 2(a): testing results, test data vs. real data. Figure 2(b): approximation of f(x) = 0.5x sin²(x) + cos²(x). Figure 2(c): approximation of f1(x) = sin(2πx)/e^x.]
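The 1-D benchmark setup above is straightforward to reproduce. The sketch below (ours) builds dick(x) and the equidistributed training and test sets; the commented call refers to the illustrative trainer sketched after Section II, a hypothetical helper rather than the paper's implementation.

import numpy as np

def dick(x):
    # 1-D target function of Dickerson and Kosko [12]
    return 3 * x * (x - 1) * (x - 1.9) * (x + 0.7) * (x + 1.8)

# equidistributed samples over [-2.1, 2.1], as in the experiments above
x_train = np.linspace(-2.1, 2.1, 2100)
x_test = np.linspace(-2.1, 2.1, 2200)
y_train, y_test = dick(x_train), dick(x_test)

# centers, sigmas, w = train_grow_prune(x_train[:, None], y_train, goal=1e-2)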

2-D functions: In this section we use the 2-D functions f5 and f7 originally proposed in [13]. The two target functions are defined as follows:

f5(x1, x2) = 42.659(0.1 + x1(0.05 + x1⁴ − 10x1²x2² + 5x2⁴)),  x1, x2 ∈ [−0.5, 0.5]
f7(x1, x2) = 1.9(1.35 + e^{x1} sin(13(x1 − 0.6)²) e^{−x2} sin(7x2)),  x1, x2 ∈ [0, 1]

The training sets for the experiments presented in this section consist of 1580 samples of the input space for f7 and 700 samples for f5. The test sets were formed by dividing the input interval with a 31 × 31 grid. Tables 2 and 3 show the approximation error reached by the algorithm.

[Table 2 (target function f5) and Table 3 (target function f7): number of RBFs or rules m (depending on the model) and test NRMSE/MSE for the MLP of Cherkassky, Pomares (2000) [17], MOEA (2003), and the proposed approach; the numeric entries are not legible in the source.]

3-D functions: In this section the algorithm is used to approximate the function

y(x1, x2, x3) = 0.1(e^{x1} + x2 x3 cos(x1 x2) + x1 x3),  x1 ∈ [0, 1], x2, x3 ∈ [−2, 2].

This function is highly nonlinear, involving an exponential, three multiplications, and a trigonometric function. A training set is created by generating 1000 uniformly distributed random values of t, with x1(t) = t, x2(t) = 1.61 (constant), and x3(t) = 8t² for 0 ≤ t < 0.5 and −8t… for 0.5 ≤ t < 1. Figure 3 shows the result of testing the system. (A sketch of the 2-D and 3-D target functions is given at the end of this section.)

Function approximation problem: hearta. The hearta dataset consists of 920 examples, of which 690 are used for training and the remaining 230 for testing. Each example is described by 13 input attributes and one output attribute. Among the training examples only 299 are complete; the rest have one or more missing values. As per the PROBEN1 guidelines, missing values were replaced with the mean of the non-missing values for that attribute. In PROBEN1, the 13 input attributes have been coded into 35 input units. The performance was evaluated using the hearta2 dataset with parameters M = 15 and δ = …, and is shown in Table 4.

[Table 4: benchmark hearta, with m, MSE on the training set, and MSE on the test set; the numeric entries are not legible in the source.]
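For reference, the sketch below (ours) codes the 2-D and 3-D target functions and the 31 × 31 evaluation grids. Note that the polynomial coefficients of f5 follow the commonly cited form of this benchmark, an assumption, since they are not legible in the source.

import numpy as np

def f5(x1, x2):
    # 2-D benchmark from [13]; coefficients assumed from the standard form of f5
    return 42.659 * (0.1 + x1 * (0.05 + x1**4 - 10 * x1**2 * x2**2 + 5 * x2**4))

def f7(x1, x2):
    # 2-D benchmark from [13], as defined above
    return 1.9 * (1.35 + np.exp(x1) * np.sin(13 * (x1 - 0.6)**2)
                  * np.exp(-x2) * np.sin(7 * x2))

def y3d(x1, x2, x3):
    # 3-D target used above: one exponential, three products, one trigonometric term
    return 0.1 * (np.exp(x1) + x2 * x3 * np.cos(x1 * x2) + x1 * x3)

# test sets: a 31 x 31 grid over each 2-D function's input domain, as described above
g5 = np.linspace(-0.5, 0.5, 31)
g7 = np.linspace(0.0, 1.0, 31)
X1, X2 = np.meshgrid(g5, g5)
test_f5 = f5(X1, X2)
U1, U2 = np.meshgrid(g7, g7)
test_f7 = f7(U1, U2)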

IV. Conclusion

In this paper, the performance of the algorithm was compared with other approaches on function approximation problems. The results show that the algorithm produces an RBF neural network of smaller complexity and better accuracy. Since the algorithm uses a sliding data window in the pruning criterion, selection of the appropriate size for this window depends critically on the distribution of the input samples, and a proper window size can only be chosen by trial and error based on exhaustive simulation studies. Training data need to be stored and reused for pruning purposes. The learning algorithm adds new neurons, based on their novelty with respect to individual instantaneous observations, until the goal is met. The implementation still depends on all the data being available at the same time; hence it is strictly not a sequential algorithm but a variation of a batch algorithm.

V. References

[1] D. S. Broomhead and D. Lowe, "Multivariate functional interpolation and adaptive networks," Complex Systems, vol. 2, pp. 321-355, 1988.
[2] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Pearson Education.
[3] J. Hertz, A. Krogh, and R. G. Palmer, Introduction to the Theory of Neural Computation, Redwood City: Addison-Wesley, 1991.
[4] J. Moody and C. J. Darken, "Fast learning in networks of locally-tuned processing units," Neural Computation, vol. 1, pp. 281-294, 1989.
[5] M. Musavi, W. Ahmed, K. Faris, and D. Hummels, "On the training of radial basis function classifiers," Neural Networks, vol. 5, no. 4, pp. 595-603, 1992.
[6] S. Chen, S. Billings, and P. Grant, "Recursive hybrid algorithm for non-linear system identification using radial basis function networks," Int. J. Control, vol. 55, pp. 1051-1070, 1992.
[7] S. Chen, C. F. N. Cowan, and P. M. Grant, "Orthogonal least squares learning algorithm for radial basis function networks," IEEE Trans. Neural Networks, vol. 2, pp. 302-309, 1991.
[8] P. P. Kanjilal and D. N. Banerjee, "On the application of orthogonal transformation for the design and analysis of feedforward networks," IEEE Trans. Neural Networks, vol. 6, pp. 1061-1070, 1995.
[9] N. B. Karayiannis, "Reformulated radial basis neural networks trained by gradient descent," IEEE Trans. Neural Networks, vol. 10, pp. 657-671, May 1999.
[10] N. B. Karayiannis and G. W. Mi, "Growing radial basis neural networks: Merging supervised and unsupervised learning with network growth techniques," IEEE Trans. Neural Networks, vol. 8, pp. 1492-1506, Nov. 1997.
[11] Lu Yingwei, N. Sundararajan, and P. Saratchandran, "Performance evaluation of a sequential minimal radial basis function neural network learning algorithm," IEEE Trans. Neural Networks, vol. 9, no. 2, pp. 308-318, March 1998.
[12] J. A. Dickerson and B. Kosko, "Fuzzy function approximation with ellipsoidal rules," IEEE Trans. Syst., Man, Cybern. B, vol. 26, pp. 542-560, Aug. 1996.
[13] V. Cherkassky and H. Lari-Najafi, "Constrained topological mapping for nonparametric regression analysis," IEEE Trans. Neural Networks, vol. 4, no. 1, 1991.
[14] Guang-Bin Huang, P. Saratchandran, and N. Sundararajan, "A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation," IEEE Trans. Neural Networks, vol. 16, no. 1, pp. 57-67, Jan. 2005.
[15] A. Guillén, I. Rojas, J. González, H. Pomares, L. J. Herrera, and A. Prieto, "Supervised RBFNN centers and radii initialization for function approximation problems," Proc. International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, July 16-21, 2006.
[16] M. Fatemi, M. Roopaei, and F. Shabaninia, "New enhanced methods for radial basis function neural networks in function approximation," Proc. Fifth International Conference on Hybrid Intelligent Systems (HIS'05), 2005.
[17] H. Pomares, I. Rojas, J. Ortega, J. González, and A. Prieto, "A systematic approach to a self-generating fuzzy rule-table for function approximation," IEEE Trans. Syst., Man, Cybern. B, vol. 30, pp. 431-447, June 2000.
