In: F. Fogelman and P. Gallinari, editors, ICANN'95: International Conference on Artificial Neural Networks, Paris, France, 1995. EC2 & Cie.

Incremental Learning of Local Linear Mappings

Bernd Fritzke
Institut fuer Neuroinformatik, Ruhr-Universitaet Bochum, Germany

Abstract

A new incremental network model for supervised learning is proposed. The model builds up a structure of units, each of which has an associated local linear mapping (LLM). Error information obtained during training is used to determine where to insert new units whose LLMs are interpolated from their neighbors. Simulation results for several classification tasks indicate fast convergence as well as good generalization. The ability of the model to also perform function approximation is demonstrated by an example.

1 Introduction

Local (or piece-wise) linear mappings (LLMs) are an economic means of describing a "well-behaved" function f: R^n -> R^m. The principle is to approximate the function (which may be given by a number of input/output samples (ξ, ζ) ∈ R^n × R^m) with a set of linear mappings, each of which is constrained to a local region of the input space R^n. LLM-based methods have been used earlier to learn the inverse kinematics of robot arms [7], for classification [4], and for time-series prediction [6].

A general problem that has to be solved when using LLMs is to partition the input space into a number of parcels such that within each parcel the function f can be described sufficiently well by a linear mapping. Those parcels may be rather large in areas of R^n where f indeed behaves approximately linearly and must be smaller where this is not the case. The total number of parcels needed depends on the desired approximation accuracy and may be limited by the amount of available sample data, since over-fitting might occur.

A widely used method to achieve a partitioning of the input space into parcels is to choose a number of centers in R^n and use the corresponding Voronoi tessellation (which associates each point with the center at minimum Euclidean distance). Existing LLM-based approaches generally assume a fixed number of centers which are distributed in input space by some vector quantization method. Thereafter, or even during the vector quantization, the linear mapping f_c: R^n -> R^m associated with each center c is learned by evaluating data pairs. A problem with this approach, however, is that the vector quantization method is driven only by the n-dimensional input part of the data pairs (ξ, ζ) and, therefore, does not take into account at all the linearity or non-linearity of f. Rather, the centers are distributed according to the density of the input data, which may result in a partition that is sub-optimal for the given task. It may happen, e.g., that a region of R^n where f is perfectly linear is partitioned into many parcels because a large part of the available input data happens to lie in this region.

In this paper we propose a method for incrementally generating a partition of the input space. Our approach uses locally accumulated approximation error to determine where to insert new centers (and associated LLMs). The principle of insertion based on accumulated error has been used earlier for the incremental construction of radial basis function networks [2, 1]. Here we adapt the same idea for LLM-based networks.
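As a concrete rendering of the partition just described, a minimal Python sketch of the Voronoi-style assignment: each input falls into the parcel of its nearest center. The NumPy array layout (a `centers` array of shape (k, n)) is our illustrative assumption, not code from the paper.

```python
import numpy as np

def voronoi_cell(xi, centers):
    """Return the index of the nearest center, i.e. the Voronoi parcel containing xi."""
    # Euclidean distance from xi to every center; the argmin defines the parcel.
    return int(np.argmin(np.linalg.norm(centers - xi, axis=1)))
```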
The rest of the paper is organized as follows: we first briefly describe the "growing neural gas" method which we have proposed earlier [3]. Then the combination with LLMs is outlined, and finally some simulation results are given.

2 Growing Neural Gas

"Growing neural gas" (GNG) is an unsupervised network model which learns topologies [3]: it incrementally constructs a graph representation of a given data set which is n-dimensional but may stem from a lower-dimensional sub-manifold of the input space R^n. In the following we assume that the data obeys some (unknown) probability distribution P(ξ). In particular, the data set need not be finite but may also be generated continuously by some stationary process.

The GNG method distributes a set of centers (or units) in R^n. This is done partially by adaptation steps but mostly by interpolation of new centers from existing ones. Between two centers there may be an edge indicating neighborhood in R^n. These edges, which are used for interpolation (see below), are inserted with the "competitive Hebbian learning" rule [5] during the mentioned adaptation steps. The "competitive Hebbian learning" rule can simply be stated as: "Insert an edge between the nearest and second-nearest center with respect to the current input signal."

The GNG algorithm is the following (for a more detailed discussion see [3]); a code sketch of one adaptation cycle follows the listing:

0. Start with two units a and b at random positions w_a and w_b in R^n.
1. Generate an input signal ξ according to P(ξ).
2. Find the nearest unit s_1 and the second-nearest unit s_2.
3. Increment the age of all edges emanating from s_1.
4. Add the squared distance between the input signal and the nearest unit in input space to a local error variable: Δerror(s_1) = ||w_{s_1} - ξ||^2.
5. Move s_1 and its direct topological neighbors (see Note 1) towards ξ by fractions ε_b and ε_n, respectively, of the total distance: Δw_{s_1} = ε_b (ξ - w_{s_1}) and Δw_n = ε_n (ξ - w_n) for all direct neighbors n of s_1.
6. If s_1 and s_2 are connected by an edge, set the age of this edge to zero. If such an edge does not exist, create it.
7. Remove edges with an age larger than a_max. If this results in units having no emanating edges, remove them as well.
8. If the number of input signals generated so far is an integer multiple of a parameter λ, insert a new unit as follows: Determine the unit q with the maximum accumulated error. Insert a new unit r halfway between q and its neighbor f with the largest error variable: w_r = 0.5 (w_q + w_f). Insert edges connecting the new unit r with units q and f, and remove the original edge between q and f. Decrease the error variables of q and f by multiplying them with a constant α. Initialize the error variable of r with the new value of the error variable of q.
9. Decrease all error variables by multiplying them with a decay constant d.
10. If a stopping criterion (e.g., net size or some performance measure) is not yet fulfilled, continue with step 1.

Note 1: Throughout this paper the term "neighbors" denotes units which are topological neighbors in the graph (as opposed to units within a small Euclidean distance of each other in input space).
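A minimal NumPy sketch of one cycle of steps 1-9, under stated assumptions: the data structures (a position array, an error array, and an age-keyed edge dictionary) are our own, isolated-unit removal in step 7 is omitted for brevity, and the a_max value is illustrative (the paper does not quote one); the remaining parameter values are those given later in the figure 1 caption.

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.random((2, 2))        # step 0: two units at random positions (n = 2 here)
error = np.zeros(2)           # locally accumulated error per unit
edges = {}                    # {frozenset({i, j}): age}

eps_b, eps_n, lam, alpha, d = 0.02, 0.0006, 300, 0.5, 0.9995
a_max = 50                    # illustrative; not listed in the paper

def neighbors(i):
    """Direct topological neighbors of unit i in the edge graph."""
    return [j for e in edges for j in e if i in e and j != i]

def gng_step(xi, t):
    global w, error
    dists = np.linalg.norm(w - xi, axis=1)
    s1, s2 = np.argsort(dists)[:2]                   # step 2: nearest two units
    for e in edges:
        if s1 in e:
            edges[e] += 1                            # step 3: age edges at s1
    error[s1] += dists[s1] ** 2                      # step 4: accumulate squared distance
    w[s1] += eps_b * (xi - w[s1])                    # step 5: adapt the winner ...
    for n in neighbors(s1):
        w[n] += eps_n * (xi - w[n])                  # ... and its direct neighbors
    edges[frozenset((s1, s2))] = 0                   # step 6: refresh or create edge
    for e in [e for e, age in edges.items() if age > a_max]:
        del edges[e]                                 # step 7 (isolated-unit removal omitted)
    if t % lam == 0:                                 # step 8: insert a new unit r
        q = int(np.argmax(error))
        f = max(neighbors(q), key=lambda j: error[j])
        r = len(w)
        w = np.vstack([w, 0.5 * (w[q] + w[f])])      # halfway between q and f
        del edges[frozenset((q, f))]
        edges[frozenset((q, r))] = edges[frozenset((f, r))] = 0
        error[q] *= alpha
        error[f] *= alpha
        error = np.append(error, error[q])           # r starts with q's new error
    error *= d                                       # step 9: decay all errors
```

Training then amounts to drawing ξ from P(ξ) and calling gng_step(xi, t) for t = 1, 2, ... until the stopping criterion of step 10 holds.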
How does this method work? The adaptation steps towards the input signals (step 5) lead to a general movement of all units towards those areas of the input space where signals come from (P(ξ) > 0). The insertion of edges (step 6) between the nearest and the second-nearest unit with respect to an input signal generates a single connection of the "induced Delaunay triangulation", a subgraph of the Delaunay triangulation restricted to areas of the input space with P(ξ) > 0. The removal of edges (step 7) is necessary to get rid of those edges which are no longer part of the "induced Delaunay triangulation" because their end points have moved and other units now lie between them. This is achieved by local edge aging (step 3) around the nearest unit, combined with age re-setting (step 6) of those edges which already exist between nearest and second-nearest units. With insertion and removal of edges the model tries to construct and then track the "induced Delaunay triangulation", which is a slowly moving target due to the adaptation of the reference vectors. The accumulation of squared distances (step 4) during the adaptation helps to identify units lying in areas of the input space where the mapping from signals to units causes much error. To reduce this error, new units are inserted in such regions.

3 GNG and LLM

The GNG model just described is unsupervised, and it inserts new units in order to reduce the mean distortion error. For this reason the distortion error is locally accumulated, and new units are inserted near the unit with maximum accumulated error. How can this principle be used for supervised learning? We first have to define what the network's output is (which was not necessary for unsupervised learning). Then we can use the difference between actual and desired output to guide the insertion of new units.

Our original problem was to approximate a function f: R^n -> R^m which is given by a number of data pairs (ξ, ζ) ∈ R^n × R^m. One should note that this problem includes classification tasks as a special case: the different classes can be encoded by a small number of m-dimensional vectors, which are often chosen to be binary (1-out-of-m).

With every unit c of the GNG network (c is positioned at w_c in input space) we now associate an m-dimensional output vector ζ_c and an m × n matrix A_c. The vector ζ_c is the output of the network for the case ξ = w_c, i.e., for input vectors coinciding with one of the centers. For a general input vector ξ the nearest center s_1 is determined, and the output g(ξ) of the network is computed from the LLM realized by the stored vector ζ_{s_1} and the matrix A_{s_1} as follows:

    g(ξ) = ζ_{s_1} + A_{s_1} (ξ - w_{s_1})

We now have to change the original GNG algorithm to incorporate the LLMs. Since we are interested in reducing the expected mean square error E(||ζ - g(ξ)||^2) for data pairs (ξ, ζ), we change step 4 of the GNG algorithm to

    Δerror(s_1) = ||ζ - g(ξ)||^2

This means that we now locally accumulate the error with respect to the function to be approximated. New units are inserted where the approximation is poor.
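A short sketch of this output computation and of the modified step 4, assuming per-unit arrays w (centers, shape (k, n)), zeta (output vectors, shape (k, m)), and A (matrices, shape (k, m, n)); the names and layout are our illustrative choices, not the paper's code.

```python
import numpy as np

def g(xi, w, zeta, A):
    """Network output g(xi) = zeta_{s1} + A_{s1} (xi - w_{s1})."""
    s1 = int(np.argmin(np.linalg.norm(w - xi, axis=1)))  # nearest center s1
    return zeta[s1] + A[s1] @ (xi - w[s1])               # local linear mapping

def supervised_error(xi, zeta_target, w, zeta, A):
    """Error term accumulated at s1, replacing GNG step 4 in the supervised model."""
    return float(np.sum((zeta_target - g(xi, w, zeta, A)) ** 2))
```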
The LLMs associated with the units of our network are initially set at random. At each adaptation step the data pair (ξ, ζ) is used two-fold: ξ is used (as before) for center adaptation, and the whole pair (ξ, ζ) is used to improve the LLM of the nearest center s_1. This is done with a simple delta rule:

    Δζ_{s_1} = ε_m (ζ - g(ξ))
    ΔA_{s_1} = ε_m (ζ - g(ξ)) ⊗ (ξ - w_{s_1})

Thereby ε_m is an adaptation parameter and ⊗ denotes the outer product of two vectors. When a new unit r is inserted (step 8 of the GNG algorithm), its LLM is interpolated from its neighbors q and f:

    ζ_r = 0.5 (ζ_q + ζ_f)
    A_r = 0.5 (A_q + A_f)
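A sketch of these two updates, reusing the array layout of the earlier g(ξ) sketch; the in-place mutation style and the ε_m default (taken from the figure 1 caption) are our illustrative choices.

```python
import numpy as np

def llm_update(xi, zeta_target, w, zeta, A, eps_m=0.15):
    """Delta-rule improvement of the nearest unit's LLM for one pair (xi, zeta)."""
    s1 = int(np.argmin(np.linalg.norm(w - xi, axis=1)))
    residual = zeta_target - (zeta[s1] + A[s1] @ (xi - w[s1]))  # zeta - g(xi)
    zeta[s1] += eps_m * residual                           # delta rule for zeta_{s1}
    A[s1] += eps_m * np.outer(residual, xi - w[s1])        # outer-product update of A_{s1}

def interpolate_llm(zeta, A, q, f):
    """LLM of a newly inserted unit r, interpolated from its neighbors q and f."""
    return 0.5 * (zeta[q] + zeta[f]), 0.5 * (A[q] + A[f])
```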
A stopping criterion has to be defined to finish the growth process. It can be chosen freely depending on the application. A possible choice is to observe network performance on a validation set during training and to stop when this performance begins to decrease. Alternatively, the error on the training set may be used, or simply the number of units in the network, if for some reason a specific network size is desired.

4 Simulation Examples

In the following, some simulation examples are given in order to provide some insight into the performance of the method and the kind of solutions generated.

Let us first consider the XOR problem. XOR is not interesting per se, but since it is well known, we find it useful as an initial example. In figure 1 the final output of a GNG-LLM network for an XOR-like problem is shown, together with the decision regions illustrating how the network generalizes over unseen patterns. The solution shown was obtained after the presentation of 300 single patterns. In contrast, a multi-layer perceptron (MLP) with an input-hidden-output architecture, trained with back-propagation (plus momentum), needed many times as many patterns to converge on the same data.

The development of the network for another classification problem is shown in figure 2. The total number of presented patterns for the GNG-LLM network was 5400 in this case (CPU time: 17 sec.; see Note 2). An MLP needed considerably more presented patterns (CPU time: 118 sec.).

As a larger classification example, a high-dimensional problem shall be examined: the vowel data from the CMU benchmark collection, which has been investigated with several network models (among them MLPs) by Robinson in his thesis [8]. The data consists of 990 vectors derived from vowels spoken by male and female speakers. 528 vectors from four male and four female speakers are used to train the networks; the remaining 462 frames from four male and three female speakers are for testing. Since training and test data originate from disjoint speaker sets, the task is probably a difficult one.

We observed 100 GNG-LLM networks growing until size 70 (see figure 3). The performance on the test set was checked at sizes 5, 10, ..., 70. The mean misclassification rate was 48 % (compared to the error rates reported by Robinson for the models he investigated). About 9 % of the GNG-LLM networks of size 20 and up had a performance superior to 44 % error, the best result Robinson achieved (he got it with the nearest-neighbor classifier). An important practical aspect is that the GNG-LLM networks needed only about 60 training epochs (see Note 3) to reach their maximum size. Robinson, in contrast, reported that he used 3000 epochs for the models he investigated.

GNG-LLM networks can also be used for function approximation. A simple example (on which we cannot elaborate here due to lack of space) is shown in figure 4. Function approximation with GNG-LLM networks is a field we intend to investigate more closely in the future.

Note 2: CPU-time measurements are always problematic, but we assume they may be useful for some readers. The computations have all been performed on (one processor of) an SGI Challenge L computer. Times on a Sparc 20 are about four times as large.

Note 3: This is equivalent to 60 × 528 = 31,680 single patterns, or 11 min. of SGI Challenge L CPU time. An MLP (one of the sizes Robinson had investigated) needed over 4 hours to converge (and had a test error of 60 %).

Figure 1: A solution of an XOR "problem" found by the described GNG-LLM network; panel (a) shows the output of the GNG-LLM network, panel (b) the decision regions. The data stems from four square regions in the unit square; diagonally opposing squares belong to one class. The generated network consists of only two units, each associated with a local linear mapping. The output of the network (a) can be thresholded to obtain sharp decision regions (b), which have been determined for a square region here. The parameters of this simulation were: ε_b = 0.02, ε_n = 0.0006, λ = 300, ε_m = 0.15, α = 0.5, d = 0.9995.

Figure 2: The development of a solution for a two-class classification problem, shown at (a) 2 units, (b) 7 units, and (c) 18 units. The training data stems from the two approximately u-shaped regions; each region is one class. The parameters of this simulation are identical to those in the previous example.
Figure 3: Performance of GNG-LLM networks on the vowel test data during growth (% misclassifications on the vowel test set, plotted against the number of units: mean error with standard deviation, maximum error, and minimum error). 100 networks have been evaluated and were allowed to grow until size 70. The graph does not show any signs of over-fitting, although the final mean performance of about 48 % error is already reached at size 20. The exact network size does not seem to influence performance critically.

Figure 4: A GNG-LLM network learns to approximate a two-dimensional bell curve. Shown are the training data set (a) and the output of the networks with 3, 15, and 65 units (b, c, d). The last plot (d) has the training data overlaid to ease comparison.

References

[1] B. Fritzke. Fast learning with incremental RBF networks. Neural Processing Letters, 1(1):2-5, 1994.
[2] B. Fritzke. Growing cell structures - a self-organizing network for unsupervised and supervised learning. Neural Networks, 7(9):1441-1460, 1994.
[3] B. Fritzke. A growing neural gas network learns topologies. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems 7 (to appear). MIT Press, Cambridge, MA, 1995.
[4] E. Littmann and H. Ritter. Cascade LLM networks. In I. Aleksander and J. Taylor, editors, Artificial Neural Networks 2, pages 253-257. Elsevier Science Publishers B.V., North-Holland, 1992.
[5] T. M. Martinetz. Competitive Hebbian learning rule forms perfectly topology preserving maps. In ICANN'93: International Conference on Artificial Neural Networks, pages 427-434, Amsterdam, 1993. Springer.
[6] T. M. Martinetz, S. G. Berkovich, and K. J. Schulten. "Neural-gas" network for vector quantization and its application to time-series prediction. IEEE Transactions on Neural Networks, 4(4):558-569, 1993.
[7] H. J. Ritter, T. M. Martinetz, and K. J. Schulten. Topology-conserving maps for learning visuo-motor coordination. Neural Networks, 2:159-168, 1989.
[8] A. J. Robinson. Dynamic Error Propagation Networks. Ph.D. thesis, Cambridge University, Cambridge, 1989.