
2- KOHONEN SELF-ORGANIZING MAPS (SOM)

- Self-organizing neural networks assume a topological structure among the cluster units.
- There are m cluster units, arranged in a one- or two-dimensional array; the input signals are n-tuples [Kohonen, 1989a].
- The weight vector for a cluster unit serves as an exemplar of the input patterns associated with that cluster.
- During the self-organization process, the cluster unit whose weight vector matches the input pattern most closely (typically, in terms of the minimum squared Euclidean distance) is chosen as the winner.
- The winning unit and its neighboring units (in terms of the topology of the cluster units) update their weights.
- The weight vectors of neighboring units are not, in general, close to the input pattern.
- The architecture and algorithm that follow can be used to cluster a set of p continuous-valued vectors x = (x_1, ..., x_i, ..., x_n) into m clusters.
- Note that the connection weights do not multiply the signal sent from the input units to the cluster units (unless the dot product measure of similarity is being used).

Architecture

The architecture of the Kohonen self-organizing map is shown in Figure 5.

Figure (5) Kohonen Self-Organized Map

Neighborhoods of the unit designated by # of radius R = 2, 1, and 0 in a one-dimensional topology (with 10 cluster units) are shown in Figure 6.

* * * { * ( * [#] * ) * } * *

Figure (6) Linear array of cluster units

The neighborhoods of radius R = 2, 1, and 0 are shown in Figure 7 for a rectangular grid and in Figure 8 for a hexagonal grid (each with 49 units). In each illustration, the winning unit is indicated by the symbol "#" and the other units are denoted by "*". Note that each unit has eight nearest neighbors in the rectangular grid, but only six in the hexagonal grid. Winning units that are close to the edge of the grid will have some neighborhoods containing fewer units than shown in the respective figure.

Figure (7) Neighborhoods for rectangular grid
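The truncation of neighborhoods at the edges is easy to express in code. The following sketch (an illustrative helper, not from the text) lists the units inside a radius-R neighborhood of winner J in a linear array of m cluster units, clipping at the array boundaries:

```python
def linear_neighborhood(J, R, m):
    """Units within radius R of winner J in a linear array of m cluster units."""
    return list(range(max(0, J - R), min(m - 1, J + R) + 1))

# Winner at unit 5 (0-indexed) in a 10-unit array, as in Figure 6:
for R in (2, 1, 0):
    print(R, linear_neighborhood(5, R, 10))
# 2 [3, 4, 5, 6, 7]
# 1 [4, 5, 6]
# 0 [5]

# Near the edge, the neighborhood contains fewer units:
print(linear_neighborhood(0, 2, 10))   # [0, 1, 2]
```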

Algorithm

1- Initialize the weights w_ij. (Random values may be assigned as one possible choice.)
2- Set the topological neighborhood parameters (e.g., a radius schedule R = 2, 1, 0).
3- Set the learning rate parameter α.
4- For each input vector x, and for each unit j, compute:

   D(j) = Σ_i (w_ij − x_i)²

5- Find the index J such that D(J) is a minimum.
6- For all units j within the specified neighborhood of J, and for all i, update the weights:

   w_ij(new) = w_ij(old) + α[x_i − w_ij(old)]

7- Update the learning rate.
8- Reduce the radius of the topological neighborhood at specified times.
9- Test the stopping condition.

Alternative structures are possible for reducing R and α. The learning rate α is a slowly decreasing function of time (or training epochs). The radius of the neighborhood around a cluster unit also decreases as the clustering process progresses. A code sketch of this algorithm is given after Figure 8.

Figure (8) Neighborhoods for hexagonal grid
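As a concrete reference, steps 1-9 translate into a short NumPy routine. This is a minimal sketch for a one-dimensional topology with a fixed radius; the function name train_som and its parameters are illustrative choices, and the radius-reduction schedule of step 8 is left out for brevity:

```python
import numpy as np

def train_som(X, m, alpha=0.6, radius=0, epochs=100, alpha_decay=0.5, rng=None):
    """Kohonen SOM for a linear array of m cluster units (1-D topology).

    X is a (p, n) array of input vectors; the returned W is (n, m), with
    column j holding the weight vector of cluster unit j.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[1]
    W = rng.random((n, m))                            # step 1: random initial weights
    for _ in range(epochs):
        for x in X:
            D = ((W - x[:, None]) ** 2).sum(axis=0)   # step 4: squared distances D(j)
            J = int(np.argmin(D))                     # step 5: winning unit
            lo, hi = max(0, J - radius), min(m - 1, J + radius)
            for j in range(lo, hi + 1):               # step 6: update the neighborhood
                W[:, j] += alpha * (x - W[:, j])
        alpha *= alpha_decay                          # step 7: reduce the learning rate
        # step 8 (shrinking the radius at specified times) is omitted in this sketch
    return W                                          # step 9: fixed-epoch stopping rule
```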

Random values may be assigned for the initial weights. In the next example the weights are initialized to random values (chosen from the same range of values as the components of the input vectors).

Example-4: A Kohonen self-organizing map (SOM) to cluster four vectors

Let the vectors to be clustered be (1, 1, 0, 0); (0, 0, 0, 1); (1, 0, 0, 0); (0, 0, 1, 1). The maximum number of clusters to be formed is m = 2. Suppose the learning rate (with a geometric decrease) is:

α(0) = 0.6
α(t + 1) = 0.5 α(t)

With only two clusters available, the neighborhood of node J (Step 6) is set so that only one cluster unit updates its weights at each step (i.e., R = 0).

Sol:

Initial weight matrix (column j is the weight vector of cluster unit j, as implied by the distance computations below):

[0.2  0.8]
[0.6  0.4]
[0.5  0.7]
[0.9  0.3]

Initial radius: R = 0
Initial learning rate: α(0) = 0.6

D(j) = Σ_i (w_ij − x_i)²
w_ij(new) = w_ij(old) + α[x_i − w_ij(old)]

Begin training.

For the first vector, (1, 1, 0, 0):

D(1) = (w_11 − x_1)² + (w_21 − x_2)² + (w_31 − x_3)² + (w_41 − x_4)²
     = (0.2 − 1)² + (0.6 − 1)² + (0.5 − 0)² + (0.9 − 0)² = 0.64 + 0.16 + 0.25 + 0.81 = 1.86

D(2) = (w_12 − x_1)² + (w_22 − x_2)² + (w_32 − x_3)² + (w_42 − x_4)²

D(2) = (0.8 − 1)² + (0.4 − 1)² + (0.7 − 0)² + (0.3 − 0)² = 0.04 + 0.36 + 0.49 + 0.09 = 0.98

The input vector is closest to output node 2, so J = 2. The weights on the winning unit are updated, giving the weight matrix

[0.2  0.92]
[0.6  0.76]
[0.5  0.28]
[0.9  0.12]

For the second vector, (0, 0, 0, 1):

D(1) = (0.2 − 0)² + (0.6 − 0)² + (0.5 − 0)² + (0.9 − 1)² = 0.04 + 0.36 + 0.25 + 0.01 = 0.66
D(2) = (0.92 − 0)² + (0.76 − 0)² + (0.28 − 0)² + (0.12 − 1)² = 2.2768

The input vector is closest to output node 1, so J = 1. Update the first column of the weight matrix:

[0.08  0.92]
[0.24  0.76]
[0.20  0.28]
[0.96  0.12]

For the third vector, (1, 0, 0, 0):

D(1) = 1.8656
D(2) = 0.6768

The input vector is closest to output node 2, so J = 2. Update the second column of the weight matrix:

[0.08  0.968]
[0.24  0.304]
[0.20  0.112]
[0.96  0.048]

For the fourth vector, (0, 0, 1, 1):

D(1) = 0.7056

D(2) = 2.7243

The input vector is closest to output node 1, so J = 1. Update the first column of the weight matrix:

[0.032  0.968]
[0.096  0.304]
[0.680  0.112]
[0.984  0.048]

Reduce the learning rate:

α(t + 1) = 0.5 α(t)
α = 0.5 (0.6) = 0.3

The weight update equations are now

w_ij(new) = w_ij(old) + 0.3[x_i − w_ij(old)] = 0.7 w_ij(old) + 0.3 x_i

The weight matrix after the second epoch of training is approximately

[0.016  0.984]
[0.047  0.359]
[0.633  0.055]
[0.992  0.024]

Modifying the adjustment procedure for the learning rate so that it decreases geometrically from 0.6 to 0.01 over 100 iterations (epochs) gives weight matrices that appear to be converging to

[0.0  1.0]
[0.0  0.5]
[0.5  0.0]
[1.0  0.0]

the first column of which is the average of the two vectors placed in cluster 1 and the second column of which is the average of the two vectors placed in cluster 2.
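The whole of Example 4 can be checked with a few lines of NumPy. This is a verification sketch (not part of the original example) that reproduces the first two epochs, including the weight matrices above:

```python
import numpy as np

X = np.array([[1, 1, 0, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 0, 1, 1]], dtype=float)
W = np.array([[0.2, 0.8],                         # initial weight matrix of the example
              [0.6, 0.4],
              [0.5, 0.7],
              [0.9, 0.3]])
alpha = 0.6
for epoch in range(2):
    for x in X:
        D = ((W - x[:, None]) ** 2).sum(axis=0)   # D(j) = sum_i (w_ij - x_i)^2
        J = int(np.argmin(D))                     # winner; R = 0, so only column J learns
        W[:, J] += alpha * (x - W[:, J])          # move the winner toward the input
    alpha *= 0.5                                  # alpha(t+1) = 0.5 alpha(t)
    print(f"after epoch {epoch + 1}:\n{np.round(W, 3)}")
```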

3- LEARNING VECTOR QUANTIZATION

Learning vector quantization (LVQ) [Kohonen, 1989a, 1990a] is a pattern classification method in which each output unit represents a particular class or category. (Several output units should be used for each class.) The weight vector for an output unit is often referred to as a reference (or codebook) vector for the class that the unit represents. During training, it is assumed that a set of training patterns with known classifications is provided, along with an initial distribution of reference vectors (each of which represents a known classification). After training, an LVQ net classifies an input vector by assigning it to the same class as the output unit whose weight vector (reference vector) is closest to the input vector.

Architecture

The architecture of an LVQ neural net, shown in Figure 9, is the same as that of a Kohonen self-organizing map (without a topological structure being assumed for the output units). In addition, each output unit has a known class that it represents.

Figure (9) Learning vector quantization neural net (LVQ)

Algorithm

The motivation for the LVQ algorithm is to find the output unit that is closest to the input vector. Toward that end, if x and w_J belong to the same class, then we move the weights toward the new input vector; if x and w_J belong to different classes, then we move the weights away from this input vector.

The nomenclature we use is as follows:

x          training vector (x_1, ..., x_i, ..., x_n)
T          correct category or class for the training vector
w_j        weight vector for the jth output unit (w_1j, ..., w_ij, ..., w_nj)
C_j        category or class represented by the jth output unit
‖x − w_j‖  Euclidean distance between the input vector and the weight vector for the jth output unit

1- Initialize the reference vectors (several strategies are discussed shortly).
2- Initialize the learning rate, α(0).
3- For each training input vector x, find J so that ‖x − w_J‖ is a minimum.
4- Update w_J as follows:

   if T = C_J, then w_J(new) = w_J(old) + α[x − w_J(old)]
   if T ≠ C_J, then w_J(new) = w_J(old) − α[x − w_J(old)]

5- Reduce the learning rate.
6- Test the stopping condition: the condition may specify a fixed number of iterations or the learning rate reaching a sufficiently small value.
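Steps 1-6 reduce to a short routine. Below is a minimal NumPy sketch of one training epoch; the name lvq_epoch and the argument layout (reference vectors as columns of W, unit classes in C) are illustrative choices, not from the text:

```python
import numpy as np

def lvq_epoch(X, T, W, C, alpha):
    """One epoch of LVQ training.

    X: (p, n) training vectors    T: (p,) their correct classes
    W: (n, m) reference vectors   C: (m,) class represented by each output unit
    """
    for x, t in zip(X, T):
        # step 3: J minimizes the (squared) Euclidean distance ||x - w_j||
        J = int(np.argmin(((W - x[:, None]) ** 2).sum(axis=0)))
        if t == C[J]:
            W[:, J] += alpha * (x - W[:, J])   # step 4: same class, move toward x
        else:
            W[:, J] -= alpha * (x - W[:, J])   # step 4: wrong class, move away from x
    return W
```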

The simplest method of initializing the weight (reference) vectors is to take the first m training vectors and use them as weight vectors; the remaining vectors are then used for training [Kohonen, 1989a]. Another simple method is to assign the initial weights and classifications randomly. A third possibility is to use the self-organizing map [Kohonen, 1989a] to place the weights. Each weight vector is then calibrated by determining the input patterns that are closest to it, finding the class that the largest number of these input patterns belong to, and assigning that class to the weight vector.

Example-5: Learning vector quantization (LVQ): five vectors assigned to two classes

In this example, two reference vectors will be used. The following input vectors represent two classes, 1 and 2:

VECTOR          CLASS
(1, 1, 0, 0)      1
(0, 0, 0, 1)      2
(0, 0, 1, 1)      2
(1, 0, 0, 0)      1
(0, 1, 1, 0)      2

The first two vectors will be used to initialize the two reference vectors. Thus, the first output unit represents class 1 and the second represents class 2 (symbolically, C_1 = 1 and C_2 = 2). This leaves the vectors (0, 0, 1, 1), (1, 0, 0, 0), and (0, 1, 1, 0) as the training vectors. Only one iteration (one epoch) is shown.

Sol:

Initialize weights:
w_1 = (1, 1, 0, 0)
w_2 = (0, 0, 0, 1)

Initialize the learning rate: α = 0.1

For input vector x = (0, 0, 1, 1) with T = 2:

D(1) = (w_11 − x_1)² + (w_21 − x_2)² + (w_31 − x_3)² + (w_41 − x_4)²
     = (1 − 0)² + (1 − 0)² + (0 − 1)² + (0 − 1)² = 4

D(2) = (w_12 − x_1)² + (w_22 − x_2)² + (w_32 − x_3)² + (w_42 − x_4)²

D(2) = (0 − 0)² + (0 − 0)² + (0 − 1)² + (1 − 1)² = 1

J = 2, since x is closer to w_2 than to w_1. Since T = 2 and C_2 = 2, update w_2 as follows:

w_J(new) = w_J(old) + α[x − w_J(old)]
w_2 = (0, 0, 0, 1) + 0.1[(0, 0, 1, 1) − (0, 0, 0, 1)] = (0, 0, 0.1, 1)

For input vector x = (1, 0, 0, 0) with T = 1:

D(1) = (1 − 1)² + (1 − 0)² + (0 − 0)² + (0 − 0)² = 1
D(2) = (0 − 1)² + (0 − 0)² + (0.1 − 0)² + (1 − 0)² = 2.01

J = 1, since x is closer to w_1 than to w_2. Since T = 1 and C_1 = 1, update w_1 as follows:

w_J(new) = w_J(old) + α[x − w_J(old)]
w_1 = (1, 1, 0, 0) + 0.1[(1, 0, 0, 0) − (1, 1, 0, 0)] = (1, 0.9, 0, 0)

For input vector x = (0, 1, 1, 0) with T = 2:

D(1) = (1 − 0)² + (0.9 − 1)² + (0 − 1)² + (0 − 0)² = 2.01
D(2) = (0 − 0)² + (0 − 1)² + (0.1 − 1)² + (1 − 0)² = 2.81

J = 1, since x is closer to w_1 than to w_2. Since T = 2 and C_1 = 1 (T ≠ C_J), update w_1 as follows:

w_J(new) = w_J(old) − α[x − w_J(old)]
w_1 = (1, 0.9, 0, 0) − 0.1[(0, 1, 1, 0) − (1, 0.9, 0, 0)] = (1.1, 0.89, −0.1, 0)

This completes one epoch of training. Reduce the learning rate and test the stopping condition.
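Running Example 5 through the lvq_epoch sketch above reproduces the hand calculation, ending with w_1 = (1.1, 0.89, −0.1, 0):

```python
import numpy as np

W = np.array([[1.0, 0.0],     # columns are w_1 = (1, 1, 0, 0) and w_2 = (0, 0, 0, 1)
              [1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])
C = np.array([1, 2])          # unit 1 represents class 1, unit 2 represents class 2
X = np.array([[0, 0, 1, 1],
              [1, 0, 0, 0],
              [0, 1, 1, 0]], dtype=float)
T = np.array([2, 1, 2])       # correct classes of the three training vectors
W = lvq_epoch(X, T, W, C, alpha=0.1)
print(W[:, 0])   # w_1 -> [ 1.1   0.89 -0.1   0.  ]
print(W[:, 1])   # w_2 -> [ 0.    0.    0.1   1.  ]
```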

Variations:

Several algorithms improve on LVQ; they are called LVQ2 and LVQ2.1 [Kohonen, 1990a], and LVQ3 [Kohonen, 1990b]. In the original LVQ algorithm, only the reference vector that is closest to the input vector is updated. In the improved algorithms, two vectors (the winner and a runner-up) learn if several conditions are satisfied. The idea is that if the input is approximately the same distance from both the winner and the runner-up, then each of them should learn.
