CHAPTER 6 COUNTER PROPAGATION NEURAL NETWORK IN GAIT RECOGNITION

6.1 INTRODUCTION

The counter propagation network (CPN) was developed by Robert Hecht-Nielsen as a means of combining an unsupervised Kohonen layer with a teachable output layer known as the Grossberg layer [James et al., 1991; Athanasios et al., 2004]. The operation of this network is very similar to that of the Learning Vector Quantization (LVQ) network, in that the middle (Kohonen) layer acts as an adaptive look-up table.

6.2 COUNTER PROPAGATION NETWORK

Figure 6.1 gives the flowchart of the CPN. The figure shows the three layers of the CPN: an input layer that reads input patterns from the training set and forwards them to the network, a hidden layer that works in a competitive fashion and associates each input pattern with one of the hidden units, and an output layer that is trained via a teaching algorithm minimizing the mean square error (MSE) between the actual network output and the desired output associated with the current input vector. In some cases a fourth layer is used to normalize the input vectors, but this normalization can easily be performed by the application before the vectors are sent to the Kohonen layer. The training process of the counter propagation network is a two-stage procedure: in the first stage the weights of the synapses between the input and the Kohonen layer are updated, while in the second stage the weights of the synapses between the Kohonen and the Grossberg layer are updated.

Fig.6.1 Flow chart for Counter propagation network
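To make the look-up-table behaviour concrete, the recall phase of a trained CPN can be sketched in a few lines of NumPy-style Python. This is a minimal sketch, not the thesis implementation; the names cpn_forward, W and V are assumptions introduced here for illustration.

    import numpy as np

    def cpn_forward(x, W, V):
        # x: input vector of length K
        # W: input-to-Kohonen weights, one row per hidden neuron (H x K)
        # V: Kohonen-to-Grossberg weights (H x N)
        x = x / np.linalg.norm(x)          # normalize the input vector
        d = np.sum((W - x) ** 2, axis=1)   # distance to every hidden neuron, eq. (6.1)
        winner = np.argmin(d)              # competition: a single winning neuron
        # The winner's output is 1 and all other hidden outputs are 0, so the
        # network output is simply the winner's row of Grossberg weights.
        return V[winner]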

Training of the weights from the input to the hidden nodes proceeds as follows:

Step 1: The synaptic weights of the network between the input and the Kohonen layer are set to small random values in the interval [0, 1].

Step 2: A vector pair (x, y) of the training set is selected at random.

Step 3: The input vector x is normalized.

Step 4: The normalized input vector is sent to the network.

Step 5: In the hidden competitive layer, the distance D_j between the weight vector W_j and the current input vector is calculated for each hidden neuron j according to the equation (6.1):

    D_j = \sum_{i=1}^{K} (x_i - w_{ij})^2    (6.1)

where K is the number of neurons of the input layer and w_{ij} is the weight of the synapse that joins the i-th neuron of the input layer with the j-th neuron of the Kohonen layer.

Step 6: The winning neuron, namely the hidden neuron whose weight vector has the minimum distance D_j from the current input vector, is identified.

Step 7: The synaptic weights between the winner neuron w and the neurons of the input layer are adjusted according to the equation (6.2):

    w_{wi}(t+1) = w_{wi}(t) + \alpha(t) (x_i - w_{wi}(t))    (6.2)

where α(t) is the Kohonen learning rate. The training process starts with an initial learning rate value α₀ that gradually decreases during training according to the equation (6.3):

    \alpha(t) = \alpha_0 (1 - t/T)    (6.3)

where T is the total number of epochs of the training algorithm. A typical initial value for the Kohonen learning rate is α₀ = 0.7.

Step 8: Steps 2 to 7 are repeated until all the training patterns have been processed. For each training pattern p, the distance D_p of its winning neuron is stored for further processing; the storage of this distance is performed before the weight update operation.

Step 9: At the end of each epoch, the training set mean error is calculated according to the equation (6.4):

    E = \frac{1}{P} \sum_{k=1}^{P} D_k    (6.4)

where P is the number of pairs in the training set and D_k is the distance of the winning neuron for the k-th training pattern. The network converges when this error measure falls below a user-supplied tolerance value. The network also stops training when the specified number of iterations has been performed but the error has not yet converged below that value.
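This first training stage can be sketched under the same assumptions as the recall sketch above (NumPy, one weight row per hidden neuron). The function name train_kohonen and its default arguments are illustrative, with α₀ = 0.7 taken from the typical value quoted above.

    import numpy as np

    def train_kohonen(X, H, epochs=100, alpha0=0.7, tol=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.uniform(0.0, 1.0, size=(H, X.shape[1]))  # Step 1: weights in [0, 1]
        for t in range(epochs):
            alpha = alpha0 * (1.0 - t / epochs)          # eq. (6.3): decaying rate
            dists = []
            for x in rng.permutation(X):                 # Steps 2-4: patterns in random order
                x = x / np.linalg.norm(x)                # Step 3: normalize
                d = np.sum((W - x) ** 2, axis=1)         # Step 5: eq. (6.1)
                w = int(np.argmin(d))                    # Step 6: winning neuron
                dists.append(d[w])                       # Step 8: store D_p before the update
                W[w] += alpha * (x - W[w])               # Step 7: eq. (6.2)
            if np.mean(dists) < tol:                     # Step 9: eq. (6.4) convergence test
                break
        return W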

Training of the weights from the hidden to the output nodes proceeds as follows:

Step 1: The synaptic weights of the network between the Kohonen and the Grossberg layer are set to small random values in the interval [0, 1].

Step 2: A vector pair (x, y) of the training set is selected at random.

Step 3: The input vector x is normalized.

Step 4: The normalized input vector is sent to the network.

Step 5: In the hidden competitive layer, the distance between the weight vector and the current input vector is calculated for each hidden neuron according to the equation (6.1).

Step 6: The winning neuron is identified; the output of this node is set to unity, while the outputs of the other hidden nodes are assigned zero values.

Step 7: The connection weights between the winning neuron of the hidden layer and the neurons of the output layer are adjusted according to the equation (6.5):

    v_w(t+1) = v_w(t) + \beta (y - v_w(t))    (6.5)

where v_w is the vector of connection weights from the winning hidden neuron to the output layer and β in the equation (6.5) is the Grossberg learning rate.

Step 8: The above procedure is performed for each training pattern. In this case, the error measure is computed as the mean Euclidean distance (6.6) between the output weights of the winning neuron and the desired output, that is

    E = \frac{1}{P} \sum_{p=1}^{P} D_p = \frac{1}{P} \sum_{p=1}^{P} \sqrt{ \sum_{k=1}^{N} (y_k - w_k)^2 }    (6.6)

where N is the number of neurons of the output layer, y_k is the k-th component of the desired output vector and w_k is the weight of the synapse joining the winning hidden neuron with the k-th output neuron.
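The second stage can be sketched in the same style. Here W is the Kohonen weight matrix produced by the first stage and is left frozen; the name train_grossberg and the default Grossberg learning rate β are again illustrative assumptions.

    import numpy as np

    def train_grossberg(X, Y, W, beta=0.1, epochs=100, tol=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        V = rng.uniform(0.0, 1.0, size=(W.shape[0], Y.shape[1]))  # Step 1
        for _ in range(epochs):
            errs = []
            for p in rng.permutation(len(X)):            # Step 2: patterns in random order
                x = X[p] / np.linalg.norm(X[p])          # Steps 3-4: normalize and present
                d = np.sum((W - x) ** 2, axis=1)         # Step 5: eq. (6.1)
                w = int(np.argmin(d))                    # Step 6: winner output = 1, rest = 0
                V[w] += beta * (Y[p] - V[w])             # Step 7: eq. (6.5)
                errs.append(np.linalg.norm(Y[p] - V[w]))  # one summand of eq. (6.6)
            if np.mean(errs) < tol:                      # Step 8: eq. (6.6) mean error test
                break
        return V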

Figure 6.2 shows the flow of information from the input layer to the hidden layer and on to the output layer. Each frame is segmented and presented to the HMM, and two features are obtained using FLD. These features are presented to the input layer of the network, while the corresponding target values are presented to the output layer. Training of the network is stopped when all the training patterns have been presented to the network.

Fig.6.2 Training the CPN

6.3 EFFECT OF NUMBER OF NODES IN HIDDEN LAYER OF CPN NETWORK

Figure 6.3 presents a bar chart showing the outputs of the CPN for different numbers of nodes in the hidden layer of the CPN topology.

Fig.6.3 Number of nodes in the hidden layer of CPN
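An experiment of this kind can be imitated with the sketches above. The snippet below is purely illustrative and not the thesis code: the random arrays stand in for the two FLD features and the one-hot person labels, and the hidden-layer sizes tried here are arbitrary placeholders for the counts shown in Figure 6.3.

    import numpy as np

    # Random stand-ins for the gait data: 60 patterns, 2 FLD features, 6 persons.
    rng = np.random.default_rng(1)
    X = rng.random((60, 2))
    Y = np.eye(6)[rng.integers(0, 6, size=60)]           # one-hot target vectors

    for H in (5, 10, 20, 40):                            # hypothetical node counts
        W = train_kohonen(X, H)
        V = train_grossberg(X, Y, W)
        out = np.array([cpn_forward(x, W, V) for x in X])
        acc = np.mean(out.argmax(axis=1) == Y.argmax(axis=1))
        print(f"hidden nodes = {H:3d}   recognition rate = {acc:.3f}")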

Fig.6.4 Persons recognition performance

Figure 6.4 shows the outputs of the CPN for the identification of persons. The red curve shows the target values used for training the network, and the outputs during testing are shown in green. The two curves overlap; hence the network outputs and the target outputs are the same.

6.4 SUMMARY

This chapter has presented the implementation of the CPN for identifying a person using the features extracted through the HMM. Chapter 7 presents the results and discussions.