ARTIFICIAL NEURAL NETWORKS IN PATTERN RECOGNITION

Mohammadreza Yadollahi, Aleš Procházka
Institute of Chemical Technology, Department of Computing and Control Engineering

Abstract

Pre-processing stages such as filtering, segmentation and feature extraction form important steps before the use of artificial neural networks in pattern recognition or feature vector classification. The paper is devoted to the analysis of these pre-processing stages, to the application of artificial neural networks to pattern recognition by Kohonen's method, and to the numerical comparison of the classification results. All proposed algorithms are applied to biomedical image processing in the MATLAB environment.

1 Introduction

Nowadays, artificial neural networks (ANN) are applied in many scientific fields such as medicine, computer science, neuroscience and engineering, and it is expected that they will find extensive application in biomedical systems and engineering. They can also be applied in information processing systems to solve pattern analysis or feature vector classification problems [1], for example. This paper describes the pre-processing stages that precede the use of artificial neural networks: (i) filtering tools to reject noise, (ii) image segmentation to divide the image area into separate regions of interest, and (iii) mathematical methods for the extraction of a feature vector from each image segment. The set of feature vectors finally forms the pattern matrix that serves as the input to the artificial neural network for classification. This process is followed by the application of the ANN for the classification of the feature vectors by unsupervised learning based on Kohonen's method. All algorithms used were designed in the MATLAB environment.

2 Mathematical Background

The mathematical methods used include the discrete Fourier transform for signal analysis, difference equations for digital filtering, and optimization methods for the training of artificial neural networks.

2.1 Discrete Fourier Transform in Signal Analysis

The Discrete Fourier Transform (DFT) is a method used in almost all branches of engineering and science [2]. It is the basic operation for the transform of an ordered sequence of data samples from the time domain into the frequency domain in order to distinguish its frequency components [3]. Discrete Fourier transform algorithms include both the direct and the inverse discrete Fourier transform, the latter converting the result of the direct transform back into the time domain. The discrete Fourier transform of a signal {x(n)}, n = 0, 1, 2, ..., N-1, is defined [4] by the relation

    X(k) = \sum_{n=0}^{N-1} x(n) \, e^{-j k n 2\pi/N}    (1)

where k = 0, 1, ..., N-1. The inverse discrete Fourier transform of a sequence {X(k)}, k = 0, 1, 2, ..., N-1, is defined [4] by the formula

    x(n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k) \, e^{j k n 2\pi/N}    (2)

where n = 0, 1, ..., N-1.
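Equations (1) and (2) can be verified directly in MATLAB, where the built-in fft and ifft routines evaluate the same sums. The following minimal sketch, assuming a short illustrative test sequence of its own choosing, compares the direct evaluation of Eq. (1) with the fft result and reconstructs the sequence by Eq. (2):

    N = 8; n = 0:N-1;
    x = sin(2*pi*n/N) + 0.5*cos(4*pi*n/N);      % illustrative test sequence
    X = zeros(1, N);
    for k = 0:N-1                               % direct DFT, Eq. (1)
        X(k+1) = sum(x .* exp(-1j*k*n*2*pi/N));
    end
    disp(max(abs(X - fft(x))))                  % agrees with fft up to rounding
    xr = zeros(1, N);
    for m = 0:N-1                               % inverse DFT, Eq. (2)
        xr(m+1) = sum(X .* exp(1j*(0:N-1)*m*2*pi/N)) / N;
    end
    disp(max(abs(xr - x)))                      % recovers the original sequence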

2.2 Digital Filters

Filtering is the most common form of signal processing. It is used in many applications including telecommunications, speech processing, biomedical systems and image processing to remove frequency components in certain regions and to improve the magnitude, phase, or group delay in other ranges of the signal spectrum [5]. Finite Impulse Response (FIR) filters [5], or non-recursive filters, have no feedback, and their input-output relation is defined by the equation

    y(n) = \sum_{k=0}^{M} b_k \, x(n-k)    (3)

where {y(n)} and {x(n)} are the output and input sequences respectively and M is the filter order. Fig. 1 displays the given sequence, the output of the FIR filter, and the frequency response of the FIR filter together with the frequency spectrum of the given sequence.

Figure 1: FIR filtering presenting (a) the given simulated sequence formed by the sum of two harmonic functions, (b) the result of its processing by the FIR filter of order M = 60, and (c) the frequency spectrum of the given sequence together with the frequency response of the FIR filter

Infinite Impulse Response (IIR) filters [4], or recursive filters, have feedback from the output to the input and can be described by the following general equation

    y(n) = \sum_{k=1}^{M} a_k \, y(n-k) + \sum_{k=0}^{M} b_k \, x(n-k)    (4)

Owing to the lower number of coefficients necessary to obtain similar accuracy, IIR filters are more efficient and faster and require less storage than FIR filters. On the other hand, while FIR filters are always stable, IIR filters can become unstable [4].
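In MATLAB both filter classes reduce to the filter function applied to the coefficients of Eqs. (3) and (4). The sketch below, assuming the Signal Processing Toolbox and a simulated two-component sequence loosely similar to that of Fig. 1, contrasts an FIR filter of order M = 60 with a low-order IIR alternative; the Butterworth design and the cut-off frequency 0.3 are illustrative choices, not the exact settings of the figure:

    n = 0:199;
    x = sin(0.1*pi*n) + sin(0.7*pi*n);   % simulated sum of two harmonic functions
    b = fir1(60, 0.3);                   % FIR design of order M = 60, Eq. (3)
    yFIR = filter(b, 1, x);              % non-recursive filtering
    [bb, aa] = butter(4, 0.3);           % recursive (IIR) design, Eq. (4)
    yIIR = filter(bb, aa, x);            % similar selectivity with far fewer coefficients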

2.3 Principle of Neural Networks

The following subsections are devoted to the mathematical description of ANNs and to selected problems of unsupervised learning by Kohonen's method for the classification of feature vectors.

2.3.1 Mathematical Description of Neural Networks

An artificial neural network is an adaptive mathematical model or a computational structure designed to simulate a system of biological neurons and to transfer information from its input to its output in a desired way [6]. In the following description, small (italic) letters are used for scalars, small (bold, non-italic) letters for vectors and capital (BOLD, non-italic) letters for matrices. Basic neuron structures [7] include:

Single-Input Neuron (Fig. 2(a)): The scalar input p is multiplied by the scalar weight w. The bias b is then added to the product wp by the summing junction, and the transfer function f is applied to n = wp + b. This transfer function is typically a step function or a sigmoidal function shifted by the value of b, producing the output a.

Multiple-Input Neuron (Fig. 2(b)): The individual inputs forming a vector p = [p_1, p_2, ..., p_R]^T are multiplied by the weights w = [w_{1,1}, w_{1,2}, ..., w_{1,R}] and then fed to the summing junction. The bias b is then added to the product wp, forming the value

    n = w p + b = w_{1,1} p_1 + w_{1,2} p_2 + ... + w_{1,R} p_R + b    (5)

which can be written in the following matrix form

    n = w p + b    (6)

This value is then processed by the non-linear transfer function to form the neuron output

    a = f(w p + b)    (7)

Layer of Neurons (Fig. 2(c)): Each element of the input vector p is connected to each neuron through the coefficients of the weight matrix W,

    W = \begin{bmatrix} w_{1,1} & w_{1,2} & \dots & w_{1,R} \\ w_{2,1} & w_{2,2} & \dots & w_{2,R} \\ \vdots & \vdots & \ddots & \vdots \\ w_{S,1} & w_{S,2} & \dots & w_{S,R} \end{bmatrix}

Figure 2: Different structures of neural networks presenting (a) a single-input neuron with a bias, (b) a multiple-input neuron with a bias, (c) a layer of S neurons, and (d) multiple layers of neurons
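The structures of Fig. 2 translate directly into matrix operations. A minimal sketch of the forward pass of Eq. (7) for one layer and its extension to a second layer follows; the layer sizes and the logistic sigmoid transfer function are chosen here purely for illustration:

    R = 3; S = 4;                        % assumed input and layer dimensions
    p = rand(R, 1);                      % input vector
    W = randn(S, R); b = randn(S, 1);    % weight matrix and bias vector
    f = @(n) 1 ./ (1 + exp(-n));         % logistic sigmoid transfer function
    a1 = f(W*p + b);                     % layer output a = f(Wp + b), Eq. (7)
    W2 = randn(2, S); b2 = randn(2, 1);  % second layer of 2 neurons
    a2 = f(W2*a1 + b2);                  % cascaded layers as in Fig. 2(d)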

The i-th neuron has a summer that gathers its weighted inputs and bias to form its own scalar output n(i) of the S-element net vector n,

    n = W p + b    (8)

This value is then processed by the non-linear transfer function to form the layer output

    a = f(W p + b)    (9)

The row index of each element of the matrix W indicates the destination neuron of the weight, and the column index indicates the source input of that weight [7].

Multiple Layers of Neurons (Fig. 2(d)): Each layer has its own weight matrix W, its own bias vector b, a net input vector n and an output vector a. The weight matrix of the first layer can be denoted by W^(1), the weight matrix of the second layer by W^(2), etc. The network has R inputs, S^(1) neurons in the first layer, S^(2) neurons in the second layer, etc. The outputs of layers one and two are the inputs of layers two and three, so layer 2 can be viewed as a one-layer network with R = S^(1) inputs, S = S^(2) neurons and a weight matrix W^(2) of size S^(2) x S^(1), having the input a^(1) and the output a^(2); the whole network of Fig. 2(d) thus evaluates a^(3) = f^(3)(W^(3) f^(2)(W^(2) f^(1)(W^(1) p + b^(1)) + b^(2)) + b^(3)). The layers of a multilayer network play different roles. A layer that produces the network output is called an output layer, while the other layers are called hidden layers. The three-layer network has one output layer (layer 3) and two hidden layers (layers 1 and 2) [7].

2.3.2 Unsupervised Learning

Data clustering forms a basic application of unsupervised learning, also called learning without an external teacher and target vectors, using a one-layer network with R inputs and S outputs. Each input column vector p = [p_1, p_2, ..., p_R]^T forming the pattern matrix P is classified into a suitable cluster by Kohonen's method using the weight matrix W_{S,R}. The competitive learning algorithm [8] includes the following basic steps, prototyped in the sketch after this list:

- Initialization of the weights to small random values
- Evaluation of the distance D_s of the pattern vector p to the s-th node,

      D_s = \sqrt{\sum_{j=1}^{R} (p_j - W_{s,j})^2}    (10)

  for s = 1, 2, ..., S
- Selection of the minimum-distance node, in other words the winner node with index i
- Update of the weights of the winner node i,

      w_{i,j}^{new} = w_{i,j}^{old} + \eta \, (p_j - w_{i,j}^{old})    (11)

  where \eta is the learning rate
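The competitive learning steps above can be prototyped in a few lines of MATLAB. The sketch assumes a two-dimensional pattern matrix P with two well-separated stand-in groups, a fixed learning rate and a fixed number of passes; the squared distances are compared instead of Eq. (10) itself, which leaves the winner selection unchanged:

    R = 2; S = 3; Q = 100; eta = 0.1;               % assumed sizes and learning rate
    P = [rand(R, Q/2), 2 + rand(R, Q/2)];           % stand-in pattern matrix
    W = 0.1 * rand(S, R);                           % small random initial weights
    for epoch = 1:20
        for q = 1:Q
            p = P(:, q).';
            D = sum((repmat(p, S, 1) - W).^2, 2);   % squared distances, cf. Eq. (10)
            [Dmin, i] = min(D);                     % winner node index i
            W(i, :) = W(i, :) + eta*(p - W(i, :));  % weight update, Eq. (11)
        end
    end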

3 Image Segmentation

Image segmentation [9] forms an important step towards the classification of image regions. The Watershed transform (WT) can contribute to this task, as it can find the watershed ridges in an image: using the gray-scale image as a topographic surface, it assigns zero to all detected ridges and the same unique label to all points inside each separate catchment basin (region). The Distance transform (DT) forms the fundamental part of the Watershed transform used for binary images [9, 10]; it is based on the calculation of the Euclidean distance of each pixel to the nearest pixel with the value 1. The result of this transform [10] is given by

    b_{i,j} = \begin{cases} 0 & \text{for } a_{i,j} = 1 \\ \min_{k,l:\, a_{k,l}=1} \sqrt{(i-k)^2 + (j-l)^2} & \text{for } a_{i,j} = 0 \end{cases}    (12)

where a_{i,j} denotes a pixel of the binary image and a_{k,l} its nearest pixel with the value 1, for i = 1, 2, ..., M and j = 1, 2, ..., N. The application of the formula defined by Eq. (12) is shown in the following example of a binary matrix

    I_bw = [ 1  0  0  0
             1  0  0  0
             0  0  0  0
             0  0  1  0
             1  1  0  0
             0  0  0  0 ]    (13)

Its distance transform yields the matrix

    I_dt = [ 0       1.0000  2.0000  3.0000
             0       1.0000  2.0000  2.2361
             1.0000  1.4142  1.0000  1.4142
             1.0000  1.0000  0       1.0000
             0       0       1.0000  1.4142
             1.0000  1.0000  1.4142  2.2361 ]    (14)

Fig. 3 displays a simulated gray-scale image, its conversion to the binary image, the result of the application of the distance transform, and the final image presenting the image edges obtained by the watershed transform.

Figure 3: The process of segmentation including (a) the simulated gray-scale image, (b) its conversion to the binary image, (c) the result of the evaluation of the distance transform of the complementary binary image, and (d) the watershed transform
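The whole chain of Fig. 3 corresponds to a short sequence of Image Processing Toolbox calls. The sketch below uses a standard test image as a stand-in for the simulated one; the Otsu threshold, the median pre-filtering of the next section and the negation of the distance transform before the watershed evaluation are common choices assumed here rather than the authors' exact settings:

    I  = imread('cameraman.tif');   % stand-in gray-scale image
    I  = medfilt2(I, [3 3]);        % median filtering to reject noise
    bw = im2bw(I, graythresh(I));   % conversion to the binary image
    D  = bwdist(~bw);               % distance transform of the complementary image, Eq. (12)
    L  = watershed(-D);             % watershed transform of the negated distances
    ridges = (L == 0);              % zero labels mark the watershed ridges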

4 Results

The pre-processing stages of image filtering, segmentation and feature extraction were applied to real biomedical images of size 128 x 128 to enable the subsequent classification by the artificial neural networks discussed above. Fig. 4 presents this set of processes applied to a biomedical image together with the results of its median filtering, its segmentation by the Watershed transform, the DFT features of the image segments, and the results of classification using artificial neural networks and Kohonen's learning rule.

Figure 4: Processing stages of a selected biomedical image presenting (a) the given image, (b) the results of its median filtering, (c) the evaluation of the Watershed transform, (d) the plot of the DFT features of the image segments, and (e) the results of classification using artificial neural networks by Kohonen's learning rule with class boundaries

In the present study median filtering has been used and its results compared with other methods. Special attention has been paid to the estimation of image region properties. Both the DFT and selected statistical characteristics have been used to form the feature vectors of all segments of the image. To allow the visualization of results, only two features were selected to form the feature matrix

    P = \begin{bmatrix} p_{1,1} & \dots & p_{1,k} & \dots & p_{1,Q} \\ p_{2,1} & \dots & p_{2,k} & \dots & p_{2,Q} \end{bmatrix}

The matrix W_{S,2} was then used to classify the image features into S classes,

    W = \begin{bmatrix} w_{1,1} & w_{1,2} \\ \vdots & \vdots \\ w_{k,1} & w_{k,2} \\ \vdots & \vdots \\ w_{S,1} & w_{S,2} \end{bmatrix}

The learning process has been based on Kohonen's method for the classification of the feature vectors, with a selected result shown in Fig. 4(e).
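Continuing the segmentation sketch of Section 3, a two-feature pattern matrix of this kind can be assembled segment by segment. The two features used below, an average of low-frequency DFT magnitudes and the standard deviation of the segment pixels, are plausible stand-ins for the DFT and statistical characteristics mentioned above, not the exact features of the study:

    Q = max(L(:));                           % number of labelled segments
    P = zeros(2, Q);                         % feature matrix, one column per segment
    for s = 1:Q
        seg = double(I(L == s));             % pixel values of the s-th region
        F   = abs(fft(seg(:)));              % DFT magnitudes of the segment
        P(1, s) = mean(F(2:min(8, end)));    % low-frequency DFT feature
        P(2, s) = std(seg(:));               % statistical feature
    end
    P = P ./ repmat(max(P, [], 2), 1, Q);    % scale both features to [0, 1]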

The typical structure of every cluster is found by computing the distance of each pattern j to the weights of the corresponding neuron i using the relation

    d(i) = \sqrt{(p_{1,j} - w_{i,1})^2 + (p_{2,j} - w_{i,2})^2}    (15)

The feature vector with the least distance in the cluster points to the typical pattern. By evaluating the mean distance for every cluster by the relation

    MSE = \frac{1}{N} \sum_{i=1}^{N} d(i)    (16)

where N stands for the number of pattern values in the selected class, it is possible to assess the cluster compactness. The quality of the separation process is then evaluated using the mean distances of the patterns in all classes and the distances of their gravity centers. The resulting classification of the image segments is summarized in Table 1.

Table 1: Evaluation of classification.

    No. of segment   Class   Mean distance   Typical cluster
    1                A
    2                A       0.0650          6
    6                A
    3                B       0.0850          3
    4                B
    5                C       0.1607          7
    7                C
    8                C       0.1060          5
    9                C
    10               C
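With the trained weights W of the competitive learning sketch and the feature matrix P above, the evaluation of Eqs. (15) and (16) takes only a few lines; the vector cls, assumed here to hold the winner index of every pattern after training, plays the role of the class column of Table 1:

    for s = 1:S
        idx = find(cls == s);                % patterns classified into the s-th cluster
        if isempty(idx), continue; end       % skip empty clusters
        d = sqrt(sum((P(:, idx) - repmat(W(s, :).', 1, numel(idx))).^2, 1));  % Eq. (15)
        [dmin, k] = min(d);                  % the least distance marks the typical pattern
        fprintf('class %d: typical segment %d, mean distance %.4f\n', ...
                s, idx(k), mean(d));         % mean distance of Eq. (16)
    end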

5 Conclusion

The paper has presented the pre-processing stages of image filtering, segmentation and feature extraction followed by the classification of image segment features by a self-organizing neural network. Further studies will be devoted to the use of different mathematical methods for obtaining compact clusters allowing more efficient classification.

6 Acknowledgements

The work has been supported by the research grant of the Faculty of Chemical Engineering of the Institute of Chemical Technology, Prague No. MSM 6046137306.

References

[1] R. Nayak, L. C. Jain and B. K. H. Ting. Artificial Neural Networks in Biomedical Engineering: A Review. In Proceedings of the Asia-Pacific Conference on Advance Computation, 2001. QUT Digital Repository: http://eprints.qut.edu.au/480/.

[2] Shripad Kulkarni. Discrete Fourier Transform (DFT) by using Vedic Mathematics. October 2008. http://www.vedicmathsindia.org/Vedic%20Math%20IT/5.DFT%20Using%20Vedic%20Math%20Kulkarni.pdf.

[3] Paulo Roberto Nunes de Souza and Mára Regina Labuto Fragoso da Silva. Fourier Transform Graphical Analysis: an Approach for Digital Image Processing. Universidade Federal do Espírito Santo, 2009. http://w3.impa.br/~lvelho/sibgrapi2005/html/p3048/p3048.pdf.

[4] Saeed V. Vaseghi. Multimedia Signal Processing: Theory and Applications in Speech, Music and Communications. John Wiley and Sons, 2007. ISBN: 9780470062012.

[5] Belle A. Shenoi. Introduction to Digital Signal Processing and Filter Design. John Wiley and Sons, 2005. ISBN: 9780471464822.

[6] EEGlossary. Neural Networks: Definition and Recommended Links. May 2009. http://www.eeglossary.com/neural-networks.htm.

[7] Martin T. Hagan, Howard B. Demuth and Mark H. Beale. Neural Network Design. PWS Publishing, 1996. ISBN: 9780534943325.

[8] Michael Negnevitsky. Unsupervised Learning. Lecture 8, Artificial Neural Networks. Pearson Education, 2002. www.neiu.edu/~mosztain/cs335/lecture08.ppt.

[9] R. C. Gonzalez, R. E. Woods and S. L. Eddins. Digital Image Processing Using MATLAB. Prentice Hall, 2004. ISBN-13: 978-0131008595.

[10] Andrea Gavlasová, Aleš Procházka and Jaroslav Poživil. Functional Transforms in MR Image Segmentation. The 3rd International Symposium on Communications, Control and Signal Processing (ISCCSP), Malta, 12-14 March 2008. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4537437.

Ing. Mohammadreza Yadollahi, Prof. Aleš Procházka
Institute of Chemical Technology, Praha
Department of Computing and Control Engineering
Technická 1905, 166 28 Praha 6
Phone: +420 220 444 170, +420 220 444 198
Fax: +420 220 445 053
E-mail: Mohammadreza.Yadollahi@vscht.cz, A.Prochazka@ieee.org