CHAPTER 5 NEURAL NETWORK BASED CLASSIFICATION OF ELECTROGASTROGRAM SIGNALS


5.1 INTRODUCTION

In today's computing world, Neural Networks (NNs) attract the attention of scientists, researchers and technologists across disciplines. A neural network is used to learn patterns and relationships in data. NNs are a convenient tool for solving problems in the field of biomedical engineering and, in particular, for analyzing biological signals. NNs are well suited for real-time applications such as pattern recognition or classification of biological signals because of their remarkable ability to derive meaning from complicated or imprecise data; they can extract patterns and trends that are too complex to be noticed by either humans or other computer techniques. The collective behavior of NNs is similar to that of the human brain: they demonstrate the ability to learn, recall and generalize from training patterns or data. Neural networks derive their computing power from their massively parallel, distributed structure. They have an immense ability to learn complex and non-linear relationships, even from noisy or imprecise information, which makes them well suited for the analysis of biomedical signals. NNs have an advantage over conventional technologies in that they can solve complex problems that have no algorithmic solution, or for which an algorithmic solution is too complex to be found.

NNs are trained by example instead of rules, and the training process can be automated. When used in medical diagnosis, they are not affected by factors such as human fatigue, emotional states and habituation, and they are capable of rapid identification, analysis and diagnosis in real time. In this thesis, neural network models are developed for the classification of Electrogastrogram (EGG) signals. Digestive system disorders are investigated from the EGG data using unsupervised and supervised learning networks, as listed below.

Adaptive Resonance Theory implemented neural network (ART1NN)
Learning Vector Quantization (LVQ)
Artificial Neural Network (ANN) with MRAN algorithm

5.2 LITERATURE REVIEW

Ahsan et al (2011) described the detection of predefined hand gestures from EMG signal features using an Artificial Neural Network (ANN) for complex pattern recognition and classification tasks. The authors used a BPN with the Levenberg-Marquardt training algorithm for gesture detection. Yu-Chien Shiau et al (2011) performed cardiac motion analysis using a BPN for all serial images and for all series of patient images. Elsayad (2009) applied Learning Vector Quantization (LVQ) neural networks to classify arrhythmia from an Electrocardiogram (ECG) dataset. The experimental results recommend the LVQ algorithm for further research on other biosignals.

Coyle (2009) illustrated the role of neural networks in a prediction-based preprocessing framework, referred to as Neural-Time-Series-Prediction-Preprocessing (NTSPP), in an Electroencephalogram (EEG)-based Brain-Computer Interface (BCI). The author also mentioned that NTSPP can improve the potential for employing existing BCI methods with minimal subject-specific parameter tuning, so that the BCI can be deployed autonomously with six different classification approaches. Jingwen Tian et al (2009) presented a web classification mining method based on a Fuzzy Neural Network (FNN) trained with the Levenberg-Marquardt optimizing algorithm to enhance the convergence rate and the classification accuracy. Jiaoying Huang et al (2008) performed weak biosignal processing using an Adaptive Wavelet Probabilistic Neural Network (AWPNN) by extracting features from the original signal; a Probabilistic Neural Network (PNN) was then used to analyze the meaningful features and perform the discrimination tasks. Abdulhamit Subasi (2005) tried several modified BPNN algorithms, such as resilient backpropagation, Levenberg-Marquardt backpropagation and scaled conjugate gradient backpropagation, for training the ANN in the analysis of EEG signals, because the conventional backpropagation algorithm with momentum converges slowly. The author observed that the best performance was obtained for the training set, validation set and separate test set with the selected ANN architecture configuration. Ramanathan et al (2004) classified lung sounds using a neural classifier with wavelet coefficients as inputs; the authors reported that the ANN architecture provided the best performance with an optimum of 40 neurons in the hidden layer. Zhiyue Lin et al (1997) developed Multilayer FeedForward Neural Networks (MFNN) for the classification of digestive disorders with one hidden layer containing a maximum of five hidden nodes, trained using the scaled conjugate gradient backpropagation algorithm.

The performance was evaluated by computing the percentage of correct classification, sensitivity and specificity. With this architecture, 85% correct classification was obtained, with 82% sensitivity and 89% specificity. The authors also mentioned that the MFNN could be further improved through better feature extraction from the EGG and better selection of feature combinations to increase the correct classification of digestive system disorders. Zhiyue Lin et al (1997) reported that a network with 6 or 7 hidden neurons achieves the best result for the classification of EGG signals. It was found that a slow convergence rate and critical user-dependent parameters are the obstacles to differentiating normal and abnormal signals. Zhiyue Lin et al (1995) performed a comparison between gradient descent and conjugate gradient learning algorithms for the classification of the Electrogastrogram. It was concluded that the Scaled Conjugate Gradient (SCG) algorithm is a robust algorithm for the classification of normal and abnormal EGG; it has moderate computational complexity and shows a superlinear convergence rate. Zhiyue Lin et al (1994) proposed the classification of normal and abnormal EGG signals using a backpropagation network. It was concluded that the optimal BPN developed for the ARMA parameters requires 22 input nodes, 15 hidden nodes and 2 output nodes. The main problems include the difficulty in interpreting EGG data and extracting useful information from the EGG. Saibal Dutta et al (2011) used Learning Vector Quantization as a medical diagnostic tool for heart beat categorization. The main objective is to achieve accurate, timely detection of cardiac arrhythmia so that appropriate medical attention can be provided to a patient. The proposed scheme employs a feature extractor coupled with an Artificial Neural Network (ANN) classifier.

The feature extractor is based on a cross-correlation approach, utilizing the cross-spectral density information in the frequency domain. Curilem et al (2010) described a feature selection process applied to EGG processing. The data set is formed by 42 EGG records from functional dyspeptic (FD) patients and 22 from healthy controls. A wrapper-configuration classifier was implemented to discriminate between the two classes. The aim of that work was to compare Artificial Neural Networks (ANN) and Support Vector Machines (SVM) acting as fitness functions of a Genetic Algorithm (GA) that performs feature selection over features extracted from the EGG signals. De Gaetano et al (2009) designed a novel supervised neural network-based algorithm to reliably distinguish between normal and ischemic beats of the same patient in electrocardiographic (ECG) records. The basic idea behind that paper is to consider an ECG digital recording of two consecutive R-wave segments (RRR interval) as a noisy sample of an underlying function to be approximated by a fixed number of Radial Basis Functions (RBF).

5.3 NEURAL NETWORK MODEL

Artificial Neural Networks (ANN) are systems deliberately constructed to make use of organizational principles resembling those of the human brain. They represent a promising new generation of information processing systems. Neural networks are good at tasks such as pattern matching and classification, optimization and data clustering. They contain a large number of highly interconnected processing elements called neurons, which usually operate in parallel and are configured in regular architectures. The collective behavior of a NN, like that of a human brain, demonstrates the ability to learn, recall and generalize from training patterns or data.

NNs are characterized by (i) the pattern of interconnection between neurons, (ii) the learning algorithm and (iii) the activation function. In a NN, each neuron is connected to other neurons by means of directed communication links, each with an associated weight. Each neuron has an internal state called its activity level. Based on the direction of signal flow, networks are classified as feedforward networks and feedback networks. The block diagram of a neuron is shown in Figure 5.1.

Figure 5.1 Model of a Neuron

The three basic elements of the neuronal model are: (i) a set of synapses or connecting links, each characterized by a weight or strength of its own; a signal x_j at the input of synapse j connected to neuron k is multiplied by the weight w_kj; (ii) an adder for summing the input signals; and (iii) an activation function for limiting the amplitude of the neuron output. Neuron k is described mathematically by Equations (5.1) and (5.2):

u_k = \sum_{j=1}^{m} w_{kj} x_j  (5.1)

y_k = \varphi(u_k + b_k)  (5.2)

where,
x_1, x_2, ..., x_m : inputs
w_{k1}, w_{k2}, ..., w_{km} : weights of neuron k
b_k : bias
u_k : output of the adder
\varphi(.) : activation function
y_k : output of the neuron

The weights on the connections between layers have much significance in the working of the neural network. A weight assigned to a connection between two neurons indicates not only the strength of the signal that is fed for aggregation but also the type of interaction between the two neurons. Initializing the network structure is part of what is called the encoding phase of a network. It is possible to start with randomly chosen values for the weights, which are then adjusted appropriately as the network is run through iterations.

The net signal is further processed by an activation (transfer) function \varphi(.). A sigmoidal transfer function, given by Equation (5.3), is normally used:

\varphi(u) = \frac{1}{1 + \exp(-\sigma u)}  (5.3)

where \sigma is the slope parameter. The sigmoidal activation function used in the BPNN algorithm is shown in Figure 5.2.

Figure 5.2 Sigmoidal Activation Function
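To make the neuron model of Equations (5.1) to (5.3) concrete, the short Python sketch below computes the output of a single neuron with a sigmoidal activation. It is an illustrative sketch only; the input, weight, bias and slope values are hypothetical and are not taken from the EGG study.

```python
import numpy as np

def sigmoid(u, slope=1.0):
    """Sigmoidal activation, Equation (5.3): phi(u) = 1 / (1 + exp(-sigma*u))."""
    return 1.0 / (1.0 + np.exp(-slope * u))

def neuron_output(x, w, b, slope=1.0):
    """Single neuron, Equations (5.1)-(5.2): u_k = sum_j w_kj*x_j, y_k = phi(u_k + b_k)."""
    u = np.dot(w, x)                  # adder output, Eq. (5.1)
    return sigmoid(u + b, slope)      # activation applied to u_k + bias, Eq. (5.2)

# Hypothetical example values (not from the EGG data set)
x = np.array([0.5, -1.2, 0.3])        # inputs x_1..x_m
w = np.array([0.8, 0.1, -0.4])        # weights w_k1..w_km
print(neuron_output(x, w, b=0.2))
```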

5.4 ADAPTIVE RESONANCE THEORY (ART1) NETWORK

The first NN architecture considered for classification is the ART1NN, which is designed for clustering binary vectors using unsupervised learning. This network is designed to control the degree of similarity of patterns placed on the same cluster (Raphel Feraud et al 2001). The network produces the clusters by itself if such clusters are identified in the input data, and stores the clustering information about patterns or features without prior information about the possible number and type of clusters. Essentially, the network follows the leader: it generates the first cluster with the first input pattern it receives, and creates a second cluster if the distance of the next pattern from the first cluster exceeds a certain threshold; otherwise the pattern is placed in the first cluster. The same procedure is repeated for all sets of data.

Architecture of ART1

The architecture of an ART1 net (Sivanandam and Deepa, 2007), shown in Figure 5.3, consists of two fields of units, the F1 units and the F2 (cluster) units, together with a reset unit to control the degree of similarity of patterns placed on the same cluster unit. The F1 and F2 layers are connected by two sets of weighted pathways. It is assumed that the ART1 net is operated in fast learning mode, in which the weights reach equilibrium during each learning trial. The architecture of ART1 consists of computational units and supplemental units.

Figure 5.3 Architecture of ART1 Network

Computational Units

The computational portion of ART1 consists of F1 units (input and interface units), F2 units (cluster units), and a reset unit that implements user control over the degree of similarity of patterns placed on the same cluster. Each unit in the F1(a) layer is connected to the corresponding unit in the F1(b) layer. Each unit in F1(a) and F1(b) is connected to the reset unit, which in turn is connected to every F2 unit. Each unit in the F1(b) layer is connected to each unit in the F2 (cluster) layer by two weighted pathways. The F1(b) unit X_i is connected to the F2 unit Y_j by the bottom-up weight b_ij. Similarly, unit Y_j is connected to unit X_i by the top-down weight t_ji. The F2 layer is the competitive layer, in which only the uninhibited node with the largest net input has a nonzero activation.

ART1 Training Algorithm

The following parameters are used in the ART1 algorithm:

L : learning parameter
ρ : vigilance parameter
n : number of components in the input vector
b_ij : bottom-up weights
t_ji : top-down weights
S : binary input vector
X : activation vector for the interface units
||X|| : norm of vector X (sum of the components x_i)

The training algorithm of the ART1 network is as follows.

Step 1 : Initialize parameters: L > 1, 0 < ρ ≤ 1. Initialize the weights:
         0 < b_ij(0) ≤ L / (L - 1 + n),  t_ji(0) = 1
Step 2 : While the stopping condition is false, execute Steps 3 to 14.
Step 3 : For each training input, do Steps 4 to 13.
Step 4 : Set the activation of all F2 units to zero. Set the activation of the F1(a) units to the input vector S.

Step 5 : Compute the norm of S using Equation (5.4):
         \| S \| = \sum_i S_i  (5.4)
Step 6 : Send the input signal from the F1(a) to the F1(b) layer: x_i = S_i.
Step 7 : For each F2 node that is not inhibited (y_j ≠ -1), compute y_j using Equation (5.5):
         y_j = \sum_i b_{ij} x_i  (5.5)
Step 8 : When reset is true, execute Steps 9 to 12.
Step 9 : Find J such that y_J ≥ y_j for all nodes j. If y_J = -1, then all nodes are inhibited and this pattern cannot be clustered.
Step 10 : Recompute the activations X of F1(b) using Equation (5.6):
          x_i = S_i t_{Ji}  (5.6)
Step 11 : Compute the norm of vector X using Equation (5.7):
          \| X \| = \sum_i x_i  (5.7)
Step 12 : Test for reset. If ||X|| / ||S|| < ρ, then set y_J = -1 (inhibit node J) and execute Step 7. If ||X|| / ||S|| ≥ ρ, then proceed to Step 13.

Step 13 : Update the weights for node J using Equations (5.8) and (5.9):
          b_{iJ}(new) = L x_i / (L - 1 + \| X \|)  (5.8)
          t_{Ji}(new) = x_i  (5.9)
Step 14 : Test for the stopping condition. The stopping condition corresponds to no units being reset.
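A minimal Python sketch of the ART1 clustering procedure described in Steps 1 to 14 is given below. It follows the fast-learning equations (5.4) to (5.9); the binary example patterns, the maximum number of clusters and the vigilance value are hypothetical and are included only to make the sketch runnable.

```python
import numpy as np

def art1_train(patterns, rho=0.6, L=2.0, max_clusters=10, epochs=5):
    """Fast-learning ART1 clustering of binary vectors (Steps 1-14, Eqs. 5.4-5.9)."""
    n = patterns.shape[1]
    b = np.full((max_clusters, n), L / (L - 1 + n))   # bottom-up weights b_ij(0)
    t = np.ones((max_clusters, n))                    # top-down weights t_ji(0)
    labels = np.zeros(len(patterns), dtype=int)

    for _ in range(epochs):
        for p, s in enumerate(patterns):
            y = b @ s                                  # Eq. (5.5): net input of the F2 nodes
            inhibited = np.zeros(max_clusters, dtype=bool)
            while True:
                y_masked = np.where(inhibited, -1.0, y)
                J = int(np.argmax(y_masked))           # Step 9: winning F2 node
                if y_masked[J] == -1.0:
                    break                              # all nodes inhibited, pattern unclustered
                x = s * t[J]                           # Eq. (5.6): interface activations
                if x.sum() / max(s.sum(), 1) >= rho:   # Step 12: vigilance test
                    b[J] = L * x / (L - 1 + x.sum())   # Eq. (5.8)
                    t[J] = x                           # Eq. (5.9)
                    labels[p] = J
                    break
                inhibited[J] = True                    # reset: inhibit J and try the next node
    return labels

# Hypothetical binary patterns (not EGG data)
demo = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]])
print(art1_train(demo, rho=0.6))
```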

In the ART1 architecture, the learning parameter L is fixed at the typical value of 2, and the classification of normal and abnormal subjects is performed for different values of the vigilance parameter ρ (0 < ρ ≤ 1).

Table 5.1 EGG Classification Accuracy (%) for Different Values of ρ in the ART1 Network

As shown in Table 5.1, for ρ = 0.6 the classification percentage is found to be maximum, at 68% for normal and 71% for abnormal subjects. From the dataset of 1000 samples, five groups of 200 samples each are selected as shown in Table 5.2 and applied to the ART1 network for classification.

Table 5.2 EGG Classification using the ART1 Network

As a result, the ART1 network correctly classified 696 samples out of 1000, a classification accuracy of 69.6%, covering both normal and abnormal subjects. To be consistent with the previous studies, a confusion matrix is formed for comparison and the performance measures are computed for the data set of 500 samples.

Confusion Matrix for ART1NN

The confusion matrices formed for the signals acquired in the laboratory setup, with different compositions as in Table 3.5, are tabulated in Table 5.3 and Table 5.4 for different sample sets.

Table 5.3 Confusion Matrix Generated Using ART1NN for 200 and 300 Samples

Table 5.4 Confusion Matrix Generated Using ART1NN for 400 and 500 Samples

Table 5.5 Performance Measures for ART1NN

The precision, sensitivity, specificity, F-measure, computation time and classification accuracy are listed in Table 5.5. For the 500-sample set, an average of 71% sensitivity, 95% specificity and 69.5% classification accuracy is observed using the ART1 neural network.
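The performance measures reported in Tables 5.5, 5.9 and 5.15 can be computed directly from a two-class confusion matrix. The Python sketch below shows one way to do this; the example counts are hypothetical and do not correspond to the tabulated EGG results.

```python
def performance_measures(tp, fp, fn, tn):
    """Precision, sensitivity, specificity, F-measure and accuracy from confusion-matrix counts."""
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)              # true positive rate (recall)
    specificity = tn / (tn + fp)              # true negative rate
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    return precision, sensitivity, specificity, f_measure, accuracy

# Hypothetical counts for a 500-sample set (not the thesis data)
print(performance_measures(tp=180, fp=20, fn=30, tn=270))
```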

5.5 LEARNING VECTOR QUANTIZATION (LVQ) NETWORK

The second architecture used for classification is the LVQ network. One of the common applications of competitive learning is adaptive vector quantization for the compression of data such as biosignals, speech and images. In this approach, a given set of EGG data is grouped into n groups or templates, so that later an encoded version of the corresponding template of any input vector can be used to represent the vector, as opposed to using the vector itself. Vector quantization is a technique whereby the input space is divided into a number of distinct regions and a code vector is defined for each region. LVQ is defined for adaptive pattern classification: the class information is used to fine-tune the code vectors so as to improve the quality of the classifier decision regions. It is assumed that a set of training patterns with known classifications is provided, along with an initial distribution of reference vectors.

Learning Vector Quantization (LVQ) is a method for training competitive layers in a supervised manner. A competitive layer automatically learns to classify input vectors; however, the classes that the competitive layer finds depend only on the distances between input vectors. If two input vectors are very similar, the competitive layer places them in the same class; there is no mechanism to determine whether any two input vectors belong to the same class or to different classes. LVQ networks learn to classify input vectors into target classes chosen by the user.

Architecture of LVQ

Figure 5.4 depicts the architecture of the LVQ network (Laurene Fausett 2004). The architecture of an LVQ net is similar to that of a Kohonen self-organizing map without a topological structure. In addition, each output unit is assigned to a known class.

Figure 5.4 Architecture of LVQ Network

The LVQ network has a first competitive layer and a second linear layer. The competitive layer learns to classify input vectors, and the linear layer transforms the competitive layer's classes into the target classifications defined by the user. The classes learned by the competitive layer are referred to as subclasses and the classes of the linear layer as target classes. Both the competitive and linear layers have one neuron per (sub or target) class. To be consistent with the previous studies, a confusion matrix is formed for comparison and the performance measures are computed for the data set of 500 samples.

LVQ Training Algorithm

The LVQ training algorithm is as follows.

Step 1 : Assign the first m input vectors as reference vectors. Initialize the learning rate α.
Step 2 : While the stopping condition is false, do Steps 3 to 7.
Step 3 : For each training input vector X, do Steps 4 and 5.
Step 4 : Find the winner j so that ||X - W_j|| is minimum.
Step 5 : Update W_j. If the winner represents the current class, i.e. D = C_j, then update using Equation (5.10):
         W_j(new) = W_j(old) + α [X - W_j(old)]  (5.10)
         If D ≠ C_j, then update using Equation (5.11):

         W_j(new) = W_j(old) - α [X - W_j(old)]  (5.11)
Step 6 : Reduce the learning rate, as shown in Equation (5.12):
         α(new) = α(old) / (1 + α(old))  (5.12)
Step 7 : Test the stopping condition. The stopping condition is the learning rate α reaching a sufficiently small value.
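The update rules of Equations (5.10) to (5.12) can be summarized in a few lines of Python, as in the sketch below; the reference vectors, class labels and learning-rate value are hypothetical and serve only to illustrate the procedure.

```python
import numpy as np

def lvq_train(X, targets, W, W_labels, alpha=0.1, epochs=10):
    """LVQ1 training (Steps 1-7, Eqs. 5.10-5.12): move the winning code vector
    toward the input if its class matches the target, away from it otherwise."""
    W = W.copy()
    for _ in range(epochs):
        for x, d in zip(X, targets):
            j = np.argmin(np.linalg.norm(W - x, axis=1))   # Step 4: winning reference vector
            if d == W_labels[j]:
                W[j] += alpha * (x - W[j])                  # Eq. (5.10)
            else:
                W[j] -= alpha * (x - W[j])                  # Eq. (5.11)
        alpha = alpha / (1 + alpha)                         # Eq. (5.12): decay the learning rate
    return W

# Hypothetical two-class example (not EGG data)
X = np.array([[0.0, 0.2], [0.1, 0.1], [0.9, 1.0], [1.0, 0.8]])
targets = np.array([0, 0, 1, 1])
W0 = np.array([[0.2, 0.2], [0.8, 0.9]])                     # initial reference vectors
print(lvq_train(X, targets, W0, W_labels=np.array([0, 1]), alpha=0.3, epochs=5))
```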

Table 5.6 Comparison of LVQ Network Performance for Varying α

The LVQ network is evaluated with case IV of 500 samples of EGG data from the acquired EGG database, which includes normal subjects and different types of abnormal subjects. The network is trained with different percentages of training vectors and tested with different sets of test vectors for learning rates of α = 0.9, α = 0.6, α = 0.3 and α = 0.1. Table 5.6 lists the percentage of correct classification, along with the execution time, for different percentages of training and testing data and different values of the learning rate α. From Table 5.6, it is found that for training fractions of 60% and above, the efficiency is around 92%. It is observed that the classification efficiency is independent of the learning rate and that the execution time increases as the percentage of training vectors increases. So, for the best performance, the LVQ network is trained with 60% or more of the samples.

Confusion Matrix for the LVQ Network

The confusion matrices formed for the signals acquired in the laboratory setup, with different compositions as in Table 3.5, are tabulated in Table 5.7 and Table 5.8 for different sample sets.

Table 5.7 Confusion Matrix Generated Using the LVQ Network for 200 and 300 Samples

Table 5.8 Confusion Matrix Generated Using the LVQ Network for 400 and 500 Samples

Table 5.9 Performance Measures for the LVQ Network

The precision, sensitivity, specificity, F-measure, computation time and classification accuracy are listed in Table 5.9. For the 500-sample set, an average of 91% sensitivity, 98.4% specificity and 92% classification accuracy is observed using the LVQ network.

5.6 BPNN WITH MINIMAL RESOURCE ALLOCATION NETWORK ALGORITHM

The determination of the various parameters associated with a NN is not straightforward, and finding the optimal configuration is a very time- and memory-consuming process. Therefore, instead of selecting the architecture randomly or by trial and error, the MRAN algorithm is used to find the minimum number of neurons to be fixed in the hidden layer for maximum efficiency of the BPNN. The most widely used NN architecture is the Multilayer Perceptron (MLP) trained using the Backpropagation (BP) algorithm, a gradient descent algorithm that tries to minimize the average squared error of the network.

Minimal Resource Allocation Network

MRAN is a sequential learning radial basis function neural network which combines the growth criterion of the Resource Allocating Network (RAN) of Platt with a pruning strategy based on the relative contribution of each hidden unit to the overall network output. The resulting network leads toward a minimal topology for the RAN. Radial Basis Function Neural Networks (RBFNN) are found to be well suited for function approximation and pattern recognition due to their topological structure and their ability to reveal how learning proceeds in an explicit manner (Tao 1993 and Musavi et al 1992). In the radial basis function (RBF) network implementation, the basis functions are usually chosen as Gaussian, and the number of hidden units (that is, the centres and widths of the Gaussian functions) is fixed based on the properties of the input data. The weights connecting the hidden and output units are estimated by a linear least squares method. The disadvantage of this approach is that it is not suitable for sequential learning and usually results in too many hidden units. A significant contribution that overcomes these drawbacks was made by Platt through the development of an algorithm that adds hidden units to the network based on the novelty of the new data. The algorithm is suitable for sequential learning and is based on the idea that the number of hidden units should correspond to the complexity of the underlying function as reflected in the observed data. The resulting network is called a Resource Allocating Network (RAN); it starts with no hidden units and grows by allocating new hidden units based on the novelty of the observations that arrive sequentially. If an observation has no novelty, the existing parameters of the network are adjusted by a Least Mean Squares (LMS) algorithm to fit that observation.

Minimal Resource Allocation Network (MRAN) Architecture

The structure of RAN is the same as that of the RBF network (Lu Yingwei, 1998) and is shown in Figure 5.5. Each hidden unit in the network has two parameters, a center (µ) and a width (σ), associated with it.

Figure 5.5 Architecture of MRAN Network

The activation function of the hidden units is radially symmetric in the input space, and the output of each hidden unit depends only on the radial distance between the input vector and the center parameter µ of that hidden unit. The weights between the input and hidden layers are w_11 to w_np, and the weights between the hidden and output layers are v_11 to v_pm. The response of each hidden unit is scaled by its connecting weights to the output units and then summed to produce the overall network output. The output of the network is given by Equations (5.13) and (5.14).

f_j(X) = v_{0j} + \sum_{k=1}^{p} v_{kj} \varphi_k(X), \quad j = 1, \ldots, m  (5.13)

and

\varphi_k(X) = \exp\left( -\frac{1}{\sigma_k^2} \| X - \mu_k \|^2 \right)  (5.14)

where,
\varphi_k(X) : response of the k-th hidden unit
v_{kj} : weight connecting the k-th hidden unit to the j-th output unit
\mu_k : center of the k-th hidden neuron
\sigma_k : width of the k-th hidden neuron
v_{0j} : bias term

The learning process of RAN involves the allocation of new hidden units as well as the adaptation of the network parameters. The network begins with no hidden units. As input-output data (x_n, y_n) are received during training, some of them are used for generating new hidden units. The decision as to whether an input-output pair should give rise to a new hidden unit is made using the two conditions given by Equations (5.15) and (5.16):

\| x_n - \mu_{nr} \| > e_n  (5.15)

\| y_n - f(x_n) \| > e_{min}  (5.16)

where,
x_n : input data
y_n : output data
μ_nr : center nearest to x_n
e_n, e_min : thresholds to be selected appropriately

If the above two conditions are satisfied, the data is deemed to have novelty and a new hidden unit is added. The first condition says that the input must be far away from all existing centers, and the second condition says that the error between the network output and the target output must be significant. The value of e_min represents the desired approximation accuracy of the network output, and the distance threshold e_n represents the scale of resolution in the input space. The algorithm begins with e_n = e_max, where e_max is chosen as the largest scale of interest in the input space, typically the entire input space of nonzero probability. The distance threshold e_n is decayed exponentially as

e_n = max\{ e_{max} \gamma^n, e_{min} \}

where 0 < γ < 1 is a decay constant, and e_n is decayed until it reaches e_min. The exponential decay of the distance criterion allows fewer basis functions with large widths (smoother basis functions) initially; with an increasing number of observations, more basis functions with smaller widths are allocated to fine-tune the approximation.

Figure 5.6 Flow Chart of MRAN Algorithm

The parameters associated with the new hidden unit are given by Equations (5.17) to (5.19):

v_{k+1} = y_n - f(x_n)  (5.17)

\mu_{k+1} = x_n  (5.18)

\sigma_{k+1} = \kappa \| x_n - \mu_{nr} \|  (5.19)

where κ is an overlap factor that determines the amount of overlap of the responses of the hidden units in the input space. The values of e_max and e_min are chosen as 0.4 and 0.2 respectively. The flowchart of the MRAN algorithm is shown in Figure 5.6. It describes how the network output is computed and compared with the actual value to obtain the error. If the error satisfies the criteria for adding a new neuron, a new hidden neuron is added; otherwise the weight, center and width of the existing neurons are adjusted accordingly. When the pruning criteria are satisfied by hidden neurons, those neurons are pruned; otherwise the training ends.
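A simplified Python sketch of the RAN growth step described by Equations (5.13) to (5.19) is given below. It covers only the novelty check and the allocation of a new hidden unit; the parameter adaptation and pruning steps of full MRAN are omitted. The threshold values are the ones quoted above (e_max = 0.4, e_min = 0.2) and the decay constant 0.9 is taken from the experiments that follow, while the overlap factor and the input data are hypothetical.

```python
import numpy as np

class SimpleRAN:
    """Growth step of a resource allocating RBF network (Eqs. 5.13-5.19)."""

    def __init__(self, e_max=0.4, e_min=0.2, gamma=0.9, kappa=0.5):
        self.centers, self.widths, self.weights = [], [], []
        self.v0 = 0.0                                    # bias term
        self.e_max, self.e_min = e_max, e_min
        self.gamma, self.kappa = gamma, kappa

    def output(self, x):
        # Eqs. (5.13)-(5.14): weighted sum of Gaussian responses plus bias
        f = self.v0
        for mu, sigma, v in zip(self.centers, self.widths, self.weights):
            f += v * np.exp(-np.sum((x - mu) ** 2) / sigma ** 2)
        return f

    def observe(self, x, y, n):
        e_n = max(self.e_max * self.gamma ** n, self.e_min)    # decayed distance threshold
        err = y - self.output(x)                               # network error for this sample
        if self.centers:
            d_nr = min(np.linalg.norm(x - mu) for mu in self.centers)
        else:
            d_nr = np.inf
        if d_nr > e_n and abs(err) > self.e_min:               # Eqs. (5.15)-(5.16): novelty test
            self.weights.append(err)                           # Eq. (5.17)
            self.centers.append(x.copy())                      # Eq. (5.18)
            width = self.kappa * (d_nr if np.isfinite(d_nr) else 1.0)
            self.widths.append(width)                          # Eq. (5.19)
        # otherwise full MRAN would adapt the existing parameters (e.g. by LMS); omitted here

# Hypothetical 1-D data stream (not EGG features)
net = SimpleRAN()
for n, (x, y) in enumerate([(np.array([0.1]), 0.3), (np.array([0.9]), 0.8), (np.array([0.5]), 0.5)]):
    net.observe(x, y, n)
print(len(net.centers), "hidden units allocated")
```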

Training of the Minimal Resource Allocation Network (MRAN)

The MRAN network is trained with the EGG database. The target is assigned as 1 for the correct class position and 0 for the wrong class position. The MRAN algorithm determines the minimum number of Hidden Layer Neurons (HLN) required for maximum efficiency.

Figure 5.7 Classification using the MRAN Algorithm for Varying Decay Constant

Figure 5.7 represents the consistency of the MRAN network in the classification of disorders for different values of the decay constant with respect to the number of hidden layer neurons. It is observed that when MRAN is trained with fewer than 9 HLN, the percentage of classification is poor. When the HLN is 15 and above, the classification saturates at 90% for a decay constant of 0.9. It is also found that the classification increases approximately linearly with the decay constant for 15 HLN. The number of HLN is varied from 9 to 18 to fix the HLN in the BPNN architecture that gives the maximum percentage of classification. Table 5.10 tabulates the different trials at which MRAN gives different numbers of neurons and the corresponding efficiency. From the MRAN algorithm, the required number of neurons in the hidden layer is determined for maximum efficiency. The network is tested with 350 samples. The table shows that the network with 15 hidden neurons achieves the maximum efficiency; it also shows the number of samples classified correctly under each disease.

Table 5.10 Fixation of the Number of HLN using the MRAN Algorithm

Thus, the configuration of the network is fixed using the MRAN algorithm. This architecture is then used for the classification of digestive system disorders. Different training algorithms, namely trainrp, trainoss, traingdx, trainlm, trainscg, trainbfg, traincgb, traincgp and traincgf, defined in Appendix 4, are applied for classification. The performances of the training algorithms are compared with respect to the percentage of correct classification.

Performance Comparison of Training Algorithms in the BP-MRAN Network

The performance comparison of the different training algorithms for normal, bradygastria, dyspepsia, nausea, tachygastria, ulcer and vomiting subjects is shown in Figures 5.8 to 5.14 respectively. Each graph is plotted between the error goal and the number of epochs. From all the plots, it is observed that for the training algorithms trainrp, traingdx and traincgp, the number of epochs increases for error goals of 0.1, 0.01 and 0.001.

Figure 5.8 Error vs Number of Epochs for Normal Subjects

Figure 5.9 Error vs Number of Epochs for Bradygastria Subjects

Figure 5.10 Error vs Number of Epochs for Dyspepsia Subjects

Figure 5.11 Error vs Number of Epochs for Nausea Subjects

Figure 5.12 Error vs Number of Epochs for Tachygastria Subjects

Figure 5.13 Error vs Number of Epochs for Ulcer Subjects

Figure 5.14 Error vs Number of Epochs for Vomiting Subjects

Using this BP-MRAN architecture, classification of the EGG subjects is performed with different learning rate (LR, α) and momentum factor (MF, β) values for each training algorithm. The maximum efficiency of each algorithm for varying learning rate and momentum factor is shown in Table 5.11.

Table 5.11 Classification Accuracy of EGG for Different Training Algorithms for Varying α and β (α - Learning Rate, β - Momentum Factor, * - MF is not applicable)

From Table 5.11, it is observed that the classification of EGG subjects with trainrp is 97% for a LR and MF of 0.6. The trainrp algorithm exhibits better performance when compared with the other training algorithms. It is also observed that a maximum classification of 61%, with a LR of 0.2 and MF of 0.8, is obtained for trainoss.

The training algorithm traingdx achieved a maximum classification of 94% for a LR and MF of 0.4, whereas trainlm achieved a maximum of 56% classification for a LR of 0.4 and MF of 0.6. The trainscg algorithm achieved a maximum classification of 65% with LR = 0.4 and MF = 0.8. The training algorithm trainbfg showed a maximum classification of 63% for a LR of 0.6 and MF of 0.8, traincgb achieved a maximum classification of 62% for a LR and MF of 0.6, traincgp achieved a maximum classification of 93% for a LR of 0.8 and MF of 0.4, and traincgf achieved a maximum classification of 60% for a LR of 0.6 and MF of 0.8. Table 5.12 shows the efficiency of each training algorithm in the BPNN during the test phase. The targets are coded as 0s and 1s, and the total number of samples used for classification is 350. The table shows the number of samples classified correctly under each disorder when using the BP-MRAN network.

Table 5.12 Performance Comparison of Training Algorithms for the BP-MRAN Network

From Table 5.12, it is concluded that the training algorithms trainrp, traingdx and traincgp give the maximum efficiency compared to the other algorithms. MRAN, used in combination with the BPNN, decides the number of HLN needed to achieve the maximum classification efficiency. To be consistent with the previous studies, a confusion matrix is formed for comparison and the performance measures are computed for the data set of 500 samples.

Confusion Matrix for the BP-MRAN Network

The confusion matrices formed for the signals acquired in the laboratory setup, with different compositions as in Table 3.5, are tabulated in Table 5.13 and Table 5.14 for different sample sets.

5.7 COMPARISON OF NEURAL NETWORK ARCHITECTURES

The performance of the three architectures used in this thesis to detect abnormalities in the EGG signals is discussed here. The sensitivity, specificity and classification accuracy of the three architectures are tabulated in Table 5.16. The sensitivity and specificity of the architectures are plotted in Figure 5.15 and Figure 5.16 respectively, and the classification accuracy of the three architectures is compared in Figure 5.17. For the ART1NN, LVQNN and BP-MRAN networks, the classification accuracy obtained is 69.5%, 92.0% and 96% respectively.

Table 5.13 Confusion Matrix Generated Using the BP-MRAN Network for 200 and 300 Samples

Table 5.14 Confusion Matrix Generated Using the BP-MRAN Network for 400 and 500 Samples

Table 5.15 Performance Measures for the BP-MRAN Network

The precision, sensitivity, specificity, F-measure, computation time and classification accuracy are listed in Table 5.15. For the 500-sample set, an average of 94% sensitivity, 98.5% specificity and 96% classification accuracy is observed using the BP-MRAN network.

Table 5.16 Comparison of Neural Network Architectures (500 Samples)

S. No.  Architecture   Sensitivity %   Specificity %   Classification Accuracy %
1.      ART1 NN        71              95              69.5
2.      LVQ NN         91              98.4            92.0
3.      BP-MRAN NN     94              98.5            96.0

Figure 5.15 Sensitivity of the Different Architectures

Figure 5.16 Specificity of the Different Architectures

Figure 5.17 Classification Accuracy of the Different Architectures

5.8 CONCLUSION

In this chapter, three ANN architectures were trained and tested to classify EGG signals. ART1NN, an unsupervised network, is used to classify the EGG as normal or abnormal. The LVQ network, a supervised method that uses a competitive layer to improve the classifier decision, is investigated next. Finally, a BPNN using supervised learning was implemented; to maximize the efficiency and minimize the computation time, the MRAN algorithm is applied to fix its architecture. Nine training algorithms are used to train the BP-MRAN network and their performance is compared. It is observed that trainrp, traingdx and traincgp give maximum classifications of 96.28%, 94% and 92.57% respectively. For comparison with other methods, sensitivity and specificity analyses are also carried out, and the maximum classification accuracy of 96% is obtained for BP-MRAN trained with the resilient backpropagation algorithm. Improvements of 14% and 10% are observed using BP-MRAN with trainrp when compared with Chacon et al (2009), who used a BPANN with trainrp, and Curilem et al (2010), who used an SVM with GA.


More information

Optimizing Number of Hidden Nodes for Artificial Neural Network using Competitive Learning Approach

Optimizing Number of Hidden Nodes for Artificial Neural Network using Competitive Learning Approach Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 4, Issue. 5, May 2015, pg.358

More information

MLPQNA-LEMON Multi Layer Perceptron neural network trained by Quasi Newton or Levenberg-Marquardt optimization algorithms

MLPQNA-LEMON Multi Layer Perceptron neural network trained by Quasi Newton or Levenberg-Marquardt optimization algorithms MLPQNA-LEMON Multi Layer Perceptron neural network trained by Quasi Newton or Levenberg-Marquardt optimization algorithms 1 Introduction In supervised Machine Learning (ML) we have a set of data points

More information

Pattern Recognition. Kjell Elenius. Speech, Music and Hearing KTH. March 29, 2007 Speech recognition

Pattern Recognition. Kjell Elenius. Speech, Music and Hearing KTH. March 29, 2007 Speech recognition Pattern Recognition Kjell Elenius Speech, Music and Hearing KTH March 29, 2007 Speech recognition 2007 1 Ch 4. Pattern Recognition 1(3) Bayes Decision Theory Minimum-Error-Rate Decision Rules Discriminant

More information

PERFORMANCE COMPARISON OF BACK PROPAGATION AND RADIAL BASIS FUNCTION WITH MOVING AVERAGE FILTERING AND WAVELET DENOISING ON FETAL ECG EXTRACTION

PERFORMANCE COMPARISON OF BACK PROPAGATION AND RADIAL BASIS FUNCTION WITH MOVING AVERAGE FILTERING AND WAVELET DENOISING ON FETAL ECG EXTRACTION I J C T A, 9(28) 2016, pp. 431-437 International Science Press PERFORMANCE COMPARISON OF BACK PROPAGATION AND RADIAL BASIS FUNCTION WITH MOVING AVERAGE FILTERING AND WAVELET DENOISING ON FETAL ECG EXTRACTION

More information

Proceedings of the 2016 International Conference on Industrial Engineering and Operations Management Detroit, Michigan, USA, September 23-25, 2016

Proceedings of the 2016 International Conference on Industrial Engineering and Operations Management Detroit, Michigan, USA, September 23-25, 2016 Neural Network Viscosity Models for Multi-Component Liquid Mixtures Adel Elneihoum, Hesham Alhumade, Ibrahim Alhajri, Walid El Garwi, Ali Elkamel Department of Chemical Engineering, University of Waterloo

More information

Research Article International Journals of Advanced Research in Computer Science and Software Engineering ISSN: X (Volume-7, Issue-6)

Research Article International Journals of Advanced Research in Computer Science and Software Engineering ISSN: X (Volume-7, Issue-6) International Journals of Advanced Research in Computer Science and Software Engineering Research Article June 17 Artificial Neural Network in Classification A Comparison Dr. J. Jegathesh Amalraj * Assistant

More information

Neural Networks (pp )

Neural Networks (pp ) Notation: Means pencil-and-paper QUIZ Means coding QUIZ Neural Networks (pp. 106-121) The first artificial neural network (ANN) was the (single-layer) perceptron, a simplified model of a biological neuron.

More information

II. ARTIFICIAL NEURAL NETWORK

II. ARTIFICIAL NEURAL NETWORK Applications of Artificial Neural Networks in Power Systems: A Review Harsh Sareen 1, Palak Grover 2 1, 2 HMR Institute of Technology and Management Hamidpur New Delhi, India Abstract: A standout amongst

More information

Opening the Black Box Data Driven Visualizaion of Neural N

Opening the Black Box Data Driven Visualizaion of Neural N Opening the Black Box Data Driven Visualizaion of Neural Networks September 20, 2006 Aritificial Neural Networks Limitations of ANNs Use of Visualization (ANNs) mimic the processes found in biological

More information

For Monday. Read chapter 18, sections Homework:

For Monday. Read chapter 18, sections Homework: For Monday Read chapter 18, sections 10-12 The material in section 8 and 9 is interesting, but we won t take time to cover it this semester Homework: Chapter 18, exercise 25 a-b Program 4 Model Neuron

More information

A Study of Various Training Algorithms on Neural Network for Angle based Triangular Problem

A Study of Various Training Algorithms on Neural Network for Angle based Triangular Problem A Study of Various Training Algorithms on Neural Network for Angle based Triangular Problem Amarpal Singh M.Tech (CS&E) Amity University Noida, India Piyush Saxena M.Tech (CS&E) Amity University Noida,

More information

EE 589 INTRODUCTION TO ARTIFICIAL NETWORK REPORT OF THE TERM PROJECT REAL TIME ODOR RECOGNATION SYSTEM FATMA ÖZYURT SANCAR

EE 589 INTRODUCTION TO ARTIFICIAL NETWORK REPORT OF THE TERM PROJECT REAL TIME ODOR RECOGNATION SYSTEM FATMA ÖZYURT SANCAR EE 589 INTRODUCTION TO ARTIFICIAL NETWORK REPORT OF THE TERM PROJECT REAL TIME ODOR RECOGNATION SYSTEM FATMA ÖZYURT SANCAR 1.Introductıon. 2.Multi Layer Perception.. 3.Fuzzy C-Means Clustering.. 4.Real

More information

Assignment # 5. Farrukh Jabeen Due Date: November 2, Neural Networks: Backpropation

Assignment # 5. Farrukh Jabeen Due Date: November 2, Neural Networks: Backpropation Farrukh Jabeen Due Date: November 2, 2009. Neural Networks: Backpropation Assignment # 5 The "Backpropagation" method is one of the most popular methods of "learning" by a neural network. Read the class

More information

CHAPTER 6 HYBRID AI BASED IMAGE CLASSIFICATION TECHNIQUES

CHAPTER 6 HYBRID AI BASED IMAGE CLASSIFICATION TECHNIQUES CHAPTER 6 HYBRID AI BASED IMAGE CLASSIFICATION TECHNIQUES 6.1 INTRODUCTION The exploration of applications of ANN for image classification has yielded satisfactory results. But, the scope for improving

More information

CPSC 340: Machine Learning and Data Mining. Principal Component Analysis Fall 2016

CPSC 340: Machine Learning and Data Mining. Principal Component Analysis Fall 2016 CPSC 340: Machine Learning and Data Mining Principal Component Analysis Fall 2016 A2/Midterm: Admin Grades/solutions will be posted after class. Assignment 4: Posted, due November 14. Extra office hours:

More information

Exercise: Training Simple MLP by Backpropagation. Using Netlab.

Exercise: Training Simple MLP by Backpropagation. Using Netlab. Exercise: Training Simple MLP by Backpropagation. Using Netlab. Petr Pošík December, 27 File list This document is an explanation text to the following script: demomlpklin.m script implementing the beckpropagation

More information

3 Nonlinear Regression

3 Nonlinear Regression CSC 4 / CSC D / CSC C 3 Sometimes linear models are not sufficient to capture the real-world phenomena, and thus nonlinear models are necessary. In regression, all such models will have the same basic

More information

Multilayer Feed-forward networks

Multilayer Feed-forward networks Multi Feed-forward networks 1. Computational models of McCulloch and Pitts proposed a binary threshold unit as a computational model for artificial neuron. This first type of neuron has been generalized

More information

Radial Basis Function Networks: Algorithms

Radial Basis Function Networks: Algorithms Radial Basis Function Networks: Algorithms Neural Computation : Lecture 14 John A. Bullinaria, 2015 1. The RBF Mapping 2. The RBF Network Architecture 3. Computational Power of RBF Networks 4. Training

More information

RIMT IET, Mandi Gobindgarh Abstract - In this paper, analysis the speed of sending message in Healthcare standard 7 with the use of back

RIMT IET, Mandi Gobindgarh Abstract - In this paper, analysis the speed of sending message in Healthcare standard 7 with the use of back Global Journal of Computer Science and Technology Neural & Artificial Intelligence Volume 13 Issue 3 Version 1.0 Year 2013 Type: Double Blind Peer Reviewed International Research Journal Publisher: Global

More information

Network Traffic Measurements and Analysis

Network Traffic Measurements and Analysis DEIB - Politecnico di Milano Fall, 2017 Sources Hastie, Tibshirani, Friedman: The Elements of Statistical Learning James, Witten, Hastie, Tibshirani: An Introduction to Statistical Learning Andrew Ng:

More information

Channel Performance Improvement through FF and RBF Neural Network based Equalization

Channel Performance Improvement through FF and RBF Neural Network based Equalization Channel Performance Improvement through FF and RBF Neural Network based Equalization Manish Mahajan 1, Deepak Pancholi 2, A.C. Tiwari 3 Research Scholar 1, Asst. Professor 2, Professor 3 Lakshmi Narain

More information

WHAT TYPE OF NEURAL NETWORK IS IDEAL FOR PREDICTIONS OF SOLAR FLARES?

WHAT TYPE OF NEURAL NETWORK IS IDEAL FOR PREDICTIONS OF SOLAR FLARES? WHAT TYPE OF NEURAL NETWORK IS IDEAL FOR PREDICTIONS OF SOLAR FLARES? Initially considered for this model was a feed forward neural network. Essentially, this means connections between units do not form

More information

The Automatic Musicologist

The Automatic Musicologist The Automatic Musicologist Douglas Turnbull Department of Computer Science and Engineering University of California, San Diego UCSD AI Seminar April 12, 2004 Based on the paper: Fast Recognition of Musical

More information

Clustering with Reinforcement Learning

Clustering with Reinforcement Learning Clustering with Reinforcement Learning Wesam Barbakh and Colin Fyfe, The University of Paisley, Scotland. email:wesam.barbakh,colin.fyfe@paisley.ac.uk Abstract We show how a previously derived method of

More information

Chapter 7 UNSUPERVISED LEARNING TECHNIQUES FOR MAMMOGRAM CLASSIFICATION

Chapter 7 UNSUPERVISED LEARNING TECHNIQUES FOR MAMMOGRAM CLASSIFICATION UNSUPERVISED LEARNING TECHNIQUES FOR MAMMOGRAM CLASSIFICATION Supervised and unsupervised learning are the two prominent machine learning algorithms used in pattern recognition and classification. In this

More information

Akarsh Pokkunuru EECS Department Contractive Auto-Encoders: Explicit Invariance During Feature Extraction

Akarsh Pokkunuru EECS Department Contractive Auto-Encoders: Explicit Invariance During Feature Extraction Akarsh Pokkunuru EECS Department 03-16-2017 Contractive Auto-Encoders: Explicit Invariance During Feature Extraction 1 AGENDA Introduction to Auto-encoders Types of Auto-encoders Analysis of different

More information

Chapter 4. Adaptive Self-tuning : A Neural Network approach. 4.1 Introduction

Chapter 4. Adaptive Self-tuning : A Neural Network approach. 4.1 Introduction Chapter 4 Adaptive Self-tuning : A Neural Network approach 4.1 Introduction Machine learning is a method of solving real world problems by employing the hidden knowledge present in the past data or data

More information

Review: Final Exam CPSC Artificial Intelligence Michael M. Richter

Review: Final Exam CPSC Artificial Intelligence Michael M. Richter Review: Final Exam Model for a Learning Step Learner initially Environm ent Teacher Compare s pe c ia l Information Control Correct Learning criteria Feedback changed Learner after Learning Learning by

More information

Neural Networks CMSC475/675

Neural Networks CMSC475/675 Introduction to Neural Networks CMSC475/675 Chapter 1 Introduction Why ANN Introduction Some tasks can be done easily (effortlessly) by humans but are hard by conventional paradigms on Von Neumann machine

More information

Efficient Object Tracking Using K means and Radial Basis Function

Efficient Object Tracking Using K means and Radial Basis Function Efficient Object Tracing Using K means and Radial Basis Function Mr. Pradeep K. Deshmuh, Ms. Yogini Gholap University of Pune Department of Post Graduate Computer Engineering, JSPM S Rajarshi Shahu College

More information

An Evaluation of Statistical Models for Programmatic TV Bid Clearance Predictions

An Evaluation of Statistical Models for Programmatic TV Bid Clearance Predictions Lappeenranta University of Technology School of Business and Management Degree Program in Computer Science Shaghayegh Royaee An Evaluation of Statistical Models for Programmatic TV Bid Clearance Predictions

More information

Rough Set Approach to Unsupervised Neural Network based Pattern Classifier

Rough Set Approach to Unsupervised Neural Network based Pattern Classifier Rough Set Approach to Unsupervised Neural based Pattern Classifier Ashwin Kothari, Member IAENG, Avinash Keskar, Shreesha Srinath, and Rakesh Chalsani Abstract Early Convergence, input feature space with

More information

IN recent years, neural networks have attracted considerable attention

IN recent years, neural networks have attracted considerable attention Multilayer Perceptron: Architecture Optimization and Training Hassan Ramchoun, Mohammed Amine Janati Idrissi, Youssef Ghanou, Mohamed Ettaouil Modeling and Scientific Computing Laboratory, Faculty of Science

More information

Cse634 DATA MINING TEST REVIEW. Professor Anita Wasilewska Computer Science Department Stony Brook University

Cse634 DATA MINING TEST REVIEW. Professor Anita Wasilewska Computer Science Department Stony Brook University Cse634 DATA MINING TEST REVIEW Professor Anita Wasilewska Computer Science Department Stony Brook University Preprocessing stage Preprocessing: includes all the operations that have to be performed before

More information

Artificial Neural Networks Lecture Notes Part 5. Stephen Lucci, PhD. Part 5

Artificial Neural Networks Lecture Notes Part 5. Stephen Lucci, PhD. Part 5 Artificial Neural Networks Lecture Notes Part 5 About this file: If you have trouble reading the contents of this file, or in case of transcription errors, email gi0062@bcmail.brooklyn.cuny.edu Acknowledgments:

More information

Neural Network and Deep Learning. Donglin Zeng, Department of Biostatistics, University of North Carolina

Neural Network and Deep Learning. Donglin Zeng, Department of Biostatistics, University of North Carolina Neural Network and Deep Learning Early history of deep learning Deep learning dates back to 1940s: known as cybernetics in the 1940s-60s, connectionism in the 1980s-90s, and under the current name starting

More information