Adaptation Of Vigilance Factor And Choice Parameter In Fuzzy ART System

M. R. Meybodi and P. Bahri
Soft Computing Laboratory, Department of Computer Engineering
Amirkabir University of Technology, Tehran, Iran

Abstract. In an adaptive resonance theory (ART) network, the choice of the vigilance factor (VF) and the choice parameter (CP) affects the performance of the network, such as the number of classes into which the data are classified. These parameters are typically chosen and adapted using human judgment, experience, and heuristic rules. Rather than choosing and optimizing these parameters manually, we use learning automata to adapt them automatically. In an earlier paper [Bahri99] we examined the ability of P-model learning automata to adapt only the vigilance factor of a fuzzy ART network. In this paper we further study the effectiveness of learning automata (LA) in the adaptation of VF and CP, this time adapting both simultaneously using different models of learning automata: the P-model, Q-model, and S-model. The feasibility of the proposed method is shown through simulation on three problems: the circle in the square, nested spirals, and two Gaussian-distributed groups.

Keywords. Neural Networks, Fuzzy ART, Vigilance Factor, Choice Parameter, Learning Automata.

1. Introduction

Fuzzy ART is a self-organizing neural architecture that is capable of fast learning of its tasks in a non-stationary environment. The architecture combines a neural network with fuzzy logic and achieves excellent operating properties. One of the properties of the network is that it learns its tasks in an unsupervised mode; at the same time it can be trained in a supervised mode to adapt to the input/output environment. In our implementation of fuzzy ART the network has three layers, as well as an orienting subsystem. The nodes in these layers process the input, and the orienting subsystem guides the search for the fittest category that represents the input.
The identity of the category of the input is determined by the third layer, whose prototypes best fit the input. Two of the critical quantities that determine the dynamics of the network are the vigilance factor and the choice parameter. The appropriate selection of these two parameters has a large effect on the convergence of the algorithm. For example, if the vigilance parameter is too small, too much data compression may result and the resulting classification may be too broad; if it is too large, too many nodes will be generated and a good classification may not result. If the choice parameter is too small, the network may converge too early, and if it is too large, too many nodes will be generated, resulting in slower operation.

It is not easy to choose appropriate values for these parameters for a particular problem. They are usually determined by trial and error and by using past experience. To the authors' knowledge, two different methods for adaptation of the vigilance factor only have been reported in the literature. The first method is due to Arabshahi et al. [Arabshahi96]. This method is based on the assumption that the number of clusters is known in advance, and for this reason it is not relevant to us and will not be considered further. The second method is due to Fu [Fu94]. This method, which is applied to the ART-II neural network, is a fuzzy adaptive vigilance algorithm, with the vigilance factor optimally tailored to signal processing in a noisy environment. By such an adaptation they overcome the weakness of a fixed VF, which may cause spurious memories. They investigated the relationship between the attractive basin and the VF in ART-II, analytically proved the self-stability of the ART-II model, and illustrated the way that the fuzzy adaptive VF adjusts the attractive basin. This method has been implemented and compared with the methods presented in this paper. In this paper, by interconnecting learning automata and fuzzy ART, we apply learning automata to determine the choice parameter and the vigilance factor. Three algorithms for adaptation of these parameters based on learning automata are presented. The first algorithm is used to adapt the VF only, and the second algorithm is used for simultaneous adaptation of VF and CP. In the third algorithm we associate a different vigilance factor with each long-term memory trace and, using a collection of learning automata, try to find the best VF for each trace. Through simulation we have found that about 20 percent higher performance for the fuzzy ART neural network, in terms of recognition, rejection, and the size of the network generated, can be obtained if the vigilance factor and the choice parameter are adapted simultaneously.
In order to evaluate the performance of the proposed scheme, simulations are carried out on the following problems.

The circle in the square: Adapted from [Carpenter95a], this is an experiment in which the task of the network is to differentiate between the points inside a circle and those outside it, as shown in figure 1-c.

The nested spirals: Also adapted from [Carpenter95a], this experiment is highly nonlinear. The task of the network is to differentiate between the points of the two nested spirals shown in figure 1-d. We have added aliasing to the experiment; that is, to each original point from either group we have added Gaussian noise of variance 0.15 and zero mean in the X and Y directions separately, and submitted the point for recognition to the network, taking the original point as the reference point. The aliased points are shown in figures 1-a and 1-b.

(Figures 1-a through 1-d and figure 2 are not reproduced here.)

Two Gaussian-distributed groups around a circle: Taken from [Carpenter95a], this is an experiment in which there are two groups distributed as Gaussians around a central point, with the parts of the two groups situated alternately. Each group has three representatives. The benchmark is shown in figure 2. In this case both the training and the test samples are noisy.

The rest of the paper is organized as follows. Section two explains the learning automaton. Section three discusses the fuzzy ART network in some detail. Section four gives the details of our methods, and section five discusses simulation results. The last section is the conclusion.

2. Learning Automata

Learning automata operating in unknown random environments have been used as models of learning systems. These automata choose an action at each instant from a finite action set, observe the reaction of the environment to the chosen action, and modify the selection of the next action on the basis of the response of the environment. A learning automaton is a quintuple

{α, β, Φ, F, G}, where:
1) α = (α_1, ..., α_R) is the set of actions from which it must choose.
2) Φ = (Φ_1, ..., Φ_s) is the set of states.
3) β is the set of inputs.
4) F: Φ × β → Φ is the transition map. It defines the transition of the state of the automaton on receiving an input; F may be stochastic.
5) G: Φ → α is the output map, which determines the action taken by the automaton when it is in state Φ.

The selected action serves as the input to the environment, which in turn emits a stochastic response β(n) at time n. β(n) is an element of β and is the feedback response of the environment to the automaton. Depending on the nature of the set β, the automata can be classified into three classes: if β = {0, 1} we have P-model automata; if the input to the automaton can assume multiple discrete values we have Q-model automata; and if the input has a continuous range we have S-model automata. On the basis of the response β(n), the state of the automaton Φ(n) is updated and a new action is chosen at time (n+1). It is desired that, as a result of interaction with the environment, the automaton arrives at the action that presents it with the minimum penalty response in an expected sense. If the probabilities of transition from one state to another and the probabilities of correspondence between actions and states are fixed, the automaton is called a fixed-structure automaton; otherwise it is called a variable-structure automaton. A variable-structure automaton, which is what will be used in this paper, is represented by the sextuple {β, φ, α, P, G, T}, where β is the set of inputs, φ is the set of internal states, α is the set of output actions, P denotes the action probability vector governing the choice of the action at each stage k, G is the output mapping, and T is the learning algorithm used to modify the action probability vector.
It is evident that the crucial factor affecting the performance of a variable-structure learning automaton is the learning algorithm for updating the action probabilities. Various learning algorithms have been reported in the literature [Thatachar89]. Let α_i be the action chosen at time k as a sample realization from the distribution P(k). For the P-model automata, the linear reward-inaction (L_RI) and linear reward-penalty (L_RP) algorithms are among the earliest schemes in the literature. In an L_RI scheme the recurrence equations for updating P are

    p_i(k+1) = p_i(k) + θ (1 − p_i(k))   if α_i is the chosen action
    p_j(k+1) = p_j(k) − θ p_j(k)         for all j ≠ i               (4)

if β is zero, and P remains unchanged if β is one. The parameter θ is called the step length; it determines the amount by which the action probabilities increase (decrease). In the linear reward-penalty (L_RP) scheme the recurrence equations for updating P are, if β = 0,

    p_i(k+1) = p_i(k) + θ (1 − p_i(k))   if α_i is the chosen action
    p_j(k+1) = p_j(k) − θ p_j(k)         for all j ≠ i               (5)

and, if β = 1,

    p_i(k+1) = (1 − γ) p_i(k)                  if α_i is the chosen action
    p_j(k+1) = γ / (R − 1) + (1 − γ) p_j(k)    for all j ≠ i         (6)

The parameters γ and θ are step lengths; they determine the amount by which the action probabilities increase (decrease). A linear algorithm for Q-model learning automata is given below.
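As a concrete illustration, the P-model updates above can be sketched in code (a minimal sketch; the function name, the default step lengths, and the four-action example are illustrative assumptions, not the paper's implementation):

```python
def lr_update(p, i, beta, theta=0.1, gamma=0.1, scheme="LRI"):
    """One step of the linear reward-inaction (L_RI) or linear
    reward-penalty (L_RP) update of the action-probability vector p,
    where action i was chosen and beta is the environment response
    (0 = reward, 1 = penalty)."""
    r = len(p)
    if beta == 0:
        # reward: move probability mass toward the chosen action (eqs. 4, 5)
        return [pj + theta * (1 - pj) if j == i else pj - theta * pj
                for j, pj in enumerate(p)]
    if scheme == "LRP":
        # penalty: redistribute mass away from the chosen action (eq. 6)
        return [(1 - gamma) * pj if j == i
                else gamma / (r - 1) + (1 - gamma) * pj
                for j, pj in enumerate(p)]
    return p  # under L_RI a penalty leaves p unchanged

p = lr_update([0.25, 0.25, 0.25, 0.25], i=2, beta=0)
assert abs(sum(p) - 1.0) < 1e-9   # still a probability vector
assert p[2] > 0.25                # the rewarded action is strengthened
```

Note that both branches preserve the sum of the probabilities, so p remains a valid distribution after any number of updates.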

    p_j(k+1) = p_j(k) − a (1 − β(k)) p_j(k)          if α(n) ≠ α_j
    p_i(k+1) = p_i(k) + a (1 − β(k)) (1 − p_i(k))    if α(n) = α_i

For S-model learning automata a typical learning algorithm is as follows:

    p_j(k+1) = p_j(k) + β(k) [a/(r−1) − a p_j(k)] − [1 − β(k)] a p_j(k)   if α(n) ≠ α_j
    p_i(k+1) = p_i(k) − β(k) a p_i(k) + (1 − β(k)) a (1 − p_i(k))         if α(n) = α_i

For more information about the theory and applications of learning automata refer to [Thatachar89], [Mars83], and [Mars98].

3. Fuzzy ART

Fuzzy ART is a self-organizing neural network that consists of three layers of nodes and an orienting subsystem. The first layer, called F0, receives the input to be recognized. It preprocesses the input by a kind of normalization called complement coding. Suppose the original input is X = (x_1, x_2, ..., x_m). Then the complement-coded version of this vector is X_a = (x_1, x_2, ..., x_m, 1−x_1, 1−x_2, ..., 1−x_m). In this way the inputs are automatically normalized with respect to the vector norm: the norm of X_a, denoted |X_a| and obtained by summing all the elements of the vector, is equal to M. After preprocessing, the new vector is submitted to the second layer, called F1. This is the critical layer in which many of the operations are performed. It first sends X_a through an adaptive set of weights (called the long-term memory) to the third layer (F2). The weight of each link in this set is a non-increasing function of the bottom-up input. The weights are denoted by W = [W_ij] (shown in figures 3 and 4 as W_j), where i ∈ {1, 2, ..., 2m}, j ∈ {1, 2, ..., n}, and n is the number of nodes in the F2 layer. The value of each F2 node is computed according to the following formula:

    T_j(X_a) = |X_a ∧ W_j| / (α + |W_j|)                                  (7)

Here |.| denotes the vector norm, ∧ is the fuzzy AND (minimum), W_j is the j-th column of W, and α is the choice parameter, one of the factors that determine the dynamics of the network.

(Figures 3 and 4 are not reproduced here.)

F2 is a competition layer; that is, at each iteration of the network the node with the maximum value is chosen.
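The complement-coding and category-choice steps just described can be sketched as follows (a minimal sketch; the function names, the example input, and the weight values are hypothetical):

```python
def complement_code(x):
    """F0 preprocessing: append 1 - x_i to the input, so the L1 norm of
    the coded vector always equals m, the original dimension."""
    return x + [1 - xi for xi in x]

def choice(xa, w, alpha=0.01):
    """Choice function (7): T_j = |X_a AND W_j| / (alpha + |W_j|),
    with fuzzy AND = elementwise minimum and |.| the L1 norm."""
    return sum(min(a, b) for a, b in zip(xa, w)) / (alpha + sum(w))

xa = complement_code([0.3, 0.8])
assert abs(sum(xa) - 2.0) < 1e-9  # automatic normalization to m = 2

# F2 competition: the node with the largest choice value wins
weights = [[1.0, 1.0, 1.0, 1.0],  # an uncommitted node (all-ones weights)
           list(xa)]              # a node whose prototype matches exactly
J = max(range(len(weights)), key=lambda j: choice(xa, weights[j]))
assert J == 1                     # the matching prototype wins the competition
```

A small α, as here, biases the competition toward committed prototypes; the effect of varying α is exactly what the CP adaptation in this paper addresses.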
Afterwards the top-down expectations are sent to F1 through the weights W_td, the vector of weights associated with the chosen category J, denoted W_J. At F1 the following quantity is computed:

    |X_a ∧ W_J| / |X_a|                                                   (8)

and compared with the value of the vigilance factor ρ. If this value is greater than or equal to ρ, resonance occurs and the category that J represents is the recognized category. Otherwise the orienting subsystem initiates a reset of F2: the previously chosen nodes are put aside, a new node with maximum value is chosen, and the corresponding expectations are sent. Sometimes reset occurs for every node in the F2 layer. If this happens during training, a new F2 node is generated with weights equal to the input, which automatically satisfies the vigilance criterion; if it happens during normal operation, the input is classified as not recognized.

4. The Proposed Method

In the proposed methods, we use variable-structure learning automata to adjust the vigilance factor and/or the choice parameter. The interconnection of learning automata and fuzzy ART is shown in figure 4. The learning automaton adjusts the vigilance factor and/or choice parameter of the fuzzy ART according to certain criteria. The actions of the automaton correspond to the values of the vigilance factor or choice parameter, and the input to the automaton is a function of the error in the output of the fuzzy ART. The method is described by the following algorithms.
Algorithm ART_LA(Model)
    β = 0
    send bottom-up signals
    for I = 0 to number of F2 nodes do
        compute the F2 node with maximum value
        send top-down expectations
        if the vigilance criterion is met then
            if the input is classified correctly then
                Reinforcement(Model, VF, β)
            else
                Reinforcement(Model, VF, 1)
        else
            discard the current F2 node
            if Model = Q_model then Reinforcement(Model, VF, 0.6)
            β = I / number of F2 nodes
    end for
    if no correct classification is made then
        Reinforcement(Model, VF, 1)
End Algorithm

Algorithm 1: Algorithm for adaptation of VF

Algorithm ART_Simultaneous_LA_1(Model)
    β = 0
    send bottom-up signals
    for I = 0 to number of F2 nodes do
        compute the F2 node with maximum value
        send top-down expectations
        if the vigilance criterion is met then
            if the input is classified correctly then
                Reinforcement(Model, VF, β)
                Reinforcement(Model, CP, β)
            else
                Reinforcement(Model, VF, 1)
                Reinforcement(Model, CP, 1)
        else
            discard the current F2 node
            if Model = Q_model then
                Reinforcement(Model, VF, 0.6)
                Reinforcement(Model, CP, 0.6)
            β = I / number of F2 nodes
    end for
    if no correct classification is made then
        Reinforcement(Model, VF, 1)
        Reinforcement(Model, CP, 1)
End Algorithm

Algorithm 2: Algorithm for simultaneous adaptation of VF and CP
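The control flow of Algorithm 1 can be sketched in code roughly as follows (a P-model sketch; `vigilance_ok`, `classify`, and `reinforce` are hypothetical stand-ins for the match test, the class check, and the LA update described in the text, not names from the paper):

```python
def art_la_step(nodes, vigilance_ok, classify, reinforce):
    """One input presentation under Algorithm 1 (P-model variant).
    nodes: F2 node indices ordered by decreasing choice value T_j.
    vigilance_ok(j): the match test (8) against the current VF action.
    classify(j): True if node j assigns the correct class.
    reinforce(beta): LA update for the VF action (0 = reward, 1 = penalty)."""
    for j in nodes:
        if vigilance_ok(j):
            # resonance: reward the VF if the class is right, punish otherwise
            reinforce(0 if classify(j) else 1)
            return j
        # vigilance failed: discard node j and keep searching
    reinforce(1)   # no node resonated: penalize the current VF choice
    return None    # during training a new F2 node would be created here
```

In the Q-model variant a third response level (0.6) would additionally be emitted each time the vigilance test fails, as described in the text.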

Algorithm ART_Simultaneous_LA_2
    send bottom-up signals
    for I = 0 to number of F2 nodes do
        J = index of the F2 node with maximum value
        send top-down expectations
        for K = 0 to A_Const do
            choose a value for the J-th vigilance factor
            if the vigilance criterion is met then
                if the input is correctly classified then
                    Reinforcement(J, 0)
                    break the inner loop
                else
                    Reinforcement(J, 1)
            else
                continue the inner loop
        if a correct classification is made then
            break the outer loop
        else
            discard the current F2 node
End Algorithm

Algorithm 3: Algorithm for adaptation of a separate VF for each F2 node

The first algorithm is used to adapt the VF only, and the second algorithm for simultaneous adaptation of VF and CP. In the third algorithm we have one learning automaton for each F2 node, which determines the vigilance factor of the group of weights emanating from all F1 nodes to that F2 node. The rationale behind having one vigilance factor per group is to cover the region better with prototypes. In the following three paragraphs we describe the first algorithm; the second algorithm has a similar description. The first algorithm is much like the standard fuzzy ART algorithm, except that the vigilance factor varies among distinct, predetermined values between zero and one. After preprocessing the input (i.e., complement coding), the resulting input values, which are now in the second layer (F1), are sent to the third layer (F2) through the bottom-up adaptive weights. In this layer, the values of the nodes are determined using formula (7). The algorithm then starts searching for the best prototype (the top-down weights, which are equal to the bottom-up weights). Using formula (8), which is compared with the current value of the vigilance factor, and using the known class of the input, the current value of the vigilance factor is tested for being the value that best represents the characteristics of the problem space. If the vigilance criterion is met and the recognition is correct, a positive reinforcement is made, the weights are adjusted, and the algorithm breaks the search loop and ends.
But if an incorrect assignment is made, the algorithm breaks the search loop and terminates. If the vigilance criterion is not met, the current F2 node is discarded and the search continues. If at the end of the search no recognition (correct or incorrect) has been made, the algorithm creates a new F2 node (a new prototype), accompanied by a negative reinforcement. The above description is for the P-model LA. For learning with a Q-model LA the overall algorithm is nearly the same, except that we use three levels for β (the reinforcement reward-penalty parameter): 0, 0.6, and 1. We use zero in the case of correct classification, 1 in the case of incorrect classification, and 0.6 each time during the search that the vigilance criterion is not met. The value 0.6 is chosen in order to punish slightly the choice of the current VF and favor direct access, i.e., to make the first chosen F2 node the correctly selected prototype. For the S-model LA we have chosen β equal to the number of iterations of the search divided by the number of F2 nodes; the motivation behind choosing such a β is again to favor direct access and prevent further search. The third algorithm is a little different from the standard algorithm. In addition to the

search loop, this algorithm has another loop, which each time the search is done is iterated at most a constant times the number of classes; the constant is chosen here to be 1.5. The algorithm has another difference from the standard algorithm: for each F2 node we have chosen a separate vigilance factor. This is done because we want to approximate the input space better with the prototypes, and to determine the size of the basin of attraction for each prototype separately. At each iteration of the main loop an F2 node with maximum value is chosen; afterwards the inner loop starts by choosing a VF value for this node according to the node's vigilance probabilities. If the vigilance criterion is met and the class J is the correct class, a positive reinforcement is made and the algorithm exits both loops. Otherwise a negative reinforcement is made and the algorithm continues. If the inner loop is unsuccessful (no classification), the F2 node J is discarded and the search for another prototype continues.

5. Simulations

We have conducted two sets of experiments. In the first set we adapted the vigilance factor only, using different models of learning automata, and compared the results with Fu's method [Fu94]. In the second set both the vigilance factor and the choice parameter are adapted simultaneously, again using different models of learning automata. For these experiments, as the actions of the LAs we choose 11 different values for the VF and 11 different values for the CP. The values for the VF range from 0.66 to 0.96 in equally spaced intervals. The choice parameter values vary from 0.1 to 100 with increasing interval magnitudes; that is, the points are dense in the vicinity of zero and sparse in the vicinity of 100. Each point of the curves given below is obtained by holding the CP constant and adapting the VF. We have used 450 points for training and 450 other points for testing.
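For illustration, the two action sets described above might be generated as follows (the paper does not give the exact CP spacing; geometric spacing is one assumption consistent with "dense near zero, sparse near 100"):

```python
# 11 equally spaced vigilance-factor actions in [0.66, 0.96]
vf_actions = [0.66 + i * (0.96 - 0.66) / 10 for i in range(11)]

# 11 choice-parameter actions from 0.1 to 100, dense near zero and
# sparse near 100 (geometric spacing -- an assumption, not the paper's grid)
cp_actions = [0.1 * (100 / 0.1) ** (i / 10) for i in range(11)]

assert abs(vf_actions[0] - 0.66) < 1e-9 and abs(vf_actions[-1] - 0.96) < 1e-9
assert abs(cp_actions[0] - 0.1) < 1e-9 and abs(cp_actions[-1] - 100) < 1e-6
# gaps between successive CP actions grow monotonically, as described
gaps = [b - a for a, b in zip(cp_actions, cp_actions[1:])]
assert all(g2 > g1 for g1, g2 in zip(gaps, gaps[1:]))
```

These two lists would then serve as the action sets of the VF and CP automata, with the probability vector over each list updated by the reinforcement scheme in use.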
Figures 5-a, 5-b, and 5-c show the results of the experiments for the noisy circle-in-the-square problem with a noiseless training file. The number of nodes generated by Fu's method is higher than the number generated by our methods. Among the P, Q, and S models, the P model has produced the smallest number of nodes. With respect to the rate of rejection of inputs, the Q model has performed best and Fu's method worst. Fu's method has produced nearly the highest rate of recognition. Figures 6-a, 6-b, and 6-c show the results for the noisy nested spirals. For this problem Fu's method has produced the highest number of nodes and also the highest rate of rejection. The Q, S, and P models come after Fu's method in terms of the number of nodes generated. The Q model has the lowest rate of rejection; the P and S models have more or less the same rate of rejection, and with respect to recognition they have the same performance at all points.

(Figures 5-a and 5-b are not reproduced here.)

Figures 7-a, 7-b, and 7-c show the results of the experiments for the Gaussian-distributed groups. For this problem the Q model has generated the highest number of nodes; after the Q model come the S and P models. Fu's method is best in terms of the number of nodes generated and worst in terms of recognition. Regarding the other performance measures, the proposed methods perform comparably to or better than Fu's method. Looking at figures 5 (a-c), it can be seen that our methods have outperformed the fuzzy method on all three criteria. The number of nodes generated is the lowest for the P model; after the P model come the S and Q

models. The performances regarding the rejection and recognition rates are nearly the same for the three proposed models, all being better than the fuzzy method.

(Figures 5-c and 6-a through 6-c are not reproduced here.)

Figures 6 (a-c) show the results for the nested spirals. Again, much better performance is obtained by our methods with regard to all criteria. The number of nodes generated is minimal for the P model, after which come the S and Q models, and lastly the fuzzy method. The rejection rates are highest for the fuzzy method and lowest for the S and Q models; the highest recognition is obtained by the Q model. Figures 7 (a-c) show the results of the experiment for the Gaussian-distributed two-groups problem. For the number of nodes generated, the score is lowest for the fuzzy method, after which come the P, S, and Q models. The fuzzy adaptation method is especially tailor-made for noisy environments, and for this reason it performs well on this problem. Rejection is lowest for the fuzzy method and the P model, with large fluctuations in the case of the S and Q models.

(Figures 7-a and 7-b are not reproduced here.)

(Figure 7-c is not reproduced here.)

Summarizing the results of the above-mentioned figures, we can conclude that in most cases, when the training file is not noisy, our method performs better than the fuzzy adaptation method. The figures also show a lot of variation in the degree of performance as the choice parameter is varied. This motivated us to adapt the choice parameter and the vigilance factor simultaneously. Tables 1-a through 3-d show the results obtained for the simultaneous adaptation of the vigilance factor and choice parameter using algorithm 2 with P-, Q-, and S-model LAs, and algorithm 3 (which uses a P-model LA). We also observed that algorithm 2 with P-model automata and algorithm 3 were sensitive to the values of the step lengths, so we conducted the experiments with different values of the step lengths. Tables 1 (a-d) indicate that the number of nodes generated is highest for algorithm 3 and lowest for the S and Q models; the recognition and rejection rates are nearly the same in all cases. Tables 2 (a-d) show that the numbers of nodes generated by algorithm 3 and by algorithm 2 with a P-model LA are the highest. For the S model the recognition rate is higher and the rejection rates are the same. In tables 3 (a-d) we see that the recognition rate is much higher for algorithm 3 (about 16 percent higher), after which comes the P model. The rejection rates are nearly zero in all cases, and the number of nodes generated is higher in the case of algorithm 3 and the P model.

(Table 1-a: Algorithm 2, P model, circle in the square. Table 1-b: Algorithm 2, Q model, circle in the square. Table contents are not reproduced here.)

6. Conclusion

In this paper we have studied the effectiveness of different models of learning automata in the adaptation of the VF and CP in fuzzy ART. The effectiveness of the proposed methods has been shown through simulation on three problems: the circle in the square, nested spirals, and two Gaussian-distributed groups. The results of the simulations indicate that if both VF and CP are adapted simultaneously, we get higher performance than when only the VF is adapted.
The results of the simulations also show that one of the proposed algorithms (algorithm 3) produces a high rate of recognition, especially when the environment is noisy, and that, in order to get a faster response with reasonable rates of recognition and rejection, simultaneous adaptation of VF and CP using an S-model LA is better.

Adaptation of the VF using learning automata performs better than the methods reported in the literature.

(Table 1-c: Algorithm 2, S model, circle in the square. Table 1-d: Algorithm 3, circle in the square. Table 2-a: Algorithm 2, P model, nested spirals. Table 2-b: Algorithm 2, S model, nested spirals. Table 2-c: Algorithm 2, Q model, nested spirals. Table 2-d: Algorithm 3, nested spirals. Table 3-a: Algorithm 2, P model, Gaussian-distributed two groups. Table 3-b: Algorithm 2, S model, Gaussian-distributed two groups. Table contents are not reproduced here.)

(Table 3-c: Algorithm 2, Q model, Gaussian-distributed two groups. Table 3-d: Algorithm 3, Gaussian-distributed two groups. Table contents are not reproduced here.)

References

[Thatachar89] K. S. Narendra and M. A. L. Thathachar, Learning Automata: An Introduction, Prentice Hall, 1989.
[Carpenter95a] G. A. Carpenter and W. D. Ross, ART-EMAP: A Neural Network Architecture for Object Recognition by Evidence Accumulation, IEEE Transactions on Neural Networks, Vol. 6, No. 4, July 1995.
[Carpenter95b] G. A. Carpenter, S. Grossberg, and J. H. Reynolds, A Fuzzy ARTMAP Nonparametric Probability Estimator for Nonstationary Pattern Recognition Problems, IEEE Transactions on Neural Networks, Vol. 6, No. 6, November 1995.
[Carpenter92] G. Carpenter, S. Grossberg, N. Markuzon, J. Reynolds, and D. Rosen, Fuzzy ARTMAP: A Neural Network Architecture for Incremental Supervised Learning of Analog Multidimensional Maps, IEEE Transactions on Neural Networks, Vol. 3, 1992.
[Arabshahi96] P. Arabshahi et al., Fuzzy Parameter Adaptation in Optimization: Some Neural Net Training Examples, IEEE Computational Science & Engineering, Spring 1996.
[Bahri99] P. Bahri and M. R. Meybodi, A Method for Adaptation of Vigilance Factor and Choice Parameter in Fuzzy ART System, Proceedings of the 7th Iranian Conference on Electrical Engineering, Iran Telecommunication Research Center, Tehran, Iran, May 1999.
[Healy97] M. J. Healy and T. P. Caudell, Acquiring Rule Sets as a Product of Learning in a Logical Neural Architecture, IEEE Transactions on Neural Networks, May 1997.
[Fu94] L. Fu and J. Zhan, Fuzzy Adapting Vigilance Parameter of ART-II Neural Nets, IEEE World Congress on Computational Intelligence, Vol. 3, 1994.
[Hashim86] A. Hashim, S. Amir, and P. Mars, Application of Learning Automata to Data Compression, in Adaptive and Learning Systems, K. S. Narendra (Ed.), New York: Plenum Press, 1986.
[Mars83] P. Mars, K. S. Narendra, and M. Chrystall, Learning Automata Control of Computer Communication Networks, Proc. of the Third Yale Workshop on Applications of Adaptive Systems Theory, Yale University, 1983.
[Mars98] P. Mars, J. R. Chen, and R. Nambiar, Learning Algorithms: Theory and Applications in Signal Processing, Control, and Communications, CRC Press, New York, 1998.
[Meybodi98] M. R. Meybodi and H. Beigy, New Class of Learning Automata Based Schemes for Adaptation of Backpropagation Algorithm Parameters, Proceedings of EUFIT-98, Aachen, Germany, 1998.


Unsupervised Learning : Clustering Unsupervised Learning : Clustering Things to be Addressed Traditional Learning Models. Cluster Analysis K-means Clustering Algorithm Drawbacks of traditional clustering algorithms. Clustering as a complex

More information

A New Discrete Binary Particle Swarm Optimization based on Learning Automata

A New Discrete Binary Particle Swarm Optimization based on Learning Automata A New Discrete Binary Particle Swarm Optimization based on Learning Automata R. Rastegar M. R. Meybodi K. Badie Soft Computing Lab Soft Computing Lab Information Technology Computer Eng. Department Computer

More information

Semi-Supervised Clustering with Partial Background Information

Semi-Supervised Clustering with Partial Background Information Semi-Supervised Clustering with Partial Background Information Jing Gao Pang-Ning Tan Haibin Cheng Abstract Incorporating background knowledge into unsupervised clustering algorithms has been the subject

More information

A modified and fast Perceptron learning rule and its use for Tag Recommendations in Social Bookmarking Systems

A modified and fast Perceptron learning rule and its use for Tag Recommendations in Social Bookmarking Systems A modified and fast Perceptron learning rule and its use for Tag Recommendations in Social Bookmarking Systems Anestis Gkanogiannis and Theodore Kalamboukis Department of Informatics Athens University

More information

HOT asax: A Novel Adaptive Symbolic Representation for Time Series Discords Discovery

HOT asax: A Novel Adaptive Symbolic Representation for Time Series Discords Discovery HOT asax: A Novel Adaptive Symbolic Representation for Time Series Discords Discovery Ninh D. Pham, Quang Loc Le, Tran Khanh Dang Faculty of Computer Science and Engineering, HCM University of Technology,

More information

1. Introduction. 2. Motivation and Problem Definition. Volume 8 Issue 2, February Susmita Mohapatra

1. Introduction. 2. Motivation and Problem Definition. Volume 8 Issue 2, February Susmita Mohapatra Pattern Recall Analysis of the Hopfield Neural Network with a Genetic Algorithm Susmita Mohapatra Department of Computer Science, Utkal University, India Abstract: This paper is focused on the implementation

More information

Seismic regionalization based on an artificial neural network

Seismic regionalization based on an artificial neural network Seismic regionalization based on an artificial neural network *Jaime García-Pérez 1) and René Riaño 2) 1), 2) Instituto de Ingeniería, UNAM, CU, Coyoacán, México D.F., 014510, Mexico 1) jgap@pumas.ii.unam.mx

More information

Dynamic Traffic Pattern Classification Using Artificial Neural Networks

Dynamic Traffic Pattern Classification Using Artificial Neural Networks 14 TRANSPORTATION RESEARCH RECORD 1399 Dynamic Traffic Pattern Classification Using Artificial Neural Networks }IUYI HUA AND ARDESHIR FAGHRI Because of the difficulty of modeling the traffic conditions

More information

Properties of learning of a Fuzzy ART Variant

Properties of learning of a Fuzzy ART Variant NN 38 PERGAMON Neural Networks 2 (999) 837 85 Neural Networks wwwelseviercom/locate/neunet Properties of learning of a Fuzzy ART Variant M Georgiopoulos a, *, I Dagher a, GL Heileman b, G Bebis c a Department

More information

SELF-ORGANIZED clustering is a powerful tool whenever

SELF-ORGANIZED clustering is a powerful tool whenever 544 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 9, NO. 3, MAY 1998 Comparative Analysis of Fuzzy ART and ART-2A Network Clustering Performance Thomas Frank, Karl-Friedrich Kraiss, and Torsten Kuhlen Abstract

More information

New wavelet based ART network for texture classification

New wavelet based ART network for texture classification University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 1996 New wavelet based ART network for texture classification Jiazhao

More information

A novel supervised learning algorithm and its use for Spam Detection in Social Bookmarking Systems

A novel supervised learning algorithm and its use for Spam Detection in Social Bookmarking Systems A novel supervised learning algorithm and its use for Spam Detection in Social Bookmarking Systems Anestis Gkanogiannis and Theodore Kalamboukis Department of Informatics Athens University of Economics

More information

10703 Deep Reinforcement Learning and Control

10703 Deep Reinforcement Learning and Control 10703 Deep Reinforcement Learning and Control Russ Salakhutdinov Machine Learning Department rsalakhu@cs.cmu.edu Policy Gradient I Used Materials Disclaimer: Much of the material and slides for this lecture

More information

6. NEURAL NETWORK BASED PATH PLANNING ALGORITHM 6.1 INTRODUCTION

6. NEURAL NETWORK BASED PATH PLANNING ALGORITHM 6.1 INTRODUCTION 6 NEURAL NETWORK BASED PATH PLANNING ALGORITHM 61 INTRODUCTION In previous chapters path planning algorithms such as trigonometry based path planning algorithm and direction based path planning algorithm

More information

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used.

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used. 1 4.12 Generalization In back-propagation learning, as many training examples as possible are typically used. It is hoped that the network so designed generalizes well. A network generalizes well when

More information

Parameter Estimation in Differential Equations: A Numerical Study of Shooting Methods

Parameter Estimation in Differential Equations: A Numerical Study of Shooting Methods Parameter Estimation in Differential Equations: A Numerical Study of Shooting Methods Franz Hamilton Faculty Advisor: Dr Timothy Sauer January 5, 2011 Abstract Differential equation modeling is central

More information

Default ARTMAP 2. Gregory P. Amis and Gail A. Carpenter. IJCNN 07, Orlando CAS/CNS Technical Report TR

Default ARTMAP 2. Gregory P. Amis and Gail A. Carpenter. IJCNN 07, Orlando CAS/CNS Technical Report TR IJCNN 07, Orlando CS/CNS Technical Report TR-2007-003 1 Default RTMP 2 Gregory P. mis and Gail. Carpenter bstract Default RTMP combines winner-take-all category node activation during training, distributed

More information

This leads to our algorithm which is outlined in Section III, along with a tabular summary of it's performance on several benchmarks. The last section

This leads to our algorithm which is outlined in Section III, along with a tabular summary of it's performance on several benchmarks. The last section An Algorithm for Incremental Construction of Feedforward Networks of Threshold Units with Real Valued Inputs Dhananjay S. Phatak Electrical Engineering Department State University of New York, Binghamton,

More information

Invariant Recognition of Hand-Drawn Pictograms Using HMMs with a Rotating Feature Extraction

Invariant Recognition of Hand-Drawn Pictograms Using HMMs with a Rotating Feature Extraction Invariant Recognition of Hand-Drawn Pictograms Using HMMs with a Rotating Feature Extraction Stefan Müller, Gerhard Rigoll, Andreas Kosmala and Denis Mazurenok Department of Computer Science, Faculty of

More information

Designing Interval Type-2 Fuzzy Controllers by Sarsa Learning

Designing Interval Type-2 Fuzzy Controllers by Sarsa Learning Designing Interval Type-2 Fuzzy Controllers by Sarsa Learning Nooshin Nasri Mohajeri*, Mohammad Bagher Naghibi Sistani** * Ferdowsi University of Mashhad, Mashhad, Iran, noushinnasri@ieee.org ** Ferdowsi

More information

The Cross-Entropy Method

The Cross-Entropy Method The Cross-Entropy Method Guy Weichenberg 7 September 2003 Introduction This report is a summary of the theory underlying the Cross-Entropy (CE) method, as discussed in the tutorial by de Boer, Kroese,

More information

The exam is closed book, closed notes except your one-page (two-sided) cheat sheet.

The exam is closed book, closed notes except your one-page (two-sided) cheat sheet. CS 189 Spring 2015 Introduction to Machine Learning Final You have 2 hours 50 minutes for the exam. The exam is closed book, closed notes except your one-page (two-sided) cheat sheet. No calculators or

More information

Simultaneous Perturbation Stochastic Approximation Algorithm Combined with Neural Network and Fuzzy Simulation

Simultaneous Perturbation Stochastic Approximation Algorithm Combined with Neural Network and Fuzzy Simulation .--- Simultaneous Perturbation Stochastic Approximation Algorithm Combined with Neural Networ and Fuzzy Simulation Abstract - - - - Keywords: Many optimization problems contain fuzzy information. Possibility

More information

Gauss-Sigmoid Neural Network

Gauss-Sigmoid Neural Network Gauss-Sigmoid Neural Network Katsunari SHIBATA and Koji ITO Tokyo Institute of Technology, Yokohama, JAPAN shibata@ito.dis.titech.ac.jp Abstract- Recently RBF(Radial Basis Function)-based networks have

More information

Comparing Dropout Nets to Sum-Product Networks for Predicting Molecular Activity

Comparing Dropout Nets to Sum-Product Networks for Predicting Molecular Activity 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

A New Version of K-random Walks Algorithm in Peer-to-Peer Networks Utilizing Learning Automata

A New Version of K-random Walks Algorithm in Peer-to-Peer Networks Utilizing Learning Automata A New Version of K-random Walks Algorithm in Peer-to-Peer Networks Utilizing Learning Automata Mahdi Ghorbani Dept. of electrical, computer and IT engineering Qazvin Branch, Islamic Azad University Qazvin,

More information

Preprocessing of Stream Data using Attribute Selection based on Survival of the Fittest

Preprocessing of Stream Data using Attribute Selection based on Survival of the Fittest Preprocessing of Stream Data using Attribute Selection based on Survival of the Fittest Bhakti V. Gavali 1, Prof. Vivekanand Reddy 2 1 Department of Computer Science and Engineering, Visvesvaraya Technological

More information

Neural Networks. CE-725: Statistical Pattern Recognition Sharif University of Technology Spring Soleymani

Neural Networks. CE-725: Statistical Pattern Recognition Sharif University of Technology Spring Soleymani Neural Networks CE-725: Statistical Pattern Recognition Sharif University of Technology Spring 2013 Soleymani Outline Biological and artificial neural networks Feed-forward neural networks Single layer

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

A New Type of ART2 Architecture and Application to Color Image Segmentation

A New Type of ART2 Architecture and Application to Color Image Segmentation A New Type of ART2 Architecture and Application to Color Image Segmentation Jiaoyan Ai 1,BrianFunt 2, and Lilong Shi 2 1 Guangxi University, China shinin@vip.163.com 2 Simon Fraser University, Canada Abstract.

More information

5 Learning hypothesis classes (16 points)

5 Learning hypothesis classes (16 points) 5 Learning hypothesis classes (16 points) Consider a classification problem with two real valued inputs. For each of the following algorithms, specify all of the separators below that it could have generated

More information

Recurrent Neural Network Models for improved (Pseudo) Random Number Generation in computer security applications

Recurrent Neural Network Models for improved (Pseudo) Random Number Generation in computer security applications Recurrent Neural Network Models for improved (Pseudo) Random Number Generation in computer security applications D.A. Karras 1 and V. Zorkadis 2 1 University of Piraeus, Dept. of Business Administration,

More information

Dynamic Clustering of Data with Modified K-Means Algorithm

Dynamic Clustering of Data with Modified K-Means Algorithm 2012 International Conference on Information and Computer Networks (ICICN 2012) IPCSIT vol. 27 (2012) (2012) IACSIT Press, Singapore Dynamic Clustering of Data with Modified K-Means Algorithm Ahamed Shafeeq

More information

A Visualization Tool to Improve the Performance of a Classifier Based on Hidden Markov Models

A Visualization Tool to Improve the Performance of a Classifier Based on Hidden Markov Models A Visualization Tool to Improve the Performance of a Classifier Based on Hidden Markov Models Gleidson Pegoretti da Silva, Masaki Nakagawa Department of Computer and Information Sciences Tokyo University

More information

Using Machine Learning to Optimize Storage Systems

Using Machine Learning to Optimize Storage Systems Using Machine Learning to Optimize Storage Systems Dr. Kiran Gunnam 1 Outline 1. Overview 2. Building Flash Models using Logistic Regression. 3. Storage Object classification 4. Storage Allocation recommendation

More information

3 Nonlinear Regression

3 Nonlinear Regression CSC 4 / CSC D / CSC C 3 Sometimes linear models are not sufficient to capture the real-world phenomena, and thus nonlinear models are necessary. In regression, all such models will have the same basic

More information

Instantaneously trained neural networks with complex inputs

Instantaneously trained neural networks with complex inputs Louisiana State University LSU Digital Commons LSU Master's Theses Graduate School 2003 Instantaneously trained neural networks with complex inputs Pritam Rajagopal Louisiana State University and Agricultural

More information

Classification Lecture Notes cse352. Neural Networks. Professor Anita Wasilewska

Classification Lecture Notes cse352. Neural Networks. Professor Anita Wasilewska Classification Lecture Notes cse352 Neural Networks Professor Anita Wasilewska Neural Networks Classification Introduction INPUT: classification data, i.e. it contains an classification (class) attribute

More information

FUZZY KERNEL K-MEDOIDS ALGORITHM FOR MULTICLASS MULTIDIMENSIONAL DATA CLASSIFICATION

FUZZY KERNEL K-MEDOIDS ALGORITHM FOR MULTICLASS MULTIDIMENSIONAL DATA CLASSIFICATION FUZZY KERNEL K-MEDOIDS ALGORITHM FOR MULTICLASS MULTIDIMENSIONAL DATA CLASSIFICATION 1 ZUHERMAN RUSTAM, 2 AINI SURI TALITA 1 Senior Lecturer, Department of Mathematics, Faculty of Mathematics and Natural

More information

Lecture 21 : A Hybrid: Deep Learning and Graphical Models

Lecture 21 : A Hybrid: Deep Learning and Graphical Models 10-708: Probabilistic Graphical Models, Spring 2018 Lecture 21 : A Hybrid: Deep Learning and Graphical Models Lecturer: Kayhan Batmanghelich Scribes: Paul Liang, Anirudha Rayasam 1 Introduction and Motivation

More information

Particle Swarm Optimization applied to Pattern Recognition

Particle Swarm Optimization applied to Pattern Recognition Particle Swarm Optimization applied to Pattern Recognition by Abel Mengistu Advisor: Dr. Raheel Ahmad CS Senior Research 2011 Manchester College May, 2011-1 - Table of Contents Introduction... - 3 - Objectives...

More information

Application of Support Vector Machine Algorithm in Spam Filtering

Application of Support Vector Machine Algorithm in  Spam Filtering Application of Support Vector Machine Algorithm in E-Mail Spam Filtering Julia Bluszcz, Daria Fitisova, Alexander Hamann, Alexey Trifonov, Advisor: Patrick Jähnichen Abstract The problem of spam classification

More information

Estimating the Information Rate of Noisy Two-Dimensional Constrained Channels

Estimating the Information Rate of Noisy Two-Dimensional Constrained Channels Estimating the Information Rate of Noisy Two-Dimensional Constrained Channels Mehdi Molkaraie and Hans-Andrea Loeliger Dept. of Information Technology and Electrical Engineering ETH Zurich, Switzerland

More information

Data Preprocessing. Why Data Preprocessing? MIT-652 Data Mining Applications. Chapter 3: Data Preprocessing. Multi-Dimensional Measure of Data Quality

Data Preprocessing. Why Data Preprocessing? MIT-652 Data Mining Applications. Chapter 3: Data Preprocessing. Multi-Dimensional Measure of Data Quality Why Data Preprocessing? Data in the real world is dirty incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data e.g., occupation = noisy: containing

More information

FUZZY LAPART SUPERVISED LEARNING THROUGH INFERENCING FOR STABLE CATEGORY RECOGNITION

FUZZY LAPART SUPERVISED LEARNING THROUGH INFERENCING FOR STABLE CATEGORY RECOGNITION FUZZY LAPART SUPERVISED LEARNING THROUGH INFERENCING FOR STABLE CATEGORY RECOGNITION Gabs00 Han, Fredric M Ham, and *Laurene V Fausett Floida Institute of Technology Depument of Electrical ancl Computer

More information

A Self-Organizing Binary System*

A Self-Organizing Binary System* 212 1959 PROCEEDINGS OF THE EASTERN JOINT COMPUTER CONFERENCE A Self-Organizing Binary System* RICHARD L. MATTSONt INTRODUCTION ANY STIMULUS to a system such as described in this paper can be coded into

More information

Improving Image Segmentation Quality Via Graph Theory

Improving Image Segmentation Quality Via Graph Theory International Symposium on Computers & Informatics (ISCI 05) Improving Image Segmentation Quality Via Graph Theory Xiangxiang Li, Songhao Zhu School of Automatic, Nanjing University of Post and Telecommunications,

More information

Rough Set Approach to Unsupervised Neural Network based Pattern Classifier

Rough Set Approach to Unsupervised Neural Network based Pattern Classifier Rough Set Approach to Unsupervised Neural based Pattern Classifier Ashwin Kothari, Member IAENG, Avinash Keskar, Shreesha Srinath, and Rakesh Chalsani Abstract Early Convergence, input feature space with

More information

CHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM

CHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM 20 CHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM 2.1 CLASSIFICATION OF CONVENTIONAL TECHNIQUES Classical optimization methods can be classified into two distinct groups:

More information

A Learning Automata based Heuristic Algorithm for Solving the Minimum Spanning Tree Problem in Stochastic Graphs

A Learning Automata based Heuristic Algorithm for Solving the Minimum Spanning Tree Problem in Stochastic Graphs بسم االله الرحمن الرحيم (الهي همه چيز را به تو مي سپارم ياريم كن) A Learning Automata based Heuristic Algorithm for Solving the Minimum Spanning Tree Problem in Stochastic Graphs Javad Akbari Torkestani

More information

CS 231A Computer Vision (Fall 2012) Problem Set 3

CS 231A Computer Vision (Fall 2012) Problem Set 3 CS 231A Computer Vision (Fall 2012) Problem Set 3 Due: Nov. 13 th, 2012 (2:15pm) 1 Probabilistic Recursion for Tracking (20 points) In this problem you will derive a method for tracking a point of interest

More information

CSE 5526: Introduction to Neural Networks Radial Basis Function (RBF) Networks

CSE 5526: Introduction to Neural Networks Radial Basis Function (RBF) Networks CSE 5526: Introduction to Neural Networks Radial Basis Function (RBF) Networks Part IV 1 Function approximation MLP is both a pattern classifier and a function approximator As a function approximator,

More information

Image Mining: frameworks and techniques

Image Mining: frameworks and techniques Image Mining: frameworks and techniques Madhumathi.k 1, Dr.Antony Selvadoss Thanamani 2 M.Phil, Department of computer science, NGM College, Pollachi, Coimbatore, India 1 HOD Department of Computer Science,

More information

Modeling with Uncertainty Interval Computations Using Fuzzy Sets

Modeling with Uncertainty Interval Computations Using Fuzzy Sets Modeling with Uncertainty Interval Computations Using Fuzzy Sets J. Honda, R. Tankelevich Department of Mathematical and Computer Sciences, Colorado School of Mines, Golden, CO, U.S.A. Abstract A new method

More information

10-701/15-781, Fall 2006, Final

10-701/15-781, Fall 2006, Final -7/-78, Fall 6, Final Dec, :pm-8:pm There are 9 questions in this exam ( pages including this cover sheet). If you need more room to work out your answer to a question, use the back of the page and clearly

More information

Adaptive Resonance Theory (ART): An Introduction

Adaptive Resonance Theory (ART): An Introduction Missouri University of Science and Technology Scholars' Mine Computer Science Faculty Research & Creative Works Computer Science 1-1-1995 Adaptive Resonance Theory (ART): An Introduction Lucien G. Heins

More information

Stability Assessment of Electric Power Systems using Growing Neural Gas and Self-Organizing Maps

Stability Assessment of Electric Power Systems using Growing Neural Gas and Self-Organizing Maps Stability Assessment of Electric Power Systems using Growing Gas and Self-Organizing Maps Christian Rehtanz, Carsten Leder University of Dortmund, 44221 Dortmund, Germany Abstract. Liberalized competitive

More information

CS6716 Pattern Recognition

CS6716 Pattern Recognition CS6716 Pattern Recognition Prototype Methods Aaron Bobick School of Interactive Computing Administrivia Problem 2b was extended to March 25. Done? PS3 will be out this real soon (tonight) due April 10.

More information

Unsupervised Learning: Clustering

Unsupervised Learning: Clustering Unsupervised Learning: Clustering Vibhav Gogate The University of Texas at Dallas Slides adapted from Carlos Guestrin, Dan Klein & Luke Zettlemoyer Machine Learning Supervised Learning Unsupervised Learning

More information

A Syntactic Methodology for Automatic Diagnosis by Analysis of Continuous Time Measurements Using Hierarchical Signal Representations

A Syntactic Methodology for Automatic Diagnosis by Analysis of Continuous Time Measurements Using Hierarchical Signal Representations IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART B: CYBERNETICS, VOL. 33, NO. 6, DECEMBER 2003 951 A Syntactic Methodology for Automatic Diagnosis by Analysis of Continuous Time Measurements Using

More information

VARIANCE REDUCTION TECHNIQUES IN MONTE CARLO SIMULATIONS K. Ming Leung

VARIANCE REDUCTION TECHNIQUES IN MONTE CARLO SIMULATIONS K. Ming Leung POLYTECHNIC UNIVERSITY Department of Computer and Information Science VARIANCE REDUCTION TECHNIQUES IN MONTE CARLO SIMULATIONS K. Ming Leung Abstract: Techniques for reducing the variance in Monte Carlo

More information

A Survey on Postive and Unlabelled Learning

A Survey on Postive and Unlabelled Learning A Survey on Postive and Unlabelled Learning Gang Li Computer & Information Sciences University of Delaware ligang@udel.edu Abstract In this paper we survey the main algorithms used in positive and unlabeled

More information

Two New Computational Methods to Evaluate Limit Cycles in Fixed-Point Digital Filters

Two New Computational Methods to Evaluate Limit Cycles in Fixed-Point Digital Filters Two New Computational Methods to Evaluate Limit Cycles in Fixed-Point Digital Filters M. Utrilla-Manso; F. López-Ferreras; H.Gómez-Moreno;P. Martín-Martín;P.L. López- Espí Department of Teoría de la Señal

More information

Alternatives to Direct Supervision

Alternatives to Direct Supervision CreativeAI: Deep Learning for Graphics Alternatives to Direct Supervision Niloy Mitra Iasonas Kokkinos Paul Guerrero Nils Thuerey Tobias Ritschel UCL UCL UCL TUM UCL Timetable Theory and Basics State of

More information

A FUZZY LOGIC BASED METHOD FOR EDGE DETECTION

A FUZZY LOGIC BASED METHOD FOR EDGE DETECTION Bulletin of the Transilvania University of Braşov Series I: Engineering Sciences Vol. 4 (53) No. 1-2011 A FUZZY LOGIC BASED METHOD FOR EDGE DETECTION C. SULIMAN 1 C. BOLDIŞOR 1 R. BĂZĂVAN 2 F. MOLDOVEANU

More information

Neuro-Dynamic Programming An Overview

Neuro-Dynamic Programming An Overview 1 Neuro-Dynamic Programming An Overview Dimitri Bertsekas Dept. of Electrical Engineering and Computer Science M.I.T. May 2006 2 BELLMAN AND THE DUAL CURSES Dynamic Programming (DP) is very broadly applicable,

More information

Applied Soft Computing

Applied Soft Computing Applied Soft Computing 11 (2011) 4064 4077 Contents lists available at ScienceDirect Applied Soft Computing journal homepage: www.elsevier.com/locate/asoc Learning automata-based algorithms for solving

More information

Ensembles of Neural Networks for Forecasting of Time Series of Spacecraft Telemetry

Ensembles of Neural Networks for Forecasting of Time Series of Spacecraft Telemetry ISSN 1060-992X, Optical Memory and Neural Networks, 2017, Vol. 26, No. 1, pp. 47 54. Allerton Press, Inc., 2017. Ensembles of Neural Networks for Forecasting of Time Series of Spacecraft Telemetry E. E.

More information

CS6375: Machine Learning Gautam Kunapuli. Mid-Term Review

CS6375: Machine Learning Gautam Kunapuli. Mid-Term Review Gautam Kunapuli Machine Learning Data is identically and independently distributed Goal is to learn a function that maps to Data is generated using an unknown function Learn a hypothesis that minimizes

More information

Optimising OSPF Routing for Link Failure Scenarios

Optimising OSPF Routing for Link Failure Scenarios Optimising OSPF Routing for Link Failure Scenarios Sadiq M. Sait, Mohammed H. Sqalli, Syed Asadullah Computer Engineering Department King Fahd University of Petroleum & Minerals Dhahran 31261, Saudi Arabia

More information

Motivation. Technical Background

Motivation. Technical Background Handling Outliers through Agglomerative Clustering with Full Model Maximum Likelihood Estimation, with Application to Flow Cytometry Mark Gordon, Justin Li, Kevin Matzen, Bryce Wiedenbeck Motivation Clustering

More information

Sketchable Histograms of Oriented Gradients for Object Detection

Sketchable Histograms of Oriented Gradients for Object Detection Sketchable Histograms of Oriented Gradients for Object Detection No Author Given No Institute Given Abstract. In this paper we investigate a new representation approach for visual object recognition. The

More information

The Design of Pole Placement With Integral Controllers for Gryphon Robot Using Three Evolutionary Algorithms

The Design of Pole Placement With Integral Controllers for Gryphon Robot Using Three Evolutionary Algorithms The Design of Pole Placement With Integral Controllers for Gryphon Robot Using Three Evolutionary Algorithms Somayyeh Nalan-Ahmadabad and Sehraneh Ghaemi Abstract In this paper, pole placement with integral

More information

String Vector based KNN for Text Categorization

String Vector based KNN for Text Categorization 458 String Vector based KNN for Text Categorization Taeho Jo Department of Computer and Information Communication Engineering Hongik University Sejong, South Korea tjo018@hongik.ac.kr Abstract This research

More information

A SURVEY ON CLUSTERING ALGORITHMS Ms. Kirti M. Patil 1 and Dr. Jagdish W. Bakal 2

A SURVEY ON CLUSTERING ALGORITHMS Ms. Kirti M. Patil 1 and Dr. Jagdish W. Bakal 2 Ms. Kirti M. Patil 1 and Dr. Jagdish W. Bakal 2 1 P.G. Scholar, Department of Computer Engineering, ARMIET, Mumbai University, India 2 Principal of, S.S.J.C.O.E, Mumbai University, India ABSTRACT Now a

More information

Processing Missing Values with Self-Organized Maps

Processing Missing Values with Self-Organized Maps Processing Missing Values with Self-Organized Maps David Sommer, Tobias Grimm, Martin Golz University of Applied Sciences Schmalkalden Department of Computer Science D-98574 Schmalkalden, Germany Phone:

More information

On the Complexity of the Policy Improvement Algorithm. for Markov Decision Processes

On the Complexity of the Policy Improvement Algorithm. for Markov Decision Processes On the Complexity of the Policy Improvement Algorithm for Markov Decision Processes Mary Melekopoglou Anne Condon Computer Sciences Department University of Wisconsin - Madison 0 West Dayton Street Madison,

More information

CHAPTER 6 HYBRID AI BASED IMAGE CLASSIFICATION TECHNIQUES

CHAPTER 6 HYBRID AI BASED IMAGE CLASSIFICATION TECHNIQUES CHAPTER 6 HYBRID AI BASED IMAGE CLASSIFICATION TECHNIQUES 6.1 INTRODUCTION The exploration of applications of ANN for image classification has yielded satisfactory results. But, the scope for improving

More information

Design of an Automated Data Entry System for Handwritten Forms

Design of an Automated Data Entry System for Handwritten Forms Design of an Automated Data Entry System for Handwritten Forms Lim Woan Ning, Marzuki Khalid* and Rubiyah Yusof Centre for Artificial Intelligence and Robotics (CAIRO) Faculty of Electrical Engineering,

More information

Clustering CS 550: Machine Learning

Clustering CS 550: Machine Learning Clustering CS 550: Machine Learning This slide set mainly uses the slides given in the following links: http://www-users.cs.umn.edu/~kumar/dmbook/ch8.pdf http://www-users.cs.umn.edu/~kumar/dmbook/dmslides/chap8_basic_cluster_analysis.pdf

More information

11/14/2010 Intelligent Systems and Soft Computing 1

11/14/2010 Intelligent Systems and Soft Computing 1 Lecture 7 Artificial neural networks: Supervised learning Introduction, or how the brain works The neuron as a simple computing element The perceptron Multilayer neural networks Accelerated learning in

More information

THE HALF-EDGE DATA STRUCTURE MODELING AND ANIMATION

THE HALF-EDGE DATA STRUCTURE MODELING AND ANIMATION THE HALF-EDGE DATA STRUCTURE MODELING AND ANIMATION Dan Englesson danen344@student.liu.se Sunday 12th April, 2011 Abstract In this lab assignment which was done in the course TNM079, Modeling and animation,

More information

Clustering with Reinforcement Learning

Clustering with Reinforcement Learning Clustering with Reinforcement Learning Wesam Barbakh and Colin Fyfe, The University of Paisley, Scotland. email:wesam.barbakh,colin.fyfe@paisley.ac.uk Abstract We show how a previously derived method of

More information