Neural Optimization of Evolutionary Algorithm Parameters

Hiral Patel

December 6, 2003

Abstract

This paper presents a novel idea: using an unsupervised neural network to optimize the on-line parameters of an Evolutionary Algorithm, with specific attention paid to Genetic Algorithms. The results show a marked improvement in the output of the Neural Optimized Genetic Algorithm. Further research in this field may prove to be fruitful.

Keywords: Evolutionary Algorithms, Genetic Algorithms, Unsupervised Neural Network, Hebbian Learning, Parameter Optimization

Contents

.1 Introduction
  .1.1 What are Evolutionary Algorithms?
  .1.2 What are Neural Networks?
  .1.3 Hebbian Learning
  .1.4 Why go through all the trouble?
.2 Experimental Model
  .2.1 Binary Knapsack Problem
  .2.2 GA Characteristics
  .2.3 Neural Network Architecture
  .2.4 Unsupervised Learning Process
.3 Results and Conclusion

.1 Introduction

There are many applications where Evolutionary Algorithms (EAs) are used in conjunction with Neural Networks. Some of these involve evolving a neural network architecture, training a neural network, or even optimizing the parameters of a neural network. Yet there is little work going the other way around: there are virtually no applications of neural networks being used to optimize EAs. In this paper we look at the novel idea of using a neural network to optimize an EA. There are, however, other methods of optimizing Evolutionary Algorithms. Primarily, these methods focus on optimizing mutation, either by using different distributions of random numbers or by applying machine learning to the same end. Little work has been done on optimizing the parameters of an EA. In this paper, we present a method of optimizing a subset of EAs called Genetic Algorithms. The purpose of choosing them will become apparent once the experimental model has been presented.

.1.1 What are Evolutionary Algorithms?

Evolutionary Algorithms are a set of recursive problem-solving computational procedures that utilize Darwin's evolutionary cycle (Figure 1). Given the idea of evolution, it can be surmised that a set of individuals within a population

will reach perfection over infinite time given persistent environmental conditions. In using this approach, we must define the environmental conditions, the size of a population, what constitutes an individual, and a method of selection, reproduction, and competition. There are many methods in Evolutionary Algorithms which can be used to define them, but the principal forms are Evolutionary Programming, Evolutionary Strategies, Genetic Programming, and Genetic Algorithms.

Figure 1: Evolutionary Cycle

Evolutionary Programming was first developed primarily to evolve finite-state machines (Figure 2). A finite-state machine is a mechanism for defining the specific behavior of a process given a finite set of symbols and internal states. The individuals of the population are picked from a finite alphabet derived from the symbols and states. The population size can be arbitrary. The primary method of selection can be performed by several general techniques, but the best individuals of a population are retained and the rest discarded. The competition is performed with a fitness function which is based on the behavioral output of each individual. The individual closest to the desired output has the highest fitness. The process of reproduction only applies variation (mutation) to each individual in an effort to form a better individual.

Figure 2: Example finite-state machine which detects a sequence of three 1's. The transitions occur with the given input/output values.

Evolutionary Strategies (ES) are similar to Evolutionary Programming in that they also use variation of the search space to evolve a better individual. ES also use the same general techniques for population size, selection, and competition. The difference lies in the search space: ES use real-valued vectors as the individuals. The vectors are varied using random numbers that have a Gaussian (Normal) distribution.

Genetic Algorithms are in a class of their own. They use fixed-length bit strings to represent each individual. The processes of selection and competition use the same general techniques as the other EAs. The process of reproduction uses crossover and mutation. Crossover involves selecting a father and a mother and then selecting a crossover point, which determines where the operation occurs. Once the point is selected, everything to the right of it is taken from the mother and everything to the left from the father. Mutation is fairly simple: the current bit value is flipped if mutation is to occur at a given bit position (Figure 3). A short code sketch of these two operators is given below.

Figure 3: Example crossover and mutation with individuals of a Genetic Algorithm, adapted from [BFM]

Genetic Programming can be thought of as a subset of GAs. Instead of bit strings, parse trees are used to represent each individual. The individuals are actual programs which are evolved using crossover and mutation. Crossover occurs by choosing a subtree from the mother while the rest of the individual is taken from the father (Figure 4). Mutation is fairly similar: a subtree is chosen and is replaced by a randomly generated subtree. The processes of selection and competition use the same general techniques as the other EAs.

Figure 4: Example crossover and mutation with individuals of a GP, adapted from [BFM]
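To make the bit-string operators just described concrete, the following is a minimal sketch in Python of one-point crossover and per-bit mutation on fixed-length bit strings; the individual length and the mutation probability used here are illustrative values, not settings taken from this paper.

import random

def one_point_crossover(father, mother):
    # Pick a crossover point; take everything to the left of it from the
    # father and everything from the point onward from the mother.
    point = random.randrange(1, len(father))
    return father[:point] + mother[point:]

def mutate(individual, bit_mutation_prob=0.01):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < bit_mutation_prob else bit
            for bit in individual]

# Tiny usage example with 8-bit individuals.
father = [1, 0, 1, 1, 0, 0, 1, 0]
mother = [0, 1, 0, 0, 1, 1, 0, 1]
child = mutate(one_point_crossover(father, mother))
print(child)

Genetic Programming applies the same two ideas to subtrees of a parse tree rather than to segments of a bit string.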

All forms of EAs described above are based on modern evolutionary theory, and they all share the same general procedure. The initial population can be randomly selected or initialized based on available knowledge. Then individuals are evaluated based on the fitness function. This function determines the worth of the individual in the environment, and it is generally dependent on the problem being solved. Next, a group of individuals from the population is selected for reproduction based on certain criteria applied to their evaluation results. In this phase, recombination (exchange of information between two parents via crossover) and mutation occur, depending on which specialized form of the EA is utilized. Once the children are produced, they are also evaluated based on the same fitness function as the parents. Finally, competition ensues to select the surviving individuals that will continue to the next evolutionary cycle. The criteria for the competition can vary depending on the selected technique, but generally the individuals with the best evaluation results are the survivors. This cycle continues until a predetermined termination condition is reached. EAs are primarily used to solve problems which involve non-differentiable, multimodal, noisy, discontinuous, and other unusual surfaces.
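The general procedure described above maps onto a short loop. The following Python sketch shows that skeleton; the fitness function, reproduction operator, population size, and generation count are illustrative stand-ins, and any of the specialized EA forms above plugs its own representation and operators into the same cycle.

import random

def evolve(init, fitness, reproduce, pop_size=20, generations=50):
    # Generic evolutionary cycle: initialize, evaluate, select, reproduce,
    # and let the fittest individuals survive into the next generation.
    population = [init() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        children = [reproduce(random.choice(parents), random.choice(parents))
                    for _ in range(pop_size)]
        # Competition: the best of parents and children survive.
        population = sorted(parents + children, key=fitness, reverse=True)[:pop_size]
    return max(population, key=fitness)

# Tiny usage: maximize the number of 1s in an 8-bit string.
best = evolve(init=lambda: [random.randint(0, 1) for _ in range(8)],
              fitness=sum,
              reproduce=lambda a, b: [random.choice(bits) for bits in zip(a, b)])
print(best)

A termination condition based on a fitness threshold or on stagnation could replace the fixed generation count.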

.1.2 What are Neural Networks?

Neurocomputing is concerned with information processing, and Neural Networks are used to perform this task using a learning process. They are loosely based on what is known about how the neurons in a human brain function; please refer to a biology text for further details. The basis for most of the work in the field of neurocomputing has been derived from the original McCulloch-Pitts neuron (Figure 5). This neuron is a simple two-state device (on or off). The inputs to the neuron are weighted and summed. Let

u = x_1 w_1 + x_2 w_2 + x_3 w_3 + x_4 w_4 + x_5 w_5 + ... + x_n w_n

The result is then fed to the activation element, which in this case is a simple function y:

y = { 1 for u ≥ θ
    { 0 for u < θ

The θ value indicates the activation threshold.

Figure 5: General architecture of a McCulloch-Pitts neuron

Using this two-state device, simple logic functions can be implemented. There is no training involved; just changing the weights and the activation threshold value can yield a simple OR function (Figure 6). As you can see, this simple neuron can have many applications. Of course, most neural networks are based on slightly more complex neuron architectures. One of these is the simple perceptron. A perceptron has to be trained in order for it to be used for information processing. For example, if the simple OR function is to be implemented using a single neuron, it must be trained so the weight of each input can be varied until the neuron produces the desired output. In this case we have four training patterns, given a two-variable OR function. So we have our input vectors (one per column) as follows:

X = [ 0 0 1 1 ]
    [ 0 1 0 1 ]

Given a random selection of the weights, our weights are as follows:

W = [ 0.5 0.5 ]

Figure 6: General architecture of a McCulloch-Pitts neuron

Now we have the desired values for the four input vectors as follows:

D = [ 0 1 1 1 ]

The learning rule of the perceptron is applied such that the error is minimized every training epoch. Each epoch, every input combination in the training set is presented to the network and the output is compared to the desired value. The error value is multiplied by the input vector and the learning rate, and the weights are updated as follows:

w(k + 1) = w(k) + µ e(k) x_j(k)

Given:
  k  the discrete-time index
  j  the index of the training vector
  µ  the learning rate

The error function e(k) is calculated based on the activation function y as follows:

e(k) = d_j(k) − y( Σ_{j=1}^{n} w_j(k) x_j(k) )

There are many different activation functions that can be used. Some of them are linear (Pure Linear, Hard Limiter, Symmetric Hard Limiter) and others are non-linear (Binary Sigmoid and Hyperbolic Tangent Sigmoid). For the purpose of this introduction, only the Hard Limiter function will be described. The function has binary output (0 or 1). Given the summed output of the multiplication of the input and the weight matrix, the output is 1 if the summation is greater than or equal to 1, and 0 otherwise:

y = { 1 for u ≥ 1
    { 0 for u < 1

Using the above example, here are the weight updates for the first epoch. Let k = 1, j = 1, µ = 1, and w(1) = [ 0.5 0.5 ].

e(1) = 0 − y( [ 0.5 0.5 ] · [ 0 0 ]ᵀ ) = 0

Since the error is zero, there is no need to update the weights, so w(2) = w(1). On to the next input vector, k = 2 and j = 2:

e(2) = 1 − y( [ 0.5 0.5 ] · [ 0 1 ]ᵀ ) = 1

w(3) = w(2) + µ e(2) x_2(2) = [ 0.5 0.5 ] + [ 0 1 ] = [ 0.5 1.5 ]

The error was not zero, so the weights were updated. Here are the updates for the third and fourth inputs:

e(3) = 1 − y( [ 0.5 1.5 ] · [ 1 0 ]ᵀ ) = 1

w(4) = w(3) + µ e(3) x_3(3) = [ 0.5 1.5 ] + [ 1 0 ] = [ 1.5 1.5 ]

e(4) = 1 − y( [ 1.5 1.5 ] · [ 1 1 ]ᵀ ) = 0

Since the error is zero again, there is no need to update the weights for the last input. This completes the training for the first epoch. As can be seen, if we were to calculate the error for the four input vectors now, it would be zero, which signifies that the network has been trained. In general, complex networks never reach an error of exactly zero; instead, a predefined error threshold value or a maximum epoch count terminates the training process.
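As a cross-check of the worked example, here is a minimal sketch in Python of the perceptron learning rule applied to the two-input OR function; the hard-limiter threshold of 1, the learning rate of 1, and the initial weights of 0.5 follow the text above, while the loop structure and epoch limit are illustrative.

# Perceptron trained on the two-input OR function with a hard limiter,
# following the update rule w(k+1) = w(k) + mu * e(k) * x_j(k).
THETA = 1.0   # hard-limiter threshold: output 1 if u >= 1, else 0
MU = 1.0      # learning rate

def hard_limiter(u):
    return 1 if u >= THETA else 0

X = [(0, 0), (0, 1), (1, 0), (1, 1)]   # training patterns
D = [0, 1, 1, 1]                       # desired OR outputs

w = [0.5, 0.5]                         # initial weights
for epoch in range(10):                # a few epochs are plenty here
    total_error = 0
    for x, d in zip(X, D):
        u = w[0] * x[0] + w[1] * x[1]
        e = d - hard_limiter(u)        # error for this training vector
        w = [w[i] + MU * e * x[i] for i in range(2)]
        total_error += abs(e)
    if total_error == 0:               # stop once an epoch is error-free
        break

print(w)   # ends at weights that realize OR, e.g. [1.5, 1.5]

After the first epoch the weights already reproduce the hand-computed values above, and the following epoch confirms that the error is zero for all four patterns.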

.1.3 Hebbian Learning

The above was an example of a supervised learning process: when the desired output is known, the network can be trained to produce that output. When the desired output is not known, the learning process is said to be unsupervised. In such a process, Hebbian learning can be used to train the network (Figure 7). The major difference with Hebbian learning is the use of a learning signal, which can be a combination of any inputs, outputs, or other data. With this signal, the weights can be updated such that good inputs are learned and bad inputs are forgotten. The forgetting signal can also be used to stave off exponential growth of the weights as the learning process continues. The weight update equation is as follows:

Figure 7: Example neuron with Hebbian learning rule, adapted from [HK1]

w(k + 1) = w(k) + µ [ l(k) x_j(k) − α w(k) ]

Given:
  k     the discrete-time index
  j     the index of the training vector
  µ     the learning rate
  α     the forgetting factor
  l(k)  the learning signal

.1.4 Why go through all the trouble?

Well, this may seem like an awful lot of work just to find a good set of parameters for something which is a stochastic process anyway. Some may argue this is a fruitless endeavor, but the primary goal is to determine whether adjusting the parameters during subsequent generations of the evolutionary cycle produces better results. So how can this be done? With the use of a neural network, it may be possible to find a pattern within a set of good subsequent parameter values such that the EA can avoid local minima and premature convergence while still increasing the overall convergence rate.
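Before turning to the experimental model, here is a minimal sketch in Python of the Hebbian update rule given above; the learning rate, forgetting factor, and the stand-in learning signal are illustrative placeholders rather than values or signals used later in this paper.

import numpy as np

# Hebbian update with a forgetting factor:
#   w(k+1) = w(k) + mu * (l(k) * x(k) - alpha * w(k))
MU = 0.1      # learning rate (illustrative)
ALPHA = 0.05  # forgetting factor; keeps the weights from growing without bound

def learning_signal(x, y):
    # Placeholder learning signal: in practice this is any combination of
    # inputs, outputs, or other data that marks an input as good or bad.
    return y if y > 0 else 0.0

w = np.array([0.5, 0.5, 0.5])
rng = np.random.default_rng(0)
for k in range(100):
    x = rng.random(3)                  # one input presentation
    y = float(np.dot(w, x))            # linear output of the neuron
    l = learning_signal(x, y)
    w = w + MU * (l * x - ALPHA * w)   # learn good inputs, slowly forget

print(w)

Without the −α w(k) term, the weights of frequently reinforced inputs would grow without bound, which is exactly the exponential growth the forgetting factor is meant to stave off.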

.2 Experimental Model

The experimental model was based on a Genetic Algorithm designed to solve the binary knapsack problem; the problem is described below. The goal of the GA was to find a set of items which fit in the knapsack while maximizing the profit. The GA was chosen instead of another form of EA because it has many parameters that can be optimized.

.2.1 Binary Knapsack Problem

Definition: Given a knapsack of capacity C and a set of N items with corresponding profits p_i and volumes v_i, where C, N, p_i, and v_i are positive floating point numbers, associate a binary variable s_i with every item indicating selection (1 means the item is selected to be in the knapsack, 0 means it is not). The knapsack problem is to find those values of s_i which maximize the profit ( Σ_{i=1}^{N} s_i p_i ) within the volume constraint ( Σ_{i=1}^{N} s_i v_i ≤ C ).

.2.2 GA Characteristics

The population size and reproduction size can be varied, and they need to be, so that different datasets can be used. The selection process was divided into three levels. In the first level, the two individuals with the best fitness mated, then the next two with the best fitness, and so on. This produced N/2 children, where N is the population size. The rest of the children were produced using an elitist strategy: the individual with the highest fitness mated the most. The process of competition was simple: individuals with the highest fitness survived to the next generation. The reproduction process involved crossover and mutation. The idea of bit cells was used to add further parameters to the GA. A bit cell is defined by the cell divider. For example, given that the number of items is 10, each individual of the GA has 10 genes. Each gene represents an item that can be put in the knapsack, so each gene has a profit and a volume value associated with it. If the cell divider is set to 2, the number of genes is divided by the cell divider value to yield a bit cell size of 5, meaning there are two bit cells with a size of 5 each. Thus, the constraint put on the cell divider was that it must divide the number of genes evenly. The mutation and crossover operations operated on each individual, which in turn resulted in operations on each cell; therefore, a cell probability was associated with mutation and crossover. This gave four extra parameters to optimize. The full list of parameters can be found below. The population size and reproduction size were kept constant so as to minimize the overhead associated with the resize operation.

1. Crossover Probability (CP)
2. Cell Crossover Probability (CCP)
3. Crossover Cell Divider (CCD)
4. Mutation Probability (MP)
5. Cell Mutation Probability (CMP)
6. Bit Mutation Probability (BMP)
7. Mutation Cell Divider (MCD)

The fitness function was simple: it returned the total profit value of the given configuration of the knapsack (each individual of the GA is a possible configuration of the knapsack). If the volume of the knapsack was more than the specified constraint of the dataset, the fitness function returned the negated profit value. This pruned out any individual which did not meet the volume constraint.
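The fitness function just described is small enough to show in full. The following Python sketch mirrors it (total profit, negated when the volume constraint is violated); the item data below is an illustrative toy instance rather than one of the datasets used in the experiments.

def knapsack_fitness(individual, profits, volumes, capacity):
    # individual is a list of 0/1 selection bits, one gene per item.
    profit = sum(p for s, p in zip(individual, profits) if s)
    volume = sum(v for s, v in zip(individual, volumes) if s)
    # Over-volume configurations get the negated profit, pruning them out.
    return profit if volume <= capacity else -profit

# Toy instance (not one of the paper's datasets).
profits = [10.0, 7.5, 3.0, 12.0]
volumes = [4.0, 2.5, 1.0, 6.0]
capacity = 8.0
print(knapsack_fitness([1, 1, 1, 0], profits, volumes, capacity))   # 20.5
print(knapsack_fitness([1, 0, 0, 1], profits, volumes, capacity))   # -22.0, volume 10 > 8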

.2.3 Neural Network Architecture

The neural network architecture was defined as a 7-neuron, fully-connected, single-layer network with a pure linear activation function (Figure 8). Each neuron had 7 inputs: the parameters listed above. The initial condition, before the GA began its evolutionary cycle, was defined such that the output of the neural network was equal to its input. The input was the set of default parameter values found in the GA configuration. The initial training of the network was therefore supervised, since the network was trained to output the same values as its input.

Figure 8: Neural Network Architecture

.2.4 Unsupervised Learning Process

The learning process was divided into three states: steady-state (STATE 1), local minima excitation (STATE 2), and single bit search (STATE 3). The names are self-explanatory. The network initially starts out in STATE 1 and stays in this state until a better input value is realized. On its own, that will never happen: the output is always the same as the current input, and since the next generation's parameter values are this generation's outputs, they will never change. This presents us with a problem. The way around it was to measure the relative change in slope of the best fitness value over subsequent generations. When the slope value reached a predefined threshold value S_min, which generally signaled an approaching local minimum, the inputs were perturbed. The perturbation was nothing but a multiplication by a randomly selected value displaced positively or negatively from 1. For example, the random selection would be between 1.05 and 0.95 if the displacement percentage (DP) was chosen to be 0.05. So the inputs would change in value up or down by the displacement percentage, depending on the defined displacement direction probability (PD). This forces the network either to learn the new parameter values or to stay away from them. The above also staves off zeroing of the weights by the forgetting factor, by allowing the probability of increasing the input value to be slightly higher than that of lowering it. For example, if PD was set to 0.4, the probability of multiplying by 1.05 is 60% versus 40% for 0.95. The learning rule used to decide whether or not a new parameter set was to be learned was determined by calculating the change in the population's variance,

mean, and best fitness value. The following rule was used to determine whether an input was to be learned; the change was calculated between the input values from the last presentation and the input values from the current presentation to the neural network.

if there is a change in variance:
    λ = 0.1
    if the change in mean is negative:
        if there is a change in best fitness:
            learn the input values (λ = 1)
        else:
            move away from the input values (λ = −0.2)
    else if the change in mean is positive:
        learn the input values (λ = 1)
else:
    no learning takes place (λ = 0)
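A minimal Python sketch of this decision rule follows; the λ values mirror the listing above (including the negative value used to move away from an input), while the function name and the tolerance used to decide whether a quantity has "changed" are illustrative assumptions.

def select_lambda(d_variance, d_mean, d_best_fitness, eps=1e-9):
    # Choose the learning-signal scale from the changes in population
    # variance, mean, and best fitness between two presentations.
    if abs(d_variance) > eps:          # the variance changed
        lam = 0.1                      # default for this branch
        if d_mean < 0:
            if abs(d_best_fitness) > eps:
                lam = 1.0              # learn the input values
            else:
                lam = -0.2             # move away from the input values
        elif d_mean > 0:
            lam = 1.0                  # learn the input values
        return lam
    return 0.0                         # no learning takes place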

The learning signal was then applied according to the above rule. The values of the learning signal were defined using the following guidelines:

1. Crossover Probability should be learned more slowly than Mutation Probability
2. Cell Crossover Probability should be learned more slowly than Cell Mutation Probability
3. Crossover Cell Divider should be learned as the average number of bit changes during crossover strays from a predefined nominal value
4. Mutation Probability should be learned gradually, but more slowly than Cell Mutation Probability
5. Cell Mutation Probability should be learned gradually, but faster than Bit Mutation Probability
6. Bit Mutation Probability should be learned the slowest
7. Mutation Cell Divider should be learned depending on the change in variance, mean, and best fitness

Here are the learning signals and forgetting factors used for STATE 1, with each neuron denoted by a subscript. The neuron number directly corresponds to the parameter in the numbered list above.

l_1(k) = λ · 0.1
l_2(k) = λ · 0.1 · exp( (CP · CCP · L) / ((CP · CCP · L) − (AVGCBIT · CCD)) )
l_3(k) = λ · 0.15 · exp( (CP · CCP · L) / ((CP · CCP · L) − (AVGCBIT · CCD)) )
l_4(k) = λ · 0.5
l_5(k) = λ · 0.5
l_6(k) = λ · 0.5
l_7(k) = λ · 0.15 · exp( ∆MAXFITNESS + ∆MEAN + ∆VARIANCE )

f_1(k) = α · 0.1
f_2(k) = α · 0.1 · exp( (CP · CCP · L) / ((CP · CCP · L) − (AVGCBIT · CCD)) )
f_3(k) = α · 0.15 · exp( (CP · CCP · L) / ((CP · CCP · L) − (AVGCBIT · CCD)) )
f_4(k) = α · 0.5
f_5(k) = α · 0.5
f_6(k) = α · 0.5
f_7(k) = α · 0.15 · exp( ∆MAXFITNESS + ∆MEAN + ∆VARIANCE )

Given:
  L            the number of genes per individual
  AVGCBIT      the average number of bit changes during the crossover operation
  α = 0.5
  ∆MAXFITNESS  the change in best fitness between subsequent presentations
  ∆MEAN        the change in the population mean between subsequent presentations
  ∆VARIANCE    the change in the population variance between subsequent presentations

The weights in this state are updated as follows:

w(k + 1) = w(k) + µ [ l(k)(x_j(k) − y_j(k−1)) − α(x_j(k) − x_j(k−1)) ]

As can be seen from the above equation, the learning signal consists of the learning coefficients multiplied by the difference between the current input and the output from the previous input's presentation to the network. The forgetting factor helps to minimize large shifts between the new and previous input values by subtracting their difference.

The weight update is slightly different for STATE 2 than for STATE 1. This state is named local minima excitation because there is no learning signal; only the forgetting factor is used to update the weights. Hence, this will lower all the probabilities until the condition for exiting this state has been met. Here, the forgetting factors are as follows:

f_1(k) = µ · 0.1
f_2(k) = µ · 0.1 · exp( (CP · CCP · L) / ((CP · CCP · L) − (AVGCBIT · CCD)) )
f_3(k) = µ · 0.15 · exp( (CP · CCP · L) / ((CP · CCP · L) − (AVGCBIT · CCD)) )
f_4(k) = µ · 0.5
f_5(k) = µ · 0.5
f_6(k) = µ · 0.5
f_7(k) = µ · 0.15 · exp( ∆MAXFITNESS + ∆MEAN + ∆VARIANCE )

µ = 0.1

STATE 2 weight update:

w(k + 1) = w(k) − [ (1 − µ) w(k) ]

The last state, STATE 3, has no learning at all. It simply sets the parameters such that only single bit changes occur; by the time STATE 3 is reached, the only way to get better results is through single bit search. As can be seen from the state diagram in Figure 9, this state is reached when the parameter values have hit their lower bounds or when the change in the best fitness value is less than a threshold value (δ_min) over a predetermined number of generations (δ_gen). The idea behind the state machine is to accelerate convergence until it becomes necessary to use single bit changes. This is accomplished by STATE 1 and STATE 2 along with the perturbations of the inputs. Specifically, if a local minimum is hit while in STATE 1, the machine goes to STATE 2. This is where the weight matrix is manipulated until there is a change in the best fitness. At this point, the machine goes back to STATE 1 and learns the new parameters which allowed it to exit STATE 2.

Figure 9: State Diagram

Thus, these transitions allow the network to adapt during subsequent generations. The perturbations, on the other hand, help to avoid local minima by trying to predict when a local minimum is approaching. The threshold value (S_min) used to determine whether the inputs are to be perturbed is just a percentage indicating the relative change in slope with respect to the last change in slope. For all the test runs, the inputs were perturbed if the change in slope was less than 4%. After each input is presented to the network, the learning process updates the weights as necessary, and the output of the network is taken as the parameter values for the next generation of the GA:

y(k) = w(k + 1) · x(k)

The above equation generally produces values within the limits we specified, but the output y(k) still went through boundary checks to ensure the parameter values were valid. This also required that bounds be specified; in this case, they were simply set to the highest and lowest possible probability values. Note that y(k) is not a single value; it is a vector with seven values, one for each of the corresponding parameters listed above.

There is one other consideration that must be taken into account. The number of times the neural network is queried can also have a dramatic impact on its ability to produce good output. When the input was presented to the neural network every generation of the GA, the result was premature convergence of the neural network's output. Thus, a parameter which controls how often the neural network is queried was specified (N_gen).
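Putting the pieces of the unsupervised learning process together, here is a minimal Python sketch of how the network's output could drive the GA parameters from one generation to the next using the STATE 1 update; the coefficient values, the probability bounds, and the way the learning signal is applied per neuron are illustrative assumptions rather than the exact settings used in the experiments.

import numpy as np

MU, ALPHA = 0.1, 0.05      # learning rate and forgetting factor (illustrative)
LOW, HIGH = 0.001, 1.0     # bounds used for the boundary check on y(k)

# 7x7 weight matrix initialized to the identity so that the network
# initially just echoes the seven default GA parameters (y = x).
W = np.eye(7)

def state1_update(W, x, x_prev, y_prev, l):
    # STATE 1 update applied per output neuron i with learning signal l[i]:
    #   w(k+1) = w(k) + mu * [ l(k)*(x(k) - y(k-1)) - alpha*(x(k) - x(k-1)) ]
    for i in range(7):
        W[i] += MU * (l[i] * (x - y_prev) - ALPHA * (x - x_prev))
    return W

def next_parameters(W, x):
    # Network output, clipped so every parameter stays inside its bounds
    # before being handed to the next generation of the GA.
    return np.clip(W @ x, LOW, HIGH)

# One illustrative query: x holds the seven current GA parameters.
x_prev = np.full(7, 0.5)
y_prev = x_prev.copy()
x = np.full(7, 0.5)
l = np.ones(7)                        # learning signals chosen by the rule above
W = state1_update(W, x, x_prev, y_prev, l)
print(next_parameters(W, x))

In the actual runs the network is only queried every N_gen generations, and the perturbation and state machine described above decide when the weights are excited or frozen; this sketch only covers the steady-state step.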

.3 Results and Conclusion

The test runs were conducted with two datasets, one with 1 items and the other with 1 items. Each dataset had three parameter files, and each file was run with and without the neural optimizations. The neural network parameters used for each dataset are given in Tables 1 and 2. The three parameter files were configured such that the first had a high probability of crossover and mutation, the third simulated single bit changes, and the second was in between the first and third, attempting to have moderate crossover and mutation but with higher crossover than mutation.

The results for dataset one can be seen in Figures 10, 13, and 16. They were run for 1 generations, and the figures show the best fitness achieved at each generation. As can be seen, the neural optimized runs converge much faster than the unoptimized runs. The figure that best shows the neural optimization's ability to avoid local minima and converge faster is Figure 10. As a local minimum was being approached close to generation 7, the crossover probability (CP · CCP) was gradually lowered and the mutation probability (MP · CMP) was increased slightly. This was the perturbation which forced the network to adapt, resulting in a gradual decrease in crossover and mutation. Note that by generation 1 the slope of the fitness function had increased; hence, a local minimum was avoided. This effect can also be seen in the probability curves of the other two test runs (Figures 14, 15, 17, and 18), but it is not as pronounced as in the best fitness curve of the first test.

The results for the second dataset (Figures 19, 22, and 25) also show a significant increase in the rate of convergence compared to the unoptimized results. The same three tests were run, except only for 25 generations due to computational limitations. Again, the best results were obtained in the first test, where the initial mutation and crossover values were set to their highest. The neural network seems to work best when starting out with high mutation and crossover and then gradually decreasing them to find a set of values which promote the most growth in the fitness value. At that point, the values are increased gradually until they are decreased again by the local minima excitation state. This allows for an oscillation in the probabilities such that parameter values which promote the most growth are found over subsequent generations, thus allowing for faster convergence. These oscillations can be observed in the probability curves in Figures 20, 21, 23, 24, 26, and 27.

As is obvious from the above results, there was a marked improvement in the output of the Neural Optimized GA. This leads us to believe there is a pattern within a set of good subsequent parameter values such that an EA can avoid local minima and premature convergence while still increasing the overall convergence rate. This is but a small example of how EAs and Neural Networks can be used together to solve many problems. Further research may be required to find an optimal learning strategy, but even the simple one employed in this paper shows there is great potential in using Neural Optimized GAs.

Table 1: Neural Network Parameters for Dataset 1

  Parameter Name    Parameter Value
  δ_gen             2
  δ_min             5
  N_gen             2
  DP                .1
  PD                .4
  S_min             4

Table 2: Neural Network Parameters for Dataset 2

  Parameter Name    Parameter Value
  δ_gen             8
  δ_min             5
  N_gen             2
  DP                .8
  PD                .4
  S_min

Figure 10: Dataset 1 - Test 1 - Best Fitness Curve

Figure 11: Dataset 1 - Test 1 - Crossover Probability Curve

Figure 12: Dataset 1 - Test 1 - Mutation Probability Curve

Figure 13: Dataset 1 - Test 2

Figure 14: Dataset 1 - Test 2 - Crossover Probability Curve

Figure 15: Dataset 1 - Test 2 - Mutation Probability Curve

Figure 16: Dataset 1 - Test 3

Figure 17: Dataset 1 - Test 3 - Crossover Probability Curve

Figure 18: Dataset 1 - Test 3 - Mutation Probability Curve

Figure 19: Dataset 2 - Test 1

Figure 20: Dataset 2 - Test 1 - Crossover Probability Curve

Figure 21: Dataset 2 - Test 1 - Mutation Probability Curve

Figure 22: Dataset 2 - Test 2

Figure 23: Dataset 2 - Test 2 - Crossover Probability Curve

Figure 24: Dataset 2 - Test 2 - Mutation Probability Curve

Figure 25: Dataset 2 - Test 3

Figure 26: Dataset 2 - Test 3 - Crossover Probability Curve

Figure 27: Dataset 2 - Test 3 - Mutation Probability Curve

Bibliography

[BFM] T. Bäck, D. B. Fogel, and Z. Michalewicz. Evolutionary Computation 1. Institute of Physics Publishing, 2000.

[HK1] Fredric M. Ham and Ivica Kostanic. Principles of Neurocomputing for Science and Engineering. McGraw-Hill Companies, Inc., Baltimore, Maryland, U.S.A., 2001.


More information

Computational Intelligence

Computational Intelligence Computational Intelligence Module 6 Evolutionary Computation Ajith Abraham Ph.D. Q What is the most powerful problem solver in the Universe? ΑThe (human) brain that created the wheel, New York, wars and

More information

Genetic Programming. Charles Chilaka. Department of Computational Science Memorial University of Newfoundland

Genetic Programming. Charles Chilaka. Department of Computational Science Memorial University of Newfoundland Genetic Programming Charles Chilaka Department of Computational Science Memorial University of Newfoundland Class Project for Bio 4241 March 27, 2014 Charles Chilaka (MUN) Genetic algorithms and programming

More information

Outline. Motivation. Introduction of GAs. Genetic Algorithm 9/7/2017. Motivation Genetic algorithms An illustrative example Hypothesis space search

Outline. Motivation. Introduction of GAs. Genetic Algorithm 9/7/2017. Motivation Genetic algorithms An illustrative example Hypothesis space search Outline Genetic Algorithm Motivation Genetic algorithms An illustrative example Hypothesis space search Motivation Evolution is known to be a successful, robust method for adaptation within biological

More information

A Genetic Algorithm Framework

A Genetic Algorithm Framework Fast, good, cheap. Pick any two. The Project Triangle 3 A Genetic Algorithm Framework In this chapter, we develop a genetic algorithm based framework to address the problem of designing optimal networks

More information

Neural Networks CMSC475/675

Neural Networks CMSC475/675 Introduction to Neural Networks CMSC475/675 Chapter 1 Introduction Why ANN Introduction Some tasks can be done easily (effortlessly) by humans but are hard by conventional paradigms on Von Neumann machine

More information

March 19, Heuristics for Optimization. Outline. Problem formulation. Genetic algorithms

March 19, Heuristics for Optimization. Outline. Problem formulation. Genetic algorithms Olga Galinina olga.galinina@tut.fi ELT-53656 Network Analysis and Dimensioning II Department of Electronics and Communications Engineering Tampere University of Technology, Tampere, Finland March 19, 2014

More information

Experimental Study on Bound Handling Techniques for Multi-Objective Particle Swarm Optimization

Experimental Study on Bound Handling Techniques for Multi-Objective Particle Swarm Optimization Experimental Study on Bound Handling Techniques for Multi-Objective Particle Swarm Optimization adfa, p. 1, 2011. Springer-Verlag Berlin Heidelberg 2011 Devang Agarwal and Deepak Sharma Department of Mechanical

More information

HEURISTIC OPTIMIZATION USING COMPUTER SIMULATION: A STUDY OF STAFFING LEVELS IN A PHARMACEUTICAL MANUFACTURING LABORATORY

HEURISTIC OPTIMIZATION USING COMPUTER SIMULATION: A STUDY OF STAFFING LEVELS IN A PHARMACEUTICAL MANUFACTURING LABORATORY Proceedings of the 1998 Winter Simulation Conference D.J. Medeiros, E.F. Watson, J.S. Carson and M.S. Manivannan, eds. HEURISTIC OPTIMIZATION USING COMPUTER SIMULATION: A STUDY OF STAFFING LEVELS IN A

More information

Artificial neural networks are the paradigm of connectionist systems (connectionism vs. symbolism)

Artificial neural networks are the paradigm of connectionist systems (connectionism vs. symbolism) Artificial Neural Networks Analogy to biological neural systems, the most robust learning systems we know. Attempt to: Understand natural biological systems through computational modeling. Model intelligent

More information

A More Stable Approach To LISP Tree GP

A More Stable Approach To LISP Tree GP A More Stable Approach To LISP Tree GP Joseph Doliner August 15, 2008 Abstract In this paper we begin by familiarising ourselves with the basic concepts of Evolutionary Computing and how it can be used

More information

Genetic Algorithm for Finding Shortest Path in a Network

Genetic Algorithm for Finding Shortest Path in a Network Intern. J. Fuzzy Mathematical Archive Vol. 2, 2013, 43-48 ISSN: 2320 3242 (P), 2320 3250 (online) Published on 26 August 2013 www.researchmathsci.org International Journal of Genetic Algorithm for Finding

More information

11/14/2010 Intelligent Systems and Soft Computing 1

11/14/2010 Intelligent Systems and Soft Computing 1 Lecture 8 Artificial neural networks: Unsupervised learning Introduction Hebbian learning Generalised Hebbian learning algorithm Competitive learning Self-organising computational map: Kohonen network

More information

Simple Model Selection Cross Validation Regularization Neural Networks

Simple Model Selection Cross Validation Regularization Neural Networks Neural Nets: Many possible refs e.g., Mitchell Chapter 4 Simple Model Selection Cross Validation Regularization Neural Networks Machine Learning 10701/15781 Carlos Guestrin Carnegie Mellon University February

More information

Time Series prediction with Feed-Forward Neural Networks -A Beginners Guide and Tutorial for Neuroph. Laura E. Carter-Greaves

Time Series prediction with Feed-Forward Neural Networks -A Beginners Guide and Tutorial for Neuroph. Laura E. Carter-Greaves http://neuroph.sourceforge.net 1 Introduction Time Series prediction with Feed-Forward Neural Networks -A Beginners Guide and Tutorial for Neuroph Laura E. Carter-Greaves Neural networks have been applied

More information