Genetic Algorithms and the Evolution of Neural Networks for Language Processing


Jaime J. Dávila
Hampshire College, School of Cognitive Science, Amherst, MA

Abstract

One approach used by researchers trying to develop computer systems capable of understanding natural languages is to train a neural network (NN) for the task. For this type of approach, one of the key questions is how best to configure NN parameters such as topology, learning rates, and training data. How to choose values for these parameters is still an open question, especially since the effects these variables have on each other are not completely understood. Genetic algorithms (GAs) are particularly well suited for finding optimal combinations of parameters, since they make no assumptions about the problem being solved. Different NN configurations are coded as genomes, whose fitness is based on how well they solve a particular task. Genomes are paired and recombined in the hope that the offspring of good solutions will be even better. In this paper I present ways in which I have used GAs to find which NN parameter values produce better performance for a particular natural language task. In addition, the system has been modified and studied in order to evaluate how coding methods in both the GA and the NN affect performance. For GA coding, an evaluation method based on schema theory is presented. This methodology can help determine optimal balances between evolutionary operators such as crossover and mutation, based on the effect they have on the structures being processed by the GA. For NN coding, the effect of different ways of representing words and sentences at the output layer is examined with both binary and floating point schemes.

Keywords: Neural Networks, Genetic Algorithms, Natural Language Processing.
1 Introduction

One approach used by researchers trying to develop computer systems capable of understanding natural languages is to train a neural network (NN) for the task. For this type of approach, one of the key questions is how best to configure NN parameters such as topology, learning rates, and training data. How to choose values for these parameters is still an open question, especially since the effects these variables have on each other are not completely understood. Genetic algorithms (GAs) are particularly well suited for finding optimal combinations of parameters, since they make no assumptions about the problem being solved, and find solutions by combining exploitation of known good solutions with exploration of new ones. GAs are modeled after the process of natural selection. We start with a randomly generated population of elements, each of which represents a possible solution to the problem being solved. Each of these elements, called a genome, is formed by a string of values (called genes) that code the free variables in the experiment. The fitness of each genome is determined by how well the solution it codes solves the problem at hand. From the point of view of a genetic algorithm trying to optimize NN configurations, particular values for these variables constitute particular schemata. The rest of this paper is organized as follows: section 2 presents some of the ways in which other researchers have used GAs to optimize NNs. Section 3 presents the natural language task I

have been working with. Section 4 presents a method of coding NN topologies at the layer level of abstraction. In section 5 I present a mathematical model for evaluating a GA coding scheme's ability to efficiently process the genetic schemata in its population. Section 6 uses this model to make predictions for three different ways of coding NNs for the natural language task of section 3. Section 7 analyzes results for these three coding schemes. Section 8 deals with different ways of representing sentences at the NN output layer. Section 9 then discusses the empirical results of using this different sentence coding scheme. Finally, section 10 describes possible future research and section 11 presents conclusions.

2 GA for NN Optimization

In recent years researchers have used genetic algorithm techniques to evolve neural network topologies. Although these researchers have had the same aim in mind (namely, the evolution of topologies that are better able to solve a particular problem), the approaches they used varied greatly. For example, de Garis (1996) evolved NNs by having a series of growth commands give instructions on how to grow connections among nodes. Each node in the network processed signals that told it how to extend its synapses. When two different synapses reached each other, a new node was formed. The genetic algorithm was responsible for evolving the sequence of growth commands that controlled how the network developed. Fullmer and Miikkulainen (1991) developed a GA coding system where pieces of a genotype went unused, imitating biological DNA processing. Only information stored between a Start marker and an End marker was used to generate networks. The number of correctly configured Start-End markers defined how many hidden nodes the network would have. In addition, information between these Start-End markers defined how the nodes were connected to each other.
The meaning conveyed by each position in the used part of the genome depended on its distance from its corresponding Start symbol. For example, the genome shown in figure 1 would generate two nodes, one for string S,a,1,b,5,a,-2,E and another for string S,b,0,a,3,E, which wraps around the end of the genome. Node a has an initial activation of 1 (because of substring S,a,1), is connected to node b with a weight of 5 (because of substring b,5), and to itself with a weight of -2 (because of substring a,-2). Node b has an initial activation of 0 (because of substring S,b,0) and a connection to node a with a weight of 3 (because of substring a,3). Figure 1: example of the coding used by Fullmer and Miikkulainen (1991). The network evolved by this process was used to control a virtual creature's movements in a square field, avoiding bad objects and coming into contact with good objects. The GA continued to run until a network that could solve the problem evolved. The number of generations needed until this network was found varied between 7 and 304 for objects that could be identified before hitting them, and between 15 and 414 generations when recognizing the object required traveling around it looking for a characteristic view.
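The marker-based decoding described above can be sketched as follows. This is a minimal illustration: the list-of-tokens genome format and the helper name decode_marker_genome are my assumptions, not Fullmer and Miikkulainen's exact representation.

```python
def decode_marker_genome(genome):
    """Decode a marker-based genome: each segment between an 'S' marker and
    the next 'E' marker defines one node.  Segments may wrap around the end
    of the (circular) genome.  Assumed segment format:
        S, node, initial_activation, (target, weight)*, E
    """
    n = len(genome)
    nodes = {}
    for start in range(n):
        if genome[start] != 'S':
            continue
        # collect symbols until the matching 'E', wrapping circularly
        segment = []
        i = (start + 1) % n
        while genome[i] != 'E':
            segment.append(genome[i])
            i = (i + 1) % n
            if i == start:          # unmatched marker: ignore this segment
                segment = None
                break
        if not segment or len(segment) < 2:
            continue
        name, activation = segment[0], segment[1]
        weights = {segment[j]: segment[j + 1]
                   for j in range(2, len(segment) - 1, 2)}
        nodes[name] = {'activation': activation, 'weights': weights}
    return nodes

# The figure 1 example: node b's segment wraps around the end of the genome.
genome = ['a', 3, 'E', 'S', 'a', 1, 'b', 5, 'a', -2, 'E', 'S', 'b', 0]
net = decode_marker_genome(genome)
```

Running this recovers exactly the two nodes described in the text: node a with activation 1 and weights to b (5) and itself (-2), and node b with activation 0 and a weight of 3 to a.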

Kitano (1994) used a GA to evolve a sequence of graph generation rules, as opposed to directly coding network topology. Each genome defined a sequence of rules used to rewrite an element of the graph. When these rules were applied until only terminal symbols remained, the graph defined a connectivity matrix which was then used to configure a NN. For example, if we were developing a network with two nodes, a genome might code rules [S -> AB][A -> 01][B -> 10]. When these three rules are applied we end up with a 2x2 matrix that defines the connectivity between the two nodes in the network. To be sure, the above examples do not represent a complete list of researchers who have used GAs to optimize NNs. A more complete review can be found, for example, in Yao (1993).

3 A Natural Language Processing Task for a NN

This task was originally presented in Dávila (1999). As an overview, a network receives a sentence one word at a time, and incrementally builds a description of the sentence in its output nodes. For example, if the sentence "the boy ran in the park" is entered, the network should respond by indicating that "the boy" is a noun phrase, and that it acts as the agent of the verb "ran". The network should also indicate that "in the park" is a prepositional phrase modifying the verb "ran". Entering a word into the network amounts to activating a single node that represents the given word at the input layer, and at the same time activating those semantic nodes that reflect the meaning of the word being entered. For example, to enter the word "boy", a node that represents that word is activated, as well as nodes that indicate that the word being entered is a noun that is singular, concrete, and human. In addition, an ID node is set to a value that allows the network to distinguish "boy" from other words that might have the same semantic identity, such as "girl". An example of such an activation is shown in figure 2.
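Kitano's grammar-based decoding, heavily simplified, can be sketched as plain string rewriting (his actual scheme expands each symbol into a 2x2 block of symbols; the expand helper below is a hypothetical reduction of the paper's small example):

```python
def expand(rules, symbol='S'):
    """Repeatedly rewrite non-terminal symbols until only terminals
    ('0'/'1') remain, then return the resulting string."""
    s = symbol
    while any(c in rules for c in s):
        s = ''.join(rules.get(c, c) for c in s)
    return s

# The example rules [S -> AB][A -> 01][B -> 10] from the text:
rules = {'S': 'AB', 'A': '01', 'B': '10'}
bits = expand(rules)
n = 2
# reshape the terminal string, row-major, into an n x n connectivity matrix
matrix = [[int(bits[r * n + c]) for c in range(n)] for r in range(n)]
```

Applying the three rules yields the string 0110 and hence the 2x2 matrix [[0, 1], [1, 0]]: each node connects to the other but not to itself.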
Figure 2: Input layer presentation of "boy" (not all input nodes are shown). The language used in this research is composed of ten nouns: boy, girl, john, mary, horse, duck, car, boat, park, river. Available semantic nodes for nouns are: human, animal, or mechanical (three mutually exclusive nodes); animate or inanimate (represented by one node, active if the noun is animate); proper (active if true, inactive otherwise); and one ID node. Verbs are entered in a similar way; the node representing the verb is activated, simultaneously with semantic nodes that convey its meaning. Semantic nodes available for verbs are: present or past tense (two mutually exclusive nodes), auxiliary verb, movement verb, sound-producing verb, sound-receiving verb, visual-receiving verb, and a verb ID node used to distinguish verbs that would be identical otherwise (for example, "ran" and "swam" would be identical without this last node). In total, there are twelve main verbs (saw, swam, swimming, ran, runs, running, is, was, raced, floated, said, heard) and two auxiliary verbs (is, was). For example, figure 3 shows how the verb "runs" is entered into the network; the individual node for "runs" is activated, as well as semantic nodes representing a verb of movement and a present-tense verb. In addition to nouns and verbs, the language has three adjectives (fast, blue, red), one article (the), one adverb (fast), and three prepositions (with, in, after). Each of these is entered into the network by activating an individual node for the word, plus an additional node that indicates which type of word (adjective, article, adverb, or preposition) is being entered. After each word is entered, the NN is expected to produce a representation of what it understands about the sentence up to that point. For example, after the network sees "the boy" (entered

Figure 3: Input layer presentation of "runs" (not all input nodes are shown). one word at a time; first "the" and then "boy") it should indicate that it has detected a noun phrase that uses an article, and that the noun of this phrase is "boy". "Boy" in this case is represented by activating output nodes that code human, animate, and concrete, plus an additional ID node that distinguishes "boy" from other words that otherwise would have identical semantics (such as "girl"). An example of such an activation is shown in figure 4. Figure 4: output layer presentation of "the boy" (not all output nodes are shown). If the network, immediately after having been shown "the boy" at the input layer, is shown "runs", it should respond by indicating that it has detected a verb phrase with "runs" as its verb. Indicating that the verb of this verb phrase is "runs" is done by activating semantic nodes for verb of movement and present tense. In addition, an ID node is activated so that "runs" can be differentiated from other verbs that would otherwise have identical semantics. At this point the network should also indicate that the first noun phrase detected is the agent of the first verb phrase detected. This is done by activating an np1-agent-of-vp1 node in the output layer. In the manner described above, then, the network should continue to indicate its understanding of the sentence being entered until an end-of-sentence marker is seen. The fitness of a network is determined by computing the sum of squared errors for all output nodes during the processing of all sentences in the language.

4 Genetic Definition of NN Topology

Each NN in this system has 75 hidden nodes between the input and output layers. These 75 nodes are divided into N hidden layers, where N is a number between 1 and 30. The exact number of hidden layers is determined by the first gene of the corresponding genome. This position stores a random floating point number, with a value between 0 and 1.
To determine how many hidden layers a network has, the value of this gene is multiplied by 30 and rounded up to the next integer. If the result of this rounding is 31, the network uses 30 hidden layers. The number of hidden nodes in each of these hidden layers is also determined by the network's corresponding genome. The genome has 30 genes used to code the relative worth of each of the possible hidden layers. Once the number of hidden layers is determined to be N using the process described above, the N layers with the highest relative worth are identified. The 75 available

hidden nodes are distributed among these N hidden layers according to each layer's worth relative to the sum of all N worth values. Figure 5: sample of genes 1-30. For example, if a genome had genes 1-30 as illustrated in figure 5, and it had already been determined that it would have five hidden layers (as described above), the five layers to use are those indicated in bold. Since the sum of these five genes is 4.6, the first hidden layer would have (75 * 0.91 / 4.6 =) 14 nodes. The other four hidden layers would be allocated hidden nodes in the same way. The connections between layers are also determined by the network's genome. For each of the thirty possible layers, there is a gene that indicates where the layer takes its input from. Each of these genes stores a random floating point value between 0 and 1. To determine where each hidden layer takes its input from, its takes-input-from gene value is multiplied by N+2 (where N is the number of hidden layers this network will have, as determined by the procedure outlined previously), and rounded to the nearest integer. The resulting number points to the layer this one takes its input from. We multiply by N+2 to allow a hidden layer to take its input from any of the N hidden layers, as well as from either the input or the output layer. A value of 1 would mean the layer takes its input from the input layer. A value of N+2 would mean the layer takes its input from the output layer. For a value n between 2 and N+1, the layer takes its input from the hidden layer with the (n-1)th highest relative worth. Figure 6: sample of genes 31-60. For example, if the same genome used for the example above had genes as illustrated in figure 6, we would look at the corresponding 5 takes-input-from genes, shown in bold in figure 6.
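The first two decoding steps above (hidden-layer count from gene 0, then worth-proportional node allocation) can be sketched as follows. The gene values are hypothetical, and reading "rounded up to the next integer" as floor-plus-one is my assumption, chosen because it is the reading consistent with the cap-at-31 rule:

```python
import math

TOTAL_HIDDEN = 75
MAX_LAYERS = 30

def num_hidden_layers(gene0):
    """Gene 0 in [0, 1], scaled by 30 and rounded up; a result of 31
    (possible only at the upper boundary) is capped at 30."""
    n = math.floor(gene0 * MAX_LAYERS) + 1
    return min(n, MAX_LAYERS)

def allocate_nodes(worth_genes, n_layers):
    """Pick the n_layers genes with the highest relative worth and share
    the 75 hidden nodes among them in proportion to their worth."""
    chosen = sorted(range(len(worth_genes)),
                    key=lambda i: worth_genes[i], reverse=True)[:n_layers]
    total = sum(worth_genes[i] for i in chosen)
    return {i: int(TOTAL_HIDDEN * worth_genes[i] / total) for i in chosen}

# Hypothetical worth genes whose five highest values sum to 4.6, as in
# the figure 5 example; the gene with worth 0.91 gets 14 nodes.
worth = [0.91, 0.2, 1.2, 0.1, 0.85, 0.94, 0.7]
sizes = allocate_nodes(worth, 5)
```

Truncating each share means a few of the 75 nodes may go unassigned; how the original system handles that remainder is not specified in the text.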
Multiplying each of the selected genes by 6, we would obtain 4.14, 1.44, .06, and 2.22. This would mean that hidden layer 1 takes its input from hidden layer 4, hidden layer 2 takes its input from hidden layer 1, hidden layer 3 takes its input from the input layer, and hidden layer 4 takes its input from hidden layer 2. Where each layer sends its output is determined in a similar way, using positions 61-90 of the genotype. Each of these genes stores a random floating point value between 0 and 1. To determine where each layer sends its output, its sends-output-to gene value is multiplied by N+1 and rounded to the nearest integer. The resulting number points to the layer this one will send its output to. We multiply by N+1 to allow hidden layers to send their output to any of the N hidden layers, as well as to the output layer. A value of N+1 would mean the layer sends its output to the output layer. For a value n between 1 and N, the layer sends its output to the hidden layer with the nth highest relative worth. No layer sends its output back to the input layer. Results for the experiment described above are presented in table 1. The four topologies evolved by the GA system described above outperform commonly used NN topologies such as

Topology          Average    Best    Worst
Type I
Type II
Type III
Type IV
SRN
fully connected
FGS
N-P

Table 1: Percent of language correctly processed after training with 20% of the complete language, for both evolved and commonly used topologies.

Simple Recurrent Networks (SRN), Frasconi-Gori-Soda networks, and Narendra-Parthasarathy networks. Although some of these commonly used topologies managed to outperform some evolved topologies in the best of cases, on average the evolved topologies performed better by more than 10%. In addition, previously used topologies demonstrate a higher sensitivity to initial conditions. The worst performance for previously used topologies is more than 45% lower than the worst performance for evolved topologies. Details about the characteristics of the evolved networks can be found in Dávila (1999).

5 Schema Disruption Computation

If we view the evolution of NNs as a process whose main goal is defining connections between pairs of nodes, then we can determine a coding scheme's ability to combine building blocks by estimating how likely it is for evolutionary operations to disrupt connection definitions; the less likely it is for connection definitions to be disturbed, the easier it is for the algorithm to combine building blocks present in the current population. An operation like crossover can disrupt a connection definition every time a crossover point is selected between two genes that, taken together, define a connection between nodes of a network. Therefore, how likely it is for crossover to cause this disruption can be estimated by the distance between genes that combine to define any particular connection. If a particular connection is defined by alleles in genes g_i and g_j, then the bigger the distance between g_i and g_j, the bigger the chance that the connection will be disrupted by a crossover operation.
Taking a sum of the distances between genes that can define a connection, we obtain a total disruption index (TDI) of

TDI = \sum_{k=0}^{C} \sum_{i=0}^{N} \sum_{j=0}^{N} |i - j| \, DC(k,i) \, DC(k,j)    (1)

where N is the number of genes, C is the number of possible connections, and DC(k,x) is a number between 0 and 1 indicating the probability that gene x is involved in defining connection k. Notice that this number reflects a global probability of disruption for the complete network, as opposed to that of any particular connection. This is different from what was originally presented in Dávila (2000), and is motivated by the fact that the more connections a network has, the more likely it is to suffer disruptions.

6 Making Predictions with the TDI Computation

Under the GA coding method presented in section 4, which from now on I will call SYSTEM-A, the existence of a connection k between nodes i and j depends on the number of layers that the hidden nodes are divided into (gene 0), which layers contain nodes i and j (genes 1-30), where the layer with

node j takes its input from (a gene from among genes 31-60, depending on the value of j, gene 0, and the nodes/layer distribution determined by genes 1-30), and where the layer with node i sends its output to (a gene from among genes 61-90, again depending on the value of i, gene 0, and the nodes/layer distribution determined by genes 1-30). As a comparison, the same type of network topology could be evolved using the following coding scheme, which I will call SYSTEM-B. Each network still has 75 hidden nodes, but they are always divided into 30 hidden layers (that is, there is no gene used to determine how many hidden layers to use). The distribution of these 75 nodes into 30 layers is done based on 30 relative worth values in the genome. To determine how many nodes each layer has, the relative worth for a specific layer is divided by the sum of the relative worths of all 30 layers, and then multiplied by 75. The connections between layers are also determined by the network's genome. For each of the thirty layers, there is a gene that indicates where the layer takes its input from. Each of these genes stores a random floating point value between 0 and 1. To determine where each hidden layer takes its input from, its takes-input-from gene value is multiplied by 32 and rounded to the nearest integer. The resulting number points to the layer this one takes its input from. A value of 1 would mean the layer takes its input from the input layer. A value of 32 would mean the layer takes its input from the output layer. For a value n between 2 and 31, the layer takes its input from the (n-1)th hidden layer. If we assume that DC(k,x) = 1 for all values of k and x, then TDI(SYSTEM-A) = TDI(SYSTEM-B). In reality, though, DC(k,x) does not always return 1, and in fact it tends to return smaller values under SYSTEM-B than under SYSTEM-A.
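Equation (1) can be written directly as code. This is a sketch: the inclusive summation bounds follow the equation as reconstructed here, and the uniform-DC example at the end is an assumption made purely for illustration.

```python
def tdi(n_connections, n_genes, dc):
    """Total disruption index of equation (1): for each connection k,
    accumulate the gene distance |i - j| weighted by the probability
    that genes i and j are both involved in defining connection k."""
    return sum(abs(i - j) * dc(k, i) * dc(k, j)
               for k in range(n_connections + 1)
               for i in range(n_genes + 1)
               for j in range(n_genes + 1))

# With DC(k, x) = 1 everywhere, every gene pair contributes its full
# distance; for C = 1 and N = 2 this gives 2 * (0+1+2+1+0+1+2+1+0) = 16.
uniform = tdi(1, 2, lambda k, x: 1.0)
```

Under this uniform-DC assumption the index depends only on C and N, which is exactly why TDI(SYSTEM-A) = TDI(SYSTEM-B) in that case: the coding scheme only matters once DC varies.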
Notice, for example, that under SYSTEM-A the set of gene sequences that would allow node n to be in layer L has a higher cardinality than under SYSTEM-B, given that SYSTEM-B always has 30 hidden layers, while under SYSTEM-A the number of hidden layers is determined by a gene. This will, in turn, affect which genes are involved in defining a connection between two given nodes, which affects DC(k,x). Under standard random distributions, considering actual values for DC(k,x) gives TDI(SYSTEM-A) > TDI(SYSTEM-B). What this means is that it is easier for a particularly good schema to be disrupted by crossover under SYSTEM-A than under SYSTEM-B. Aside from the effect of DC(k,x), of course, the actual positioning of the genes and how they map into phenotype characteristics also has an effect on the disruption caused by crossover operations. Take, for example, a system with the same types of genes as SYSTEM-B, but where the positions of the genes have been altered. Instead of having 30 worth values followed by 30 takes-input-from values and then 30 sends-output-to values, SYSTEM-C arranges genes so that the worth, takes-input-from, and sends-output-to genes for any one particular layer are in three consecutive positions (positions 3*L, 3*L+1, and 3*L+2, where L is the hidden layer number). Because a gene with a particular functionality will have the same effect on the resulting phenotype regardless of its position in the genome, we can discard the terms DC(k,i) and DC(k,j) in equation (1), obtaining

TDI'(.) = \sum_{k=0}^{C} \sum_{i=0}^{N} \sum_{j=0}^{N} |i - j|    (2)

Under this definition, TDI'(SYSTEM-B) > TDI'(SYSTEM-C), since SYSTEM-C minimizes the distance between the relative-worth and takes-input-from genes. This means that crossover is less likely to disrupt a useful schema under SYSTEM-C.
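The intuition behind TDI'(SYSTEM-B) > TDI'(SYSTEM-C) can be illustrated by measuring how far apart the three genes belonging to one layer sit under each layout. This is a sketch with hypothetical helper names; the gene positions follow the layouts described in the text.

```python
N_LAYERS = 30

def gene_positions_system_b(layer):
    """SYSTEM-B layout: 30 worth genes, then 30 takes-input-from genes,
    then 30 sends-output-to genes."""
    return (layer, N_LAYERS + layer, 2 * N_LAYERS + layer)

def gene_positions_system_c(layer):
    """SYSTEM-C layout: the three genes of each layer are consecutive,
    at positions 3*L, 3*L + 1, and 3*L + 2."""
    return (3 * layer, 3 * layer + 1, 3 * layer + 2)

def intra_layer_distance(positions):
    """Sum of pairwise distances between one layer's genes -- the kind of
    quantity equation (2) accumulates for genes that work together."""
    w, t, s = positions
    return abs(w - t) + abs(w - s) + abs(t - s)

dist_b = intra_layer_distance(gene_positions_system_b(0))   # 30 + 60 + 30
dist_c = intra_layer_distance(gene_positions_system_c(0))   # 1 + 2 + 1
```

For layer 0 the distances are 120 under SYSTEM-B versus 4 under SYSTEM-C, so far fewer crossover points can fall between a layer's cooperating genes under SYSTEM-C.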
7 Empirical Results

To verify the effect these disruptions might have on solutions found by evolutionary computation, I optimized topologies for the natural language problem outlined previously under the three genome coding schemes discussed above. In order to better measure the effect of crossover, computations were performed with no mutation and a population size of 21 elements. Network fitness values were computed by taking the sum of squared errors for all output nodes throughout the presentation of a language of 413 sentences. Training is performed on 20% of this same language. In order to verify consistent performance, all evolved networks were validated by

performing 144 bootstrap validation runs per network (Weiss and Kulikowski (1991)). Evolutionary runs were repeated 48 times, and the graphs presented here, although taken from particular runs, are typical of results throughout all runs. Figures 7 and 8 show the fitness for the best, average, and worst individuals in a run of 40 generations. Figure 7 plots the values for SYSTEM-A, while figure 8 shows the values for SYSTEM-C. It quickly becomes evident that the average and even the worst elements evolve towards the best solution faster under SYSTEM-C. This is in accordance with the prediction made by the TDI computations for these two coding schemes; the traits that make the best individual so good can be transferred to other members of the population more efficiently under SYSTEM-C. Figure 7: fitness values for SYSTEM-A. Figure 8: fitness values for SYSTEM-C. In order to verify that it is in fact the ability to efficiently process GA schemata that causes this performance difference, runs with SYSTEM-C were repeated with higher mutation values. A sample run of this type of experiment is plotted in figure 9. Notice how it now takes longer for the GA to process and disperse good schemata through the population. It is important to notice that what is being computed, predicted, and verified here is simply the ability of a GA system to process the schemata that already exist in a population. These

Figure 9: fitness values for SYSTEM-C under higher mutation values. computations say nothing directly about the ability of the system to discover new schemata. In fact, random mutation could in some cases generate an individual that is better than anything currently in the population, or anything that could be evolved from the population by schema recombination. An example of this is shown in figure 10. Notice, however, that although the best elements in this population are better than the ones obtained in previous experiments, the system is still slower to process the schemata. This is shown by the bigger difference between better and worse elements within this particular run. Figure 10: fitness values for SYSTEM-B under higher mutation values.

8 Analog vs. Digital Sentence Representation

The method for sentence representation at the output layer used in these experiments has a number of limitations and problems. For example, a high number of output nodes are required to remain inactive (output of zero) for the output pattern to be considered correct. Because of this, both the evolutionary process and the (backpropagation) learning process find solutions where many nodes are simply disconnected from the rest of the network, ensuring that they never activate. Although this leads to errors for some of the patterns, these errors are small when considered within the

context of the complete language used. Another issue introduced by this output coding scheme is the difficulty of expanding it to include more than the three noun phrases and two verb phrases used here. The number of nodes, and therefore connections, generated by such an expansion would make the system harder to train and slower to respond to inputs. In order to avoid this type of problem, I have modified the way in which sentences are represented at the output layer. Instead of having binary (and mutually exclusive) nodes representing noun properties such as human, animal, mechanical, and inanimate, these four nodes are substituted with a single type-of-noun node, with activations ranging between 0 and 1. Something similar can be done to describe verbs, substituting the nodes for verb-of-movement, verb-of-sound-producing, verb-of-sound-receiving, etc., with a single type-of-verb node. Another similar change can be made to the NP1-MODIFIES-NP2, NP1-MODIFIES-NP3, NP1-MODIFIES-VP1, and NP1-MODIFIES-VP2 nodes; they can be substituted with a single node that indicates which phrase this one is modifying. By doing this we drastically reduce both the number of nodes that can trivially keep their output at zero and the number of nodes that need to be added to the output layer when a new phrase is added to the sentence representation. In particular, the number of nodes needed to represent a noun phrase is reduced from 3*NVP + NNP + 12 (where NVP is the number of verb phrases and NNP is the number of noun phrases) to a constant 8. The number of nodes needed to represent a verb phrase is reduced from NNP + 6 to a constant 6.

9 Empirical Results for Analog Representation

After 40 generations, typical runs of the experiments described in section 8 converge to NNs with low error rates. This compares very favorably with results for digital systems, where the variability was somewhat higher.
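The node-count reduction claimed in section 8 is simple arithmetic; the sketch below uses helper names of my own, but the formulas come from the text.

```python
def binary_np_nodes(n_np, n_vp):
    """Output nodes needed per noun phrase under the binary coding:
    3*NVP + NNP + 12, as given in the text."""
    return 3 * n_vp + n_np + 12

ANALOG_NP_NODES = 8   # constant under the analog coding

# With the three noun phrases and two verb phrases used in the paper:
before = binary_np_nodes(n_np=3, n_vp=2)   # 3*2 + 3 + 12 = 21
after = ANALOG_NP_NODES                    # 8
```

The gap widens as the representation grows: at NNP = 5 and NVP = 4 the binary coding needs 29 nodes per noun phrase, while the analog coding stays at 8.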
In fact, the only digital system capable of ever producing better results (Type II in table 1) has the biggest performance variability, and sometimes performs close to 10 percentage points below the lowest analog performer. A sample plot of fitness values for analog systems is presented in figure 11. Figure 11: fitness values for analog SYSTEM-C, low mutation values. In terms of GA schema disruption, figure 11 shows this coding taking ample advantage of the better elements in the population. Comparing figure 11 with figure 8, we note that the system moves towards good fitness values in the presence of a superior element much more quickly than the digital version of SYSTEM-C. Equations (1) and (2) predict this, based on the smaller total number of connections C (since there are fewer output nodes).

10 Future Work

In order to better understand the effect of different TDI values during the evolution of NNs, analysis of a series of different coding schemes should be carried out. For example, the systems mentioned in section 2 could be examined, as well as others found in the GA/NN literature. By observing how well different coding schemes appear able to process good GA schemata, a better understanding of TDI values can be obtained. In addition, the disruptive effect of different TDI values should be compared with that of several ranges of mutation rates. By doing so, TDI values can be better understood, allowing this methodology to better assist in the design of coding schemes and in balancing different types of evolutionary operations. Another aspect of TDI computation to be explored relates to its effect when a population contains elements with drastically different fitness values. For example, figure 10 shows fitness values for a run where there is a considerable initial difference between the best and worst elements. Although the graph shows that fitness values are starting to converge towards the 40th generation, the system seems to process the schemata of the best elements slowly. It would be important to determine whether this is caused by the magnitude of the difference, or by other factors (such as evolutionary pressure, size of the population, etc.). Finally, several questions relating to the analog coding system presented in section 8 should also be explored further. In particular, the effect of using the same number of nodes, but with different coding schemes for nouns and verbs, should be investigated. Furthermore, the possibility of having a GA evolve these codings should be explored.

11 Conclusion

This paper has presented a mathematical methodology for predicting the effectiveness of a GA in processing NN topology. The methodology is used to predict the effectiveness of several GA coding schemes.
These predictions are shown to correspond with actual results. In addition, different ways of representing sentences at a NN output layer are presented. Advantages of using non-binary representations are discussed, both from the point of view of expanding the set of sentences the network can process and of the efficiency of the GA used to evolve it.

References

de Garis, H. (1996). CAM-BRAIN: The Evolutionary Engineering of a Billion Neuron Artificial Brain by 2001 Which Grows/Evolves at Electronic Speeds Inside a Cellular Automata Machine (CAM). In Lecture Notes in Computer Science: Towards Evolvable Hardware, Vol. 1062. Springer-Verlag.

Dávila, J. (1999). Exploring the Relationship Between Neural Network Topology and Optimal Training Set by Means of Genetic Algorithms. In International Conference on Artificial Neural Networks and Genetic Algorithms. Springer-Verlag.

Dávila, J. (2000). A New Metric for Evaluating Genetic Optimization of Neural Networks. In Proceedings of the First IEEE Symposium on Combinations of Evolutionary Computation and Neural Networks. IEEE.

Fullmer, B., Miikkulainen, R. (1991). Using Marker-Based Genetic Encoding of Neural Networks to Evolve Finite-State Behavior. In Proceedings of the First European Conference on Artificial Life.

Kitano, H. (1994). Designing Neural Networks Using Genetic Algorithms with Graph Generation System. Complex Systems, 4.

Weiss, S., Kulikowski, C. (1991). Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann.

12 Yao, X. (1993). A review of evolutionary artificial neural networks. In International Journal of Intelligent Systems, 4. pages


More information

Using Genetic Algorithms to optimize ACS-TSP

Using Genetic Algorithms to optimize ACS-TSP Using Genetic Algorithms to optimize ACS-TSP Marcin L. Pilat and Tony White School of Computer Science, Carleton University, 1125 Colonel By Drive, Ottawa, ON, K1S 5B6, Canada {mpilat,arpwhite}@scs.carleton.ca

More information

Variations on Genetic Cellular Automata

Variations on Genetic Cellular Automata Variations on Genetic Cellular Automata Alice Durand David Olson Physics Department amdurand@ucdavis.edu daolson@ucdavis.edu Abstract: We investigated the properties of cellular automata with three or

More information

Introduction to Genetic Algorithms

Introduction to Genetic Algorithms Advanced Topics in Image Analysis and Machine Learning Introduction to Genetic Algorithms Week 3 Faculty of Information Science and Engineering Ritsumeikan University Today s class outline Genetic Algorithms

More information

A Genetic Algorithm Approach to the Group Technology Problem

A Genetic Algorithm Approach to the Group Technology Problem IMECS 008, 9- March, 008, Hong Kong A Genetic Algorithm Approach to the Group Technology Problem Hatim H. Sharif, Khaled S. El-Kilany, and Mostafa A. Helaly Abstract In recent years, the process of cellular

More information

Evolutionary origins of modularity

Evolutionary origins of modularity Evolutionary origins of modularity Jeff Clune, Jean-Baptiste Mouret and Hod Lipson Proceedings of the Royal Society B 2013 Presented by Raghav Partha Evolvability Evolvability capacity to rapidly adapt

More information

A More Stable Approach To LISP Tree GP

A More Stable Approach To LISP Tree GP A More Stable Approach To LISP Tree GP Joseph Doliner August 15, 2008 Abstract In this paper we begin by familiarising ourselves with the basic concepts of Evolutionary Computing and how it can be used

More information