Evolving caching algorithms in C by genetic programming
Norman Paterson
Computer Science Division, University of St Andrews, ST ANDREWS, Fife, KY16 9SS, Scotland
norman@dcs.st-and.ac.uk

Mike Livesey
Computer Science Division, University of St Andrews, ST ANDREWS, Fife, KY16 9SS, Scotland
mjl@dcs.st-and.ac.uk

ABSTRACT

This paper outlines current work in ontogenic-mapped and language-abstracted genetic programming. Developments in the genetic algorithm for deriving software (GADS) technique are described, and tested in a series of experiments to generate caching algorithms in C. GADS quickly finds over-fitted solutions which perform better than designed solutions, but only in one niche. The need for a scalable approach in GADS to deal with language definitions involving more productions is demonstrated.

1 Introduction

Genetic algorithms (GAs) were introduced in Holland (1975). Genetic programming (GP) was introduced in Koza (1993). GP is the application of the GA technique to the problem of producing computer programs. Koza (1993) does this by extending GA techniques to deal with genotypes of tree type, LISP trees in particular. Michalewicz (1994) describes how GA has been extended to genotypes of various other data types. The original GA data type is best described as a tuple. A current line of research is to abstract the language in which the phenotype is written from the GP process. Keller & Banzhaf (1996) describe a many-to-one mapping from the chromosome to the terminal symbols of the phenotype language. Hörner (1996) describes a GP system in which the genotype is a derivation tree rather than a parse tree. Paterson & Livesey (1996) describe GADS, an implementation of GP that uses tuple genotypes. These approaches are described and compared in more detail in 2. The main body of this paper reports recent developments of the GADS technique. The target language is changed from first-order Lisp to a subset of C. Evaluating fitness involves compiling and running the phenotype.
A more general repair mechanism is introduced, where a default terminal string is specified for each nonterminal as part of the BNF definition of the phenotype language. GA parameters are refined: uniform crossover is replaced by single-point crossover. This results in a marked improvement from generation to generation that was lacking in previous work. A series of experiments on caching algorithms is described. Caching was chosen because it is small, has no closed solution, and an improved solution would be of value outside the GP community. The work is carried out with the machine acting as a servant under human direction.

2 Related work

GADS differs from current mainstream GP in two main ways: it distinguishes between the genotype and phenotype, and the phenotype language is abstracted so that phenotypes can be generated in any language of choice. This section describes two other systems with these properties and compares them with GADS.

2.1 Linear genomes (LG)

Keller & Banzhaf (1996) introduce a genotype-phenotype mapping (called GPM or genetic code) based on a theory of molecular evolution. The chromosome is a tuple of codons. Every codon is represented by the same number of bits. The difference between a codon and a gene is that there is a many-to-one mapping from codons to the terminal symbols of the phenotype language, and a notion of distance or similarity between codons is defined. That is, distance is based on the codon value, not on its position in the chromosome. For example, the genotype might map to the sequence ab*. This is now scanned for possible repair, according to the rules of the phenotype language. The symbol a is valid but b must not immediately follow a. The valid symbols to follow a are * and +. The repair is effected on the basis of the codon values.
The invalid codon is 001, and the "nearest" valid replacement is 011, so the sequence is repaired to a**. Scanning and repair continue in this way, leading to the fully repaired sequence a*b. This is then subject to wrapping and fitness evaluation. The codons in the initial population are randomly generated with a uniform distribution. Two mutation operators that modify bits within a codon are defined. It is claimed that LG is closer to natural mutation, since only a single codon is modified, rather than the entire syntactic units that are changed in Koza (1993). However, limiting mutation to a single codon is only half the story. The full effect of a change depends also on the genotype-phenotype mapping and the grammar of the phenotype language. The effect of changing one codon is therefore not restricted to a correspondingly small change in the phenotype.
2.2 Genetic Programming Kernel (GPK)

Hörner (1996) introduces the Vienna University of Economics Genetic Programming Kernel (GPK). The chromosomes of GPK are derivation trees of a specific grammar. All the trees in the population are complete, so no repair of incomplete or invalid phenotypes is necessary. However, ensuring this is not trivial. Initialisation is relatively complicated. Generating derivation trees by repeatedly replacing nonterminals by one of their possible derivations chosen at random leads (a) to a preponderance of short derivations, and (b) to many derivation trees which are incomplete when the size limit is reached. Considerable effort is needed to overcome these problems. Whether this is effort well spent, considering that only generation 0 is involved, is not discussed. The genetic operators are designed to maintain the property that all trees in the population are complete. Crossover swaps subtrees which have the same nonterminal root. To mutate an individual, GPK creates a new, random individual and applies the crossover operator to the individual to be mutated and the random individual. The phenotype language is provided to GPK in the form of a BNF definition.

2.3 Comparison to GADS

GADS (GA for deriving software) was introduced by Paterson & Livesey (1996). A description is also given in 3 of this paper. The main features in common between GADS and the two systems described above are: the genotype and phenotype are distinct, so an ontogenic mapping is necessary to convert from genotype to phenotype (this is not new to GADS; Michalewicz (1994) gives many examples in contexts other than GP); and the genetic process is more or less orthogonal to the phenotype language, the language definition being abstracted from the genetic process. GADS differs from both in that it uses a standard GA engine with standard chromosomes, genes and genetic operators. Very little effort is involved in setting up GADS on top of a GA engine.
From its description, LG could also be implemented on a standard GA engine, provided that the specialised mutation could be supported. GPK is a sophisticated system with many features beyond those described here; it provides its own, specialised, GA engine. Like GPK, GADS accepts a language specification in BNF. The GADS BNF is extended to include a default terminal string for each nonterminal, and production weights. These are seen as language definition features rather than tuning parameters. In LG, the phenotype language is represented by the genetic code and scanning mechanism. This introduces a whole new class of parameter, since the genetic code defines how one codon is repaired into another. Of course, it may be that this has little effect on the general performance of the system, and that it can be automated. Both linear genomes and GPK use mutation. GPK without mutation is limited to rearranging the parts that were present in generation 0. GADS, at least in this paper, sets the mutation rate to 0. A masking effect means that forms can appear in one generation which were not only absent in previous generations, but whose very components were absent. Mutation is not necessary for this.

3 Experimental design

This section describes the problem that was studied and the system that was used to study it. The problem was to find an efficient caching algorithm. The nature of this problem and its terminology is described in 3.1. The form of the solution that we are looking for is a few lines of C source. The wrapper that we used to evaluate its fitness is described in 3.2. The experiment was conducted as a series of GP runs, using the GADS technique. GADS is introduced in Paterson & Livesey (1996). The main components of the system are the GA engine, the ontogenic mapping and the fitness evaluation. The GA engine maintains a population of chromosomes. These are arrays of integers. The GA engine is described in 3.3.
The ontogenic mapping converts chromosomes into phenotypes, which are program fragments in C. This is described in 3.4. Fitness evaluation simulates cache operation using the phenotype, to measure how effective it is at avoiding cache misses. This is described in 3.5.

3.1 The problem

As a program executes, it accesses locations in memory. However, programs do not access all memory locations equally. They tend to access a small subset of their address space (the working set) and ignore the rest, the working set changing slowly (compared to the rate of accesses) over the life of the program. This can be exploited to improve the performance of the program. A cache memory is a small, fast (fast implies expensive, which in turn implies small) memory, large enough to hold the working set the program is using at any time. Instead of accessing main memory, the program accesses cache memory most of the time, and therefore runs faster. The following terminology is used. The executing program emits a stream of requests (addresses of main memory locations to access). A recorded stream is a trace. The set of distinct requests in a stream is its alphabet. A consecutive series of identical requests is a run. The cache has a fixed number of lines, each of one word. Initially the cache is empty, ie all lines are unoccupied. If a request is met from the cache, it is called a hit. Otherwise, it is a miss. When a miss occurs, the requested word is copied from main memory to the cache. If the cache is not yet full, then the first empty line is used. If the cache is full, the caching algorithm is used to choose a victim (ie a word to evict), so that its line can be used for the requested word. A typical caching algorithm is LRU, which is to replace the word that was least recently used. In general a caching algorithm needs some management information on which to base its decision. This information is provided in various ways by different systems; for
example, the lines may be held in a linked list where the order is significant. In this implementation, we provided an integer array info [ ], with one element per line. This is sufficient to implement an algorithm such as LRU, but general enough to find other solutions as well.

3.2 Fitness evaluation: the wrapper

The problem to be solved is not to generate a whole caching system, but just the caching algorithm which lies at its heart. A phenotype takes the form of a file, called phenotype.cc, containing some statements in the C programming language, whose effect is to choose a victim on the basis of some calculation. These statements are inserted (by a #include compiler directive) into a function, called phenotype, that is the immediate wrapper of the phenotype. The function is shown below:

long phenotype (
    long request,    /* address of word to access */
    boolean miss,    /* a miss? 0 = n, 1 = y */
    long line_no     /* of this word; -1 if miss */
)
{
    long victim = 0;
#include "phenotype.cc"
    /* Ensure result is in range */
    return abs (victim) % CACHESIZE;
}

The function phenotype is part of a cache simulator called wrapper.cc. When function phenotype is called, the phenotype being evaluated calculates a value for victim (or leaves it at 0). The return statement maps the value of victim into the valid range. Function phenotype is called by the cache simulator once per request, whether the request is a hit or a miss. This allows the phenotype to update the array info [ ]. The result is only used to update the cache if the request is a miss. wrapper.cc also provides a small selection of constants, variables, information functions and calculation functions, some protected, which the phenotype can use. These are shown in table 1, and are built into the language definition.
Table 1: Phenotype support facilities

    write_x (i, v)   sets info [i] = v
    read_x (i)       info [i]
    small_x ()       index of smallest element of info [ ]
    large_x ()       index of greatest element of info [ ]
    random_x ()      index of random element of info [ ]
    counter ()       successive values 0, 1, 2 etc
    CACHESIZE        count of lines in cache
    div (x, y)       if y == 0 then 1 else x / y
    rem (x, y)       if y == 0 then 1 else x % y

Apart from the protected operations, this language is not tailored for GP. It is designed to be sufficiently powerful to implement a range of typical caching algorithms. For example, LRU could be implemented as follows:

write_x (line_no, counter ());
victim = small_x ();

In fact the function phenotype is slightly more sophisticated than shown above: it has a range of designed caching algorithms such as LRU in addition to the #included one, and a switch statement to choose between them, under control of a command line argument. This allows for calibration and comparison. The designed algorithms play no part in fitness evaluation.

3.3 The GA engine

The GA engine used for this experiment is GAGS-1.0, by Merelo (1996). GAGS is a general-purpose GA system, not especially designed for GADS. It provides a C++ environment in which a programmer can set up a wide range of GA environments. GADS is implemented as a C++ program called ga. The ga program is unspecialised and merely provides a GA environment. Genes are bit strings, but GAGS enables the program to view them as objects of any type. In this experiment, genes were viewed simply as integers in the range 0 to 2^n - 1, where n was as small as possible (5 or 6) but large enough to specify any production in the syntax. Chromosomes were fixed at 500 genes. This was chosen after some brief experiments to generate sentences from the BNF file. GAGS supports variable-length chromosomes but this feature was not used. The population was fixed at 500 individuals. The genes of the initial population are randomly generated with a uniform distribution.
The same random seed was used in all runs. Phenotypes are generated in C by the ontogenic mapping procedure. See 3.4 for details. Phenotypes are evaluated using a cache simulator. See 3.5 for details. The number of generations was limited to 10 or 20. This was found to be sufficient to show convergence. Mutation was not used. Elitism was set at 20%, so that the best 20% of each generation was carried on to the next. The best individual from each generation was noted, and the result of the run was the best individual from the last generation.

3.4 Ontogenic mapping

This section describes how the genotype, which is a list of integers, is converted into a fragment of C program. The ontogenic mapping procedure is called by the ga program as the first step in evaluating the fitness of a chromosome. The ontogenic mapping uses a specification of the phenotype language in an extended BNF (Backus normal form). For example, the conventional BNF definitions:

<frag>  ::= <stmts>;
<stmts> ::= <stmt>
<stmts> ::= <stmt>; <stmts>

is written as:
<frag>  {} ::= <stmts>;
<stmts> {} ::= <stmt>
           ::= <stmt>; <stmts>

The terminal string {} is the default value of both <frag> and <stmts>. In all cases, a minimal default string was used. For example, <fun> represents a function call, but an unevaluated <fun> is repaired to the constant 0L. Space limitations preclude listing the actual BNF file used. The range of statement forms included a conditional, a write to info [ ], and an assignment to the variable victim. Arithmetic expressions can be combined in the usual ways. Protected functions are used for division and remainder. The base cases for expressions include the defined term CACHESIZE (ie the number of lines in the cache), numbers composed of a mantissa (1, 2 or 5) followed by any number of zeros, variables describing the conditions at the time of the call, and a range of protected functions to access info [ ]. Most of these are described in Table 1. Before the GA engine is compiled, a pre-processor (written in perl) translates the BNF file into a C++ data structure, with the BNF as an array of productions. This data structure is #included into the ontogenic mapping procedure. During execution, each phenotype is generated from a chromosome as a parse tree. The tree is initialised to a single node which represents the unexpanded start symbol <frag>. Each gene is then used in turn. If the value of the gene is in the subscript range of the BNF array, it identifies a production. If the nonterminal left hand side of that production can be found in the parse tree, it is expanded to the right hand side, which in general involves adding a mixture of terminal strings and new unexpanded nonterminal nodes to the parse tree. After all genes in the chromosome have been used, the parse tree is traversed and terminal strings printed to a file. Residual nonterminal nodes are replaced by their default terminal strings as specified in the BNF.
The end result is a file containing a number of statements in the phenotype language C. A weakness in this approach is that as the number of productions grows, the chance of any particular production being chosen by a gene is reduced. This can be offset by increasing the population size, or the chromosome length, or by replicating selected productions in the BNF. All of these strategies were adopted to some degree. For example, a total of 6 identical productions for <frag> were used in the actual BNF file.

3.5 Fitness evaluation

To evaluate the fitness of a phenotype, we compile it into a cache simulator called wrapper, and simulate the cache with a trace file. The effectiveness of a caching algorithm is usually measured by its hit rate: number of hits / number of requests, which can conveniently be expressed as a percentage. For GP, a slightly different measurement was used. The raw fitness of a caching algorithm is defined as the number of misses it avoids. The number of misses that can occur is bounded both above and below, depending on the particular trace file, cache size, and the caching algorithm. The trace file is part of the problem definition, while cache size and the caching algorithm are part of the solution. The upper bound is the number of runs in the trace. This bound is achieved if cache size is 1, which means that there is no choice about which word to evict. This is independent of the caching algorithm. A lower bound on the number of misses is the size of the alphabet, since each word must be read at least once. A perfect caching algorithm would not read any word more than once. Whether this is achievable for any given trace file and cache size is moot. An alternative interpretation of this bound is that it corresponds to a cache size large enough to hold all of the words of main memory. Again, this is independent of the caching algorithm. Both upper and lower bounds on the number of misses are independent of the caching algorithm.
The actual performance for a given trace file depends on the cache size and caching algorithm, but must lie between the bounds. The raw fitness of a caching algorithm is measured as:

    number of runs - number of misses

The better the algorithm, the fewer misses, and so the larger the fitness. Three trace files from Flanagan et al (1992) were used. One of these (ken ) was used for fitness evaluation; it comprises about requests with about distinct addresses. The other two traces (ken and ken ) were used to compare the algorithms outside the GP process. To cut down the cache simulator's memory requirements, the addresses were grouped into blocks of 256 words by removing the low order 10 bits. This reduced the alphabet size to about 500. However, we suspect that this may have been a false economy. It would be possible to generate trace data randomly, but any conclusions would then have to be ratified with real data. To avoid introducing this step we use real data from the start.

4 Experimental results

The experiment consisted of a series of GP runs.

4.1 GP1: small cache size

In this run, an exceptionally small cache size of 20 lines was used. The best-of-run, with misses, appeared in generation 1:

victim = random_x ();

No further improvement was shown in the following 8 generations of this run. Random is the best of the designed algorithms for this size of cache.

4.2 GP2: large cache size

Since a caching algorithm works by exploiting patterns in the request stream, and the cache size is the window in which these patterns can be detected, smaller cache sizes make pattern detection harder. It is therefore no surprise that random is the best we can do. Table 9 shows that as the cache size grows, more intelligent caching algorithms outperform random. The second run was therefore based on
increasing the cache size from 20 to 200. The best-of-run, with misses, appeared in generation 8:

victim = rem ((rem ((rem ((CACHESIZE), read_x (request + miss))), counter ())), read_x (miss + small_x ()));
victim = div (CACHESIZE, CACHESIZE);
victim = counter () * (read_x (2L) - 5L);

It would appear that only the last assignment to victim has any effect in any of these cases. However, the earlier assignments use counter (), which has a side effect. No further generations were evolved, even though improvement looked possible. At misses, the generated caching algorithm outperforms LRU by almost 14%.

4.3 GP3: no constants

A concern about the quality of evolved caching algorithms is that they are over-specialised for the particular cases used in their fitness evaluation. To make it harder for GADS to find an over-fitted algorithm, the next experiment was to remove the capacity to generate numbers, making arbitrary constants less likely to evolve. This is done simply by deleting the relevant lines of the BNF file. The best-of-run, with misses, appeared in generation 4:

victim = random_x () + counter ();

Surprisingly, imposing this constraint improved the final performance. At misses, the generated caching algorithm outperforms LRU by over 15%.

4.4 GP4: using info [ ]

Although the algorithms discovered so far outperform LRU, none of them make any use of the info [ ] array. To see if this array is useful, we modify the syntax further to force its use.
This is done by redefining <frag> as follows:

<frag> {} ::= write_x (<expr>, <expr>);
          ::= victim = read_x (<expr>);

The best-of-run, with misses, appeared in generation 3:

write_x ((CACHESIZE + (CACHESIZE + (large_x () * CACHESIZE))), counter () + CACHESIZE);
victim = read_x (rem ((div (((div (large_x (), line_no)) - miss), (div ((CACHESIZE - (counter () * (read_x (div (random_x (), (div (((div (large_x (), line_no)) - miss), (div ((0L - (counter () * (read_x (div (small_x (), miss)) * read_x (0L - (div (line_no, counter ())))))), line_no)))))) * counter ()))), line_no)))), small_x ()));

The experiment was terminated at generation 3 because it was taking such a long time to execute; but in just 3 generations, GADS has produced an even better performance. At misses, the generated caching algorithm outperforms LRU by over 15%.

5 Comparisons

Each run used a particular cache size and trace file. When the evolved algorithms were compared using other cache sizes or trace files, they did not perform as well as designed algorithms. Figure 1 shows a typical case: designed algorithm LRU versus algorithm GP4 (evolved for cache size 200).

[Figure 1: Comparison of GP4 and LRU - misses against cache size]

The designed algorithm has fewer misses than the evolved algorithm everywhere except at the particular cache size. The evolved algorithm is over-fitted. Table 2 shows the fitness of all algorithms, averaged over all cache sizes and all trace files. Fitness is measured as:

    (number of runs - number of misses) / (number of runs - size of alphabet)

which lies in the range [0, 1]. 0% is the worst and 100% is the best that can be achieved.

Table 2: Summary comparison

    LRU        95.39%
    counter    95.39%
    random     95.40%
    constant   %
    GP1        %
    GP2        %
    GP3        %
    GP4        %

It is clear from this that although the GP algorithms do extremely well in specific cases, their average performance is not particularly good.
However, the fitness figures for designed algorithms are unexpectedly close: the possibility of a problem in the simulation is discussed in 6.

6 Conclusions

One criticism of the experimental design, raised in review, is that it has two separate aims, namely to introduce a new GP technique and to solve a genuine problem. This was deliberate, as success would have validated the technique in one step.
Success with a toy problem is less than convincing. In the end the evolved algorithms are not as good as the designed algorithms, but for other reasons we cannot conclude that the technique itself is at fault. First, the GA parameters may have been unreasonable. One reviewer thought 1% elitism more realistic than 20%. Another questioned the zero mutation rate. To some extent, these are criticisms of the technique rather than the particular experiment. Too many experiments seem to solve the same problem many times to find "optimal" parameter settings. If optimum settings are necessary for a solution, the technique is of limited value, because in many problems we do not know what the solution is and therefore have no way to know when the settings are optimal. To that extent, this experiment shows GADS to be of limited value. However, it is possible that GADS would operate well over a wide range of settings, and that the particular settings we chose are simply unreasonable. Further experiments would answer this question. Second, modifying the traces by removing the low order 10 bits may have destroyed the value of the traces. The trace ken has requests and an alphabet of about addresses. This gives an average run of 1.6 requests. After removing 10 bits, the alphabet drops to about 500 addresses, giving an average run of 800. A change of such magnitude might be expected to cause problems; but it was not until the comparison of designed algorithms was summarised in table 2 that doubts were expressed. The designed algorithms' performances are too close. In short there are flaws in the experimental design. Nonetheless some valuable conclusions can be drawn. First, getting the experimental design right may take more than one attempt. This can hardly be a new conclusion but it is surely worth mentioning, especially given that some effort was spent in the hope of getting this one right first time! Second, the over-fitting shows clearly that evolution is taking place.
GADS succeeds, in very few generations, in exploiting patterns in the trace. Over-fitting is usually seen as a problem, but in fact could be useful. The advantage of designed caching algorithms is that they work well in a wide range of situations; the advantage of over-fitted algorithms is that they work even better in certain niches. An adaptive system that could use this might be of value. For example, an adaptive caching system could maintain a steady-state population of caching algorithms, using the better ones to deal with incoming requests, modifying their fitness, and breeding them. As the request stream changed, algorithms that were less fit would become more suited and would rise in the population. Mutation would prevent the population from converging. Third, the ease of changing languages with GADS is clearly demonstrated. Minor changes to a BNF file were all that was required to produce a range of tailored solutions. Fourth, GADS must be modified to cope with larger language definitions. The current approach cannot scale up because the probability of a gene selecting a production with a particular left-hand-side becomes too low as the number of productions increases. Fifth, GADS has shown that it can be used on a "real" problem. The caching algorithm problem was chosen because it is of interest outside the GP community. It was attacked by a combination of human skill in choosing the phenotype language, in particular the function set, and in directing the GADS runs, interpreting the results, and modifying the approach. GADS was used to explore areas chosen by the human. GADS did not provide the turnkey service that was originally hoped for, but it was a very effective junior partner.

Bibliography

Flanagan, K, Grimsrud, K, Archibald, J & Nelson, B (1992). BACH: BYU Address Collection Hardware. Technical report TR-A, Electrical and Computer Engineering Department, Brigham Young University.

Holland, John (1975). Adaptation in natural and artificial systems. MIT Press.

Hörner, Helmut (1996). A C++ class library for genetic programming. Vienna University of Economics.

Keller, Robert E & Banzhaf, Wolfgang (1996). Genetic programming using mutation, reproduction and genotype-phenotype mapping from linear binary genomes into linear LALR(1) phenotypes. In Koza, Goldberg, Fogel & Riolo, eds, Genetic Programming 1996: Proceedings of the First Annual Conference. MIT Press.

Koza, John R (1993). Genetic programming. MIT Press.

Merelo, J J (1996). Genetic algorithms from Granada, Spain. ftp://kal-el.ugr.es/gags/gags-1.0.tar.gz.

Michalewicz, Zbigniew (1994). Genetic algorithms + Data structures = Evolution programs. Springer-Verlag.

Paterson, Norman & Livesey, Mike (1996). Distinguishing genotype and phenotype in genetic programming. In Koza, Goldberg, Fogel & Riolo, eds, Late Breaking Papers at GP. MIT Press.
More informationReducing Graphic Conflict In Scale Reduced Maps Using A Genetic Algorithm
Reducing Graphic Conflict In Scale Reduced Maps Using A Genetic Algorithm Dr. Ian D. Wilson School of Technology, University of Glamorgan, Pontypridd CF37 1DL, UK Dr. J. Mark Ware School of Computing,
More informationEvolving Hierarchical and Recursive Teleo-reactive Programs through Genetic Programming
Evolving Hierarchical and Recursive Teleo-reactive Programs through Genetic Programming Mykel J. Kochenderfer Department of Computer Science Stanford University Stanford, California 94305 mykel@cs.stanford.edu
More informationComputational Intelligence
Computational Intelligence Module 6 Evolutionary Computation Ajith Abraham Ph.D. Q What is the most powerful problem solver in the Universe? ΑThe (human) brain that created the wheel, New York, wars and
More informationChapter 3: CONTEXT-FREE GRAMMARS AND PARSING Part 1
Chapter 3: CONTEXT-FREE GRAMMARS AND PARSING Part 1 1. Introduction Parsing is the task of Syntax Analysis Determining the syntax, or structure, of a program. The syntax is defined by the grammar rules
More informationADAPTATION OF REPRESENTATION IN GP
1 ADAPTATION OF REPRESENTATION IN GP CEZARY Z. JANIKOW University of Missouri St. Louis Department of Mathematics and Computer Science St Louis, Missouri RAHUL A DESHPANDE University of Missouri St. Louis
More informationAn improved representation for evolving programs
Loughborough University Institutional Repository An improved representation for evolving programs This item was submitted to Loughborough University's Institutional Repository by the/an author. Citation:
More informationEvolutionary Computation Part 2
Evolutionary Computation Part 2 CS454, Autumn 2017 Shin Yoo (with some slides borrowed from Seongmin Lee @ COINSE) Crossover Operators Offsprings inherit genes from their parents, but not in identical
More informationA Comparison of Several Linear Genetic Programming Techniques
A Comparison of Several Linear Genetic Programming Techniques Mihai Oltean Crina Groşan Department of Computer Science, Faculty of Mathematics and Computer Science, Babes-Bolyai University, Kogalniceanu
More informationGenetic Programming Part 1
Genetic Programming Part 1 Evolutionary Computation Lecture 11 Thorsten Schnier 06/11/2009 Previous Lecture Multi-objective Optimization Pareto optimality Hyper-volume based indicators Recent lectures
More informationA More Stable Approach To LISP Tree GP
A More Stable Approach To LISP Tree GP Joseph Doliner August 15, 2008 Abstract In this paper we begin by familiarising ourselves with the basic concepts of Evolutionary Computing and how it can be used
More informationGenetic Programming of Autonomous Agents. Functional Requirements List and Performance Specifi cations. Scott O'Dell
Genetic Programming of Autonomous Agents Functional Requirements List and Performance Specifi cations Scott O'Dell Advisors: Dr. Joel Schipper and Dr. Arnold Patton November 23, 2010 GPAA 1 Project Goals
More informationThis book is licensed under a Creative Commons Attribution 3.0 License
6. Syntax Learning objectives: syntax and semantics syntax diagrams and EBNF describe context-free grammars terminal and nonterminal symbols productions definition of EBNF by itself parse tree grammars
More informationPrevious Lecture Genetic Programming
Genetic Programming Previous Lecture Constraint Handling Penalty Approach Penalize fitness for infeasible solutions, depending on distance from feasible region Balanace between under- and over-penalization
More informationEvolutionary Computation. Chao Lan
Evolutionary Computation Chao Lan Outline Introduction Genetic Algorithm Evolutionary Strategy Genetic Programming Introduction Evolutionary strategy can jointly optimize multiple variables. - e.g., max
More informationUsing Genetic Programming to Evolve a General Purpose Sorting Network for Comparable Data Sets
Using Genetic Programming to Evolve a General Purpose Sorting Network for Comparable Data Sets Peter B. Lubell-Doughtie Stanford Symbolic Systems Program Stanford University P.O. Box 16044 Stanford, California
More informationUsing Genetic Algorithms in Integer Programming for Decision Support
Doi:10.5901/ajis.2014.v3n6p11 Abstract Using Genetic Algorithms in Integer Programming for Decision Support Dr. Youcef Souar Omar Mouffok Taher Moulay University Saida, Algeria Email:Syoucef12@yahoo.fr
More informationOne-Point Geometric Crossover
One-Point Geometric Crossover Alberto Moraglio School of Computing and Center for Reasoning, University of Kent, Canterbury, UK A.Moraglio@kent.ac.uk Abstract. Uniform crossover for binary strings has
More informationOn the Locality of Grammatical Evolution
On the Locality of Grammatical Evolution Franz Rothlauf and Marie Oetzel Department of Business Administration and Information Systems University of Mannheim, 68131 Mannheim/Germany rothlauf@uni-mannheim.de
More informationMINIMAL EDGE-ORDERED SPANNING TREES USING A SELF-ADAPTING GENETIC ALGORITHM WITH MULTIPLE GENOMIC REPRESENTATIONS
Proceedings of Student/Faculty Research Day, CSIS, Pace University, May 5 th, 2006 MINIMAL EDGE-ORDERED SPANNING TREES USING A SELF-ADAPTING GENETIC ALGORITHM WITH MULTIPLE GENOMIC REPRESENTATIONS Richard
More informationOptimization of the Throughput of Computer Network Based on Parallel EA
Optimization of the Throughput of Computer Network Based on Parallel EA Imrich Rukovansky Abstract This paper describes Parallel Grammatical Evolution (PGE) that can be together with clustering used for
More informationCompiling and Interpreting Programming. Overview of Compilers and Interpreters
Copyright R.A. van Engelen, FSU Department of Computer Science, 2000 Overview of Compilers and Interpreters Common compiler and interpreter configurations Virtual machines Integrated programming environments
More informationBinary Representations of Integers and the Performance of Selectorecombinative Genetic Algorithms
Binary Representations of Integers and the Performance of Selectorecombinative Genetic Algorithms Franz Rothlauf Department of Information Systems University of Bayreuth, Germany franz.rothlauf@uni-bayreuth.de
More informationRobust Gene Expression Programming
Available online at www.sciencedirect.com Procedia Computer Science 6 (2011) 165 170 Complex Adaptive Systems, Volume 1 Cihan H. Dagli, Editor in Chief Conference Organized by Missouri University of Science
More informationJEvolution: Evolutionary Algorithms in Java
Computational Intelligence, Simulation, and Mathematical Models Group CISMM-21-2002 May 19, 2015 JEvolution: Evolutionary Algorithms in Java Technical Report JEvolution V0.98 Helmut A. Mayer helmut@cosy.sbg.ac.at
More informationGenetic Programming for Data Classification: Partitioning the Search Space
Genetic Programming for Data Classification: Partitioning the Search Space Jeroen Eggermont jeggermo@liacs.nl Joost N. Kok joost@liacs.nl Walter A. Kosters kosters@liacs.nl ABSTRACT When Genetic Programming
More informationGenetic Programming. Modern optimization methods 1
Genetic Programming Developed in USA during 90 s Patented by J. Koza Solves typical problems: Prediction, classification, approximation, programming Properties Competitor of neural networks Need for huge
More informationPart 5 Program Analysis Principles and Techniques
1 Part 5 Program Analysis Principles and Techniques Front end 2 source code scanner tokens parser il errors Responsibilities: Recognize legal programs Report errors Produce il Preliminary storage map Shape
More informationGen := 0. Create Initial Random Population. Termination Criterion Satisfied? Yes. Evaluate fitness of each individual in population.
An Experimental Comparison of Genetic Programming and Inductive Logic Programming on Learning Recursive List Functions Lappoon R. Tang Mary Elaine Cali Raymond J. Mooney Department of Computer Sciences
More informationAutomating Test Driven Development with Grammatical Evolution
http://excel.fit.vutbr.cz Automating Test Driven Development with Grammatical Evolution Jan Svoboda* Abstract Test driven development is a widely used process of creating software products with automated
More informationMDL-based Genetic Programming for Object Detection
MDL-based Genetic Programming for Object Detection Yingqiang Lin and Bir Bhanu Center for Research in Intelligent Systems University of California, Riverside, CA, 92521, USA Email: {yqlin, bhanu}@vislab.ucr.edu
More informationGenetic Programming: A study on Computer Language
Genetic Programming: A study on Computer Language Nilam Choudhary Prof.(Dr.) Baldev Singh Er. Gaurav Bagaria Abstract- this paper describes genetic programming in more depth, assuming that the reader is
More informationIntroduction to Evolutionary Computation
Introduction to Evolutionary Computation The Brought to you by (insert your name) The EvoNet Training Committee Some of the Slides for this lecture were taken from the Found at: www.cs.uh.edu/~ceick/ai/ec.ppt
More informationDr. D.M. Akbar Hussain
Syntax Analysis Parsing Syntax Or Structure Given By Determines Grammar Rules Context Free Grammar 1 Context Free Grammars (CFG) Provides the syntactic structure: A grammar is quadruple (V T, V N, S, R)
More informationMetaheuristic Optimization with Evolver, Genocop and OptQuest
Metaheuristic Optimization with Evolver, Genocop and OptQuest MANUEL LAGUNA Graduate School of Business Administration University of Colorado, Boulder, CO 80309-0419 Manuel.Laguna@Colorado.EDU Last revision:
More informationCombinational Circuit Design Using Genetic Algorithms
Combinational Circuit Design Using Genetic Algorithms Nithyananthan K Bannari Amman institute of technology M.E.Embedded systems, Anna University E-mail:nithyananthan.babu@gmail.com Abstract - In the paper
More informationA Simple Syntax-Directed Translator
Chapter 2 A Simple Syntax-Directed Translator 1-1 Introduction The analysis phase of a compiler breaks up a source program into constituent pieces and produces an internal representation for it, called
More informationEECS 6083 Intro to Parsing Context Free Grammars
EECS 6083 Intro to Parsing Context Free Grammars Based on slides from text web site: Copyright 2003, Keith D. Cooper, Ken Kennedy & Linda Torczon, all rights reserved. 1 Parsing sequence of tokens parser
More informationEvolving Teleo-Reactive Programs for Block Stacking using Indexicals through Genetic Programming
Evolving Teleo-Reactive Programs for Block Stacking using Indexicals through Genetic Programming Mykel J. Kochenderfer 6 Abrams Court, Apt. 6F Stanford, CA 95 65-97-75 mykel@cs.stanford.edu Abstract This
More informationA Genetic Algorithm for Graph Matching using Graph Node Characteristics 1 2
Chapter 5 A Genetic Algorithm for Graph Matching using Graph Node Characteristics 1 2 Graph Matching has attracted the exploration of applying new computing paradigms because of the large number of applications
More informationGenetic Algorithms. PHY 604: Computational Methods in Physics and Astrophysics II
Genetic Algorithms Genetic Algorithms Iterative method for doing optimization Inspiration from biology General idea (see Pang or Wikipedia for more details): Create a collection of organisms/individuals
More informationGenetic Algorithms Variations and Implementation Issues
Genetic Algorithms Variations and Implementation Issues CS 431 Advanced Topics in AI Classic Genetic Algorithms GAs as proposed by Holland had the following properties: Randomly generated population Binary
More informationISSN: [Keswani* et al., 7(1): January, 2018] Impact Factor: 4.116
IJESRT INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY AUTOMATIC TEST CASE GENERATION FOR PERFORMANCE ENHANCEMENT OF SOFTWARE THROUGH GENETIC ALGORITHM AND RANDOM TESTING Bright Keswani,
More informationConstructing an Optimisation Phase Using Grammatical Evolution. Brad Alexander and Michael Gratton
Constructing an Optimisation Phase Using Grammatical Evolution Brad Alexander and Michael Gratton Outline Problem Experimental Aim Ingredients Experimental Setup Experimental Results Conclusions/Future
More informationTHE RESEARCH area of automatic heuristic generation is
406 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 16, NO. 3, JUNE 2012 Grammatical Evolution of Local Search Heuristics Edmund K. Burke, Member, IEEE, Matthew R. Hyde, Member, IEEE, and Graham Kendall,
More informationArtificial Intelligence Application (Genetic Algorithm)
Babylon University College of Information Technology Software Department Artificial Intelligence Application (Genetic Algorithm) By Dr. Asaad Sabah Hadi 2014-2015 EVOLUTIONARY ALGORITHM The main idea about
More informationDETERMINING MAXIMUM/MINIMUM VALUES FOR TWO- DIMENTIONAL MATHMATICLE FUNCTIONS USING RANDOM CREOSSOVER TECHNIQUES
DETERMINING MAXIMUM/MINIMUM VALUES FOR TWO- DIMENTIONAL MATHMATICLE FUNCTIONS USING RANDOM CREOSSOVER TECHNIQUES SHIHADEH ALQRAINY. Department of Software Engineering, Albalqa Applied University. E-mail:
More informationGENETIC ALGORITHM with Hands-On exercise
GENETIC ALGORITHM with Hands-On exercise Adopted From Lecture by Michael Negnevitsky, Electrical Engineering & Computer Science University of Tasmania 1 Objective To understand the processes ie. GAs Basic
More informationGenetic Programming for Julia: fast performance and parallel island model implementation
Genetic Programming for Julia: fast performance and parallel island model implementation Morgan R. Frank November 30, 2015 Abstract I introduce a Julia implementation for genetic programming (GP), which
More informationAdaptive Crossover in Genetic Algorithms Using Statistics Mechanism
in Artificial Life VIII, Standish, Abbass, Bedau (eds)(mit Press) 2002. pp 182 185 1 Adaptive Crossover in Genetic Algorithms Using Statistics Mechanism Shengxiang Yang Department of Mathematics and Computer
More informationProgramming Languages Third Edition
Programming Languages Third Edition Chapter 12 Formal Semantics Objectives Become familiar with a sample small language for the purpose of semantic specification Understand operational semantics Understand
More informationGeometric Semantic Genetic Programming ~ Theory & Practice ~
Geometric Semantic Genetic Programming ~ Theory & Practice ~ Alberto Moraglio University of Exeter 25 April 2017 Poznan, Poland 2 Contents Evolutionary Algorithms & Genetic Programming Geometric Genetic
More informationPartitioning Sets with Genetic Algorithms
From: FLAIRS-00 Proceedings. Copyright 2000, AAAI (www.aaai.org). All rights reserved. Partitioning Sets with Genetic Algorithms William A. Greene Computer Science Department University of New Orleans
More informationOptimization of Benchmark Functions Using Genetic Algorithm
Optimization of Benchmark s Using Genetic Algorithm Vinod Goyal GJUS&T, Hisar Sakshi Dhingra GJUS&T, Hisar Jyoti Goyat GJUS&T, Hisar Dr Sanjay Singla IET Bhaddal Technical Campus, Ropar, Punjab Abstrat
More informationIntroduction to Genetic Algorithms. Based on Chapter 10 of Marsland Chapter 9 of Mitchell
Introduction to Genetic Algorithms Based on Chapter 10 of Marsland Chapter 9 of Mitchell Genetic Algorithms - History Pioneered by John Holland in the 1970s Became popular in the late 1980s Based on ideas
More informationTelecommunication and Informatics University of North Carolina, Technical University of Gdansk Charlotte, NC 28223, USA
A Decoder-based Evolutionary Algorithm for Constrained Parameter Optimization Problems S lawomir Kozie l 1 and Zbigniew Michalewicz 2 1 Department of Electronics, 2 Department of Computer Science, Telecommunication
More informationINTERACTIVE MULTI-OBJECTIVE GENETIC ALGORITHMS FOR THE BUS DRIVER SCHEDULING PROBLEM
Advanced OR and AI Methods in Transportation INTERACTIVE MULTI-OBJECTIVE GENETIC ALGORITHMS FOR THE BUS DRIVER SCHEDULING PROBLEM Jorge PINHO DE SOUSA 1, Teresa GALVÃO DIAS 1, João FALCÃO E CUNHA 1 Abstract.
More informationArtificial Neural Network based Curve Prediction
Artificial Neural Network based Curve Prediction LECTURE COURSE: AUSGEWÄHLTE OPTIMIERUNGSVERFAHREN FÜR INGENIEURE SUPERVISOR: PROF. CHRISTIAN HAFNER STUDENTS: ANTHONY HSIAO, MICHAEL BOESCH Abstract We
More informationAutomated Program Repair through the Evolution of Assembly Code
Automated Program Repair through the Evolution of Assembly Code Eric Schulte University of New Mexico 08 August 2010 1 / 26 Introduction We present a method of automated program repair through the evolution
More informationCOMP-421 Compiler Design. Presented by Dr Ioanna Dionysiou
COMP-421 Compiler Design Presented by Dr Ioanna Dionysiou Administrative! Any questions about the syllabus?! Course Material available at www.cs.unic.ac.cy/ioanna! Next time reading assignment [ALSU07]
More informationHierarchical Crossover in Genetic Algorithms
Hierarchical Crossover in Genetic Algorithms P. J. Bentley* & J. P. Wakefield Abstract This paper identifies the limitations of conventional crossover in genetic algorithms when operating on two chromosomes
More informationGenetic Programming. Charles Chilaka. Department of Computational Science Memorial University of Newfoundland
Genetic Programming Charles Chilaka Department of Computational Science Memorial University of Newfoundland Class Project for Bio 4241 March 27, 2014 Charles Chilaka (MUN) Genetic algorithms and programming
More informationChapter 3. Describing Syntax and Semantics
Chapter 3 Describing Syntax and Semantics Chapter 3 Topics Introduction The General Problem of Describing Syntax Formal Methods of Describing Syntax Attribute Grammars Describing the Meanings of Programs:
More informationGenetic Algorithms and Genetic Programming Lecture 7
Genetic Algorithms and Genetic Programming Lecture 7 Gillian Hayes 13th October 2006 Lecture 7: The Building Block Hypothesis The Building Block Hypothesis Experimental evidence for the BBH The Royal Road
More informationLecture 6: The Building Block Hypothesis. Genetic Algorithms and Genetic Programming Lecture 6. The Schema Theorem Reminder
Lecture 6: The Building Block Hypothesis 1 Genetic Algorithms and Genetic Programming Lecture 6 Gillian Hayes 9th October 2007 The Building Block Hypothesis Experimental evidence for the BBH The Royal
More informationMeta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic Algorithm and Particle Swarm Optimization
2017 2 nd International Electrical Engineering Conference (IEEC 2017) May. 19 th -20 th, 2017 at IEP Centre, Karachi, Pakistan Meta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic
More informationCoevolving Functions in Genetic Programming: Classification using K-nearest-neighbour
Coevolving Functions in Genetic Programming: Classification using K-nearest-neighbour Manu Ahluwalia Intelligent Computer Systems Centre Faculty of Computer Studies and Mathematics University of the West
More informationMAXIMUM LIKELIHOOD ESTIMATION USING ACCELERATED GENETIC ALGORITHMS
In: Journal of Applied Statistical Science Volume 18, Number 3, pp. 1 7 ISSN: 1067-5817 c 2011 Nova Science Publishers, Inc. MAXIMUM LIKELIHOOD ESTIMATION USING ACCELERATED GENETIC ALGORITHMS Füsun Akman
More informationA GENETIC ALGORITHM FOR CLUSTERING ON VERY LARGE DATA SETS
A GENETIC ALGORITHM FOR CLUSTERING ON VERY LARGE DATA SETS Jim Gasvoda and Qin Ding Department of Computer Science, Pennsylvania State University at Harrisburg, Middletown, PA 17057, USA {jmg289, qding}@psu.edu
More informationEvolving Efficient Security Systems Under Budget Constraints Using Genetic Algorithms
Proceedings of Student Research Day, CSIS, Pace University, May 9th, 2003 Evolving Efficient Security Systems Under Budget Constraints Using Genetic Algorithms Michael L. Gargano, William Edelson, Paul
More informationGenetic Algorithm for Dynamic Capacitated Minimum Spanning Tree
28 Genetic Algorithm for Dynamic Capacitated Minimum Spanning Tree 1 Tanu Gupta, 2 Anil Kumar 1 Research Scholar, IFTM, University, Moradabad, India. 2 Sr. Lecturer, KIMT, Moradabad, India. Abstract Many
More informationCMSC 330: Organization of Programming Languages. Architecture of Compilers, Interpreters
: Organization of Programming Languages Context Free Grammars 1 Architecture of Compilers, Interpreters Source Scanner Parser Static Analyzer Intermediate Representation Front End Back End Compiler / Interpreter
More informationWhere We Are. CMSC 330: Organization of Programming Languages. This Lecture. Programming Languages. Motivation for Grammars
CMSC 330: Organization of Programming Languages Context Free Grammars Where We Are Programming languages Ruby OCaml Implementing programming languages Scanner Uses regular expressions Finite automata Parser
More information2.2 Syntax Definition
42 CHAPTER 2. A SIMPLE SYNTAX-DIRECTED TRANSLATOR sequence of "three-address" instructions; a more complete example appears in Fig. 2.2. This form of intermediate code takes its name from instructions
More informationGenetic Programming. and its use for learning Concepts in Description Logics
Concepts in Description Artificial Intelligence Institute Computer Science Department Dresden Technical University May 29, 2006 Outline Outline: brief introduction to explanation of the workings of a algorithm
More informationSemantics via Syntax. f (4) = if define f (x) =2 x + 55.
1 Semantics via Syntax The specification of a programming language starts with its syntax. As every programmer knows, the syntax of a language comes in the shape of a variant of a BNF (Backus-Naur Form)
More informationEvolutionary Algorithms
Evolutionary Algorithms Proposal for a programming project for INF431, Spring 2014 version 14-02-19+23:09 Benjamin Doerr, LIX, Ecole Polytechnique Difficulty * *** 1 Synopsis This project deals with the
More information1. Introduction. 2. Motivation and Problem Definition. Volume 8 Issue 2, February Susmita Mohapatra
Pattern Recall Analysis of the Hopfield Neural Network with a Genetic Algorithm Susmita Mohapatra Department of Computer Science, Utkal University, India Abstract: This paper is focused on the implementation
More informationCOP4020 Programming Languages. Compilers and Interpreters Robert van Engelen & Chris Lacher
COP4020 ming Languages Compilers and Interpreters Robert van Engelen & Chris Lacher Overview Common compiler and interpreter configurations Virtual machines Integrated development environments Compiler
More informationStack-Based Genetic Programming
Stack-Based Genetic Programming Timothy Perkis 1048 Neilson St., Albany, CA 94706 email: timper@holonet.net Abstract Some recent work in the field of Genetic Programming (GP) has been concerned with finding
More informationGenetic Algorithm Performance with Different Selection Methods in Solving Multi-Objective Network Design Problem
etic Algorithm Performance with Different Selection Methods in Solving Multi-Objective Network Design Problem R. O. Oladele Department of Computer Science University of Ilorin P.M.B. 1515, Ilorin, NIGERIA
More informationCOP4020 Programming Languages. Syntax Prof. Robert van Engelen
COP4020 Programming Languages Syntax Prof. Robert van Engelen Overview n Tokens and regular expressions n Syntax and context-free grammars n Grammar derivations n More about parse trees n Top-down and
More informationGenetic Image Network for Image Classification
Genetic Image Network for Image Classification Shinichi Shirakawa, Shiro Nakayama, and Tomoharu Nagao Graduate School of Environment and Information Sciences, Yokohama National University, 79-7, Tokiwadai,
More informationEvolutionary Art with Cartesian Genetic Programming
Evolutionary Art with Cartesian Genetic Programming Laurence Ashmore 1, and Julian Francis Miller 2 1 Department of Informatics, University of Sussex, Falmer, BN1 9QH, UK emoai@hotmail.com http://www.gaga.demon.co.uk/
More informationCONCEPT FORMATION AND DECISION TREE INDUCTION USING THE GENETIC PROGRAMMING PARADIGM
1 CONCEPT FORMATION AND DECISION TREE INDUCTION USING THE GENETIC PROGRAMMING PARADIGM John R. Koza Computer Science Department Stanford University Stanford, California 94305 USA E-MAIL: Koza@Sunburn.Stanford.Edu
More information