
Evolution Strategies, Network Random Keys, and the One-Max Tree Problem

Barbara Schindler, Franz Rothlauf, and Hans-Josef Pesch
Department of Information Systems, University of Bayreuth, Germany
Department of Applied Mathematics, University of Bayreuth, Germany
barbara.schindler@stud.uni-bayreuth.de, rothlauf@uni-bayreuth.de, hans-josef.pesch@uni-bayreuth.de

Abstract. Evolution strategies (ES) are efficient optimization methods for continuous problems. However, many combinatorial optimization problems cannot be represented using continuous representations. The development of the network random key representation, which represents trees by using real numbers, allows one to use ES for combinatorial tree problems. In this paper we apply ES to tree problems using the network random key representation. We examine whether existing recommendations regarding optimal parameter settings for ES, which were developed for the easy sphere and corridor models, are also valid for the easy one-max tree problem. The results show that the 1/5-success rule for the (1 + 1)-ES results in low performance because the standard deviation is continuously reduced and we get early convergence. However, for the (µ + λ)-ES and the (µ, λ)-ES the recommendations from the literature are confirmed for the mutation parameters τ and τ' and the ratio µ/λ. This paper illustrates how existing theory about ES is helpful in finding good parameter settings for new problems like the one-max tree problem.

1 Introduction

Evolution strategies [1-3] are a class of direct, probabilistic search and optimization methods gleaned from the model of organic evolution. In contrast to genetic algorithms (GAs) [4], which work on binary strings and process schemata, ES have been dedicated to continuous optimization problems. The main operator of ES is mutation, whereas recombination is only important for the self-adaption of the strategy parameters.

Network random keys (NetKeys) have been proposed by [5] as a way to represent trees with continuous variables. This work was based on [6], which allows one to represent a permutation by a sequence of continuous variables. In this work we investigate the performance of ES for tree problems when using the continuous NetKey representation. Because ES have been designed for solving continuous problems and have shown good performance therein, we expect ES to perform well for network problems when using NetKeys. Furthermore, we want to examine whether the recommendations for the setting of the ES parameters, which were derived for the sphere and the corridor model, are also valid for the easy 8-node one-max tree problem. In analogy to the sphere and corridor models, the one-max tree problem [5, 7] is also easy, and ES are expected to perform well. Finally, we compare the performance of ES and GAs for the one-max tree problem. We want to know which of the two search approaches, mutation versus crossover, performs better for this specific test problem.

The paper is structured as follows. In section 2 we present the NetKey encoding and its major characteristics. This is followed by a short review of the one-max tree problem in section 3. In section 4, after taking a closer look at the different types of ES (subsection 4.1), we analyze the adjustment of the ES parameters for the one-max tree problem (subsection 4.2), and finally compare the performance of ES to GAs (subsection 4.3). The paper ends with concluding remarks.

2 Network Random Keys

This section gives a short overview of the NetKey encoding. Network random keys are adapted random keys (RKs) for the representation of trees. RKs allow us to represent permutations and were first presented in [6]. Like the LNB encoding [8], NetKeys belong to the class of weighted representations. Other tree representations are Prüfer numbers [9], direct encodings [10], or the determinant encoding [11].

When using NetKeys, a key sequence of l random numbers r_i ∈ [0, 1], where i ∈ {1, ..., l}, represents a permutation r_s of length l. From the permutation r_s of length l = n(n − 1)/2, a tree with n nodes and n − 1 links is constructed using the following algorithm:

(1) Let i = 1, let G be an empty graph with n nodes, and let r_s be the permutation of length l = n(n − 1)/2 that can be constructed from the key sequence r. All possible links of G are numbered from 1 to l.
(2) Let j be the number at the ith position of the permutation r_s.
(3) If the insertion of the link with number j in G would not create a cycle, then insert the link with number j in G.
(4) Stop if there are n − 1 links in G.
(5) Increment i and continue with step 2.

With this calculation rule, a unique, valid tree can be constructed from every possible key sequence.
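To make the construction rule concrete, the following minimal Python sketch decodes a key sequence into a spanning tree. All names are our own, and we assume that links with higher key values come first in the permutation r_s, as the discussion of important links below suggests.

    import random

    def decode_netkeys(keys, n):
        """Decode l = n*(n-1)/2 keys into a spanning tree on n nodes."""
        links = [(i, j) for i in range(n) for j in range(i + 1, n)]
        assert len(keys) == len(links)
        # Permutation r_s: link numbers ordered by decreasing key value.
        order = sorted(range(len(links)), key=lambda j: keys[j], reverse=True)
        parent = list(range(n))              # union-find for the cycle test

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v

        tree = []
        for j in order:
            a, b = find(links[j][0]), find(links[j][1])
            if a != b:                       # no cycle: insert link j
                parent[a] = b
                tree.append(links[j])
            if len(tree) == n - 1:           # stop: the tree is complete
                break
        return tree

    # Example: decode a random key sequence for n = 8 nodes.
    n = 8
    keys = [random.random() for _ in range(n * (n - 1) // 2)]
    print(decode_netkeys(keys, n))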

We give some properties of the encoding: standard crossover and mutation operators work properly, and the encoding has high locality and heritability; NetKeys allow a distinction between important and unimportant links; no over- or underspecification of a tree is possible; and the decoding process runs in O(l log(l)), where l = n(n − 1)/2.

Examining NetKeys reveals that the mutation of one key results either in the same tree or in a tree with no more than two different links. Therefore, NetKeys have high locality. Furthermore, standard recombination operators, like one-point or uniform crossover, create offspring that inherit the properties of their parents, that is, they have the same links as their parents. If a link exists in a parent, the value of the corresponding key is high in comparison to the other keys. After recombination, the corresponding key in the offspring has the same, high value and is therefore also used with high probability for the construction of the tree.

A benefit of the NetKey encoding is that genetic and evolutionary algorithms (GEAs) are able to distinguish between important and unimportant links. The algorithm that constructs a tree from the key sequence uses high-quality links with high key values and ensures that they are not lost during the GEA run.

NetKeys always encode valid trees. No over- or underspecification of a tree is possible because the construction rule ensures that only valid solutions are decoded. Thus, no additional repair mechanisms are needed.

The key sequence that is used for representing a tree has length l = n(n − 1)/2. Constructing a tree requires sorting the l keys, which takes O(l log(l)) steps. Therefore, in comparison to other representations like Prüfer numbers, the decoding process is more complex and demanding. For a more detailed description of the NetKey encoding the reader is referred to [5].

3 The One-Max Tree Problem

This section gives a short overview of the one-max tree problem. For further information please refer to [5].

For the one-max tree problem an optimal solution (tree) is chosen either randomly or by hand. The structure of this tree can be determined: it can be a star, a list, or an arbitrary tree with n nodes. In this work we only consider the case where the optimal solution is an arbitrary tree. For the calculation of the fitness of the individuals, the distance d_ab between two trees G_a and G_b is used. It is defined as

    d_ab = 1/2 · Σ_{i=1}^{n} Σ_{j=1}^{i} |l^a_ij − l^b_ij|,

where l^a_ij is 1 if the link from node i to node j exists in tree G_a and 0 if it does not exist in G_a, and n denotes the number of nodes. This definition of the distance between two trees is based on the Hamming distance [12], and d_ab ∈ {0, 1, ..., n − 1}. When using this distance metric for a minimization problem, the fitness of an individual G_i is defined as the distance d_i,opt to the optimal solution G_opt. Therefore, f_i = d_i,opt and f_i ∈ {0, 1, ..., n − 1}. An individual has fitness (cost) n − 2 if it has only one link in common with the best solution. If the two individuals do not differ (G_i = G_opt), the fitness (cost) of G_i is f_i = 0. In this work we only want to use a minimization problem. Because this test problem is similar to the standard one-max problem, it is easy to solve for mutation-based GEAs, but somewhat harder for recombination-based GAs [13].
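Under these definitions the distance simply counts the links of one tree that are missing in the other. A minimal sketch, assuming trees are given as lists of undirected links (as returned by the decoding sketch in section 2) and using our own function names:

    def tree_distance(tree_a, tree_b):
        """d_ab: number of links of tree_a that do not appear in tree_b.

        For two spanning trees this equals 1/2 * sum |l^a_ij - l^b_ij|
        over all node pairs i, j with j <= i."""
        a = {frozenset(link) for link in tree_a}
        b = {frozenset(link) for link in tree_b}
        return len(a - b)                 # == len(b - a) for spanning trees

    def fitness(tree, opt_tree):
        """One-max tree cost f_i = d_i,opt: 0 is optimal (minimization)."""
        return tree_distance(tree, opt_tree)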

4 Performance of Evolution Strategies and Adjustment of Parameters

In this section, after a short introduction to the functionality of evolution strategies, we present an investigation into the adjustment of the ES parameters for the one-max tree problem when using the NetKey encoding. The section ends with a short comparison of ES and GAs for this specific problem.

4.1 A Short Introduction to Evolution Strategies

ES were developed by Rechenberg and Schwefel in the 1960s at the Technical University of Berlin in Germany [14]. First applications were experimental and dealt with hydrodynamical problems like the shape optimization of a bended pipe, drag minimization of a joint plate [2], and the structure optimization of a two-phase flashing nozzle [3].

The simple (1 + 1)-ES uses n-dimensional real-valued vectors and creates one offspring x' = (x'_1, ..., x'_n) from one parent x = (x_1, ..., x_n) by applying mutation with identical standard deviation σ to each parental allele x_i:

    x'_i = x_i + σ · N_i(0, 1),   i = 1, ..., n.

N(0, 1) denotes a normally distributed one-dimensional random variable with expectation zero and standard deviation one. N_i(0, 1) indicates that the random variable is sampled anew for each possible value of the counter i. The resulting individual is evaluated and compared to its parent, and the better one survives to become the parent of the next generation. For the (1 + 1)-ES a theoretical convergence model exists for two specific problems, the sphere model and the corridor model. The 1/5-success rule reflects the theoretical result that, to achieve fast convergence, on average one out of five mutations should result in higher fitness values [14].

To incorporate the principle of a population, [15] introduced the (µ + λ)-ES and the (µ, λ)-ES. This notation describes the selection mechanism and the number of parents µ and offspring λ. For the (µ + λ)-ES, the µ best individuals survive out of the union of the µ parents and the λ offspring. In the case of the (µ, λ)-ES, only the best µ offspring form the next parent generation. Both population-based ES start with a parent population of µ individuals. Each individual a consists of an n-dimensional vector x ∈ IR^n and l standard deviations σ ∈ IR^l_+; one individual is described as a = (x, σ) [16]. For both the (µ + λ)-ES and the (µ, λ)-ES, recombination is used for the creation of the offspring. Mostly, discrete recombination is used for the decision variables x_i and intermediate recombination is used for the standard deviations σ_i. Discrete recombination means that x_i is randomly taken from one parent, whereas intermediate recombination creates σ_i as the arithmetic mean of the parents' standard deviations. However, the main operator in ES is mutation. It is applied to every individual after recombination:

    σ'_k = σ_k · exp(τ' · N(0, 1) + τ · N_k(0, 1)),   k = 1, ..., l,
    x'_i = x_i + σ'_i · N_i(0, 1),   i = 1, ..., n.

The standard deviations σ_k are mutated using a multiplicative, logarithmically normally distributed process with the factors τ' and τ. Then, the decision variables x_i are mutated by using the modified σ'_i.
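The following Python sketch implements these two mutation equations, assuming one strategy parameter σ_k per object variable and Schwefel's proportionality factors with constant 1 (see subsection 4.2); all names are illustrative.

    import math
    import random

    def mutate(x, sigma):
        """Self-adaptive ES mutation: first mutate the step sizes, then x."""
        n = len(x)
        tau_prime = 1.0 / math.sqrt(2.0 * n)           # global learning rate
        tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))      # coordinate-wise rate
        g = random.gauss(0.0, 1.0)                     # one N(0,1) draw per individual
        new_sigma = [s * math.exp(tau_prime * g + tau * random.gauss(0.0, 1.0))
                     for s in sigma]
        new_x = [xi + si * random.gauss(0.0, 1.0)
                 for xi, si in zip(x, new_sigma)]
        return new_x, new_sigma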

This mutation mechanism enables the ES to evolve its own strategy parameters during the search, exploiting an implicit link between an appropriate internal model and good fitness values. One of the major advantages of ES is seen in its ability to incorporate the most important parameters of the strategy, e.g., the standard deviations, into the search process. Therefore, optimization not only takes place on the object variables, but also on the strategy parameters according to the actual local topology of the objective function. This capability is called self-adaption.

4.2 Adjustment of Parameters

Over time, many recommendations for choosing the ES parameters have been developed, mainly for the simple sphere and corridor models. We want to investigate whether these recommendations also hold true for simple one-max tree problems represented using the real-valued NetKey encoding.

When using the (1 + 1)-ES there are two possibilities for choosing the standard deviation σ. It can either be fixed to some value or adapted according to the 1/5-success rule. For the sphere and corridor models this rule results in fastest convergence. However, sometimes the probability of success cannot exceed 1/5. For problems where the objective function has discontinuous first partial derivatives, or at the edge of the allowed search space, the 1/5-success rule does not work properly. Especially in the latter case, the success rule progressively forces the sequence of iteration points nearer to the boundary, and the step lengths are continuously reduced without the optimum being approached with comparable accuracy [17].

[Fig. 1. Standard deviation σ over the number of generations. We use the 1/5-success rule and a (1 + 1)-ES for the 8-node one-max tree problem.]

Figure 3(a) and Figure 1 illustrate the problems of the 1/5-success rule when using ES for solving an 8-node one-max tree problem. The plots show the fitness and the standard deviation σ over the number of generations; the standard deviation was initialized to a fixed value and we performed several independent runs. Due to the success rule, the standard deviation is continuously reduced and we get early convergence. The same results have been obtained for larger 16- and 32-node problem instances.

This behavior of the (1 + 1)-ES can be explained by examining the NetKey encoding. In section 2 we saw that only n − 1 out of the n(n − 1)/2 links are used for constructing the tree. Therefore, a mutation of one allele often does not result in a change of the represented tree. Many mutations do not result in a different phenotype but only change the genotype. However, the 1/5-rule assumes that every mutation results in a different phenotype and that about every fifth new phenotype is superior to its parent. Therefore, the one-max tree problem is more difficult than the fully easy sphere and corridor models, and the 1/5-rule cannot be used.
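The following sketch of a (1 + 1)-ES with the 1/5-success rule makes this failure mode concrete: on NetKeys many mutations leave the phenotype, and hence the fitness, unchanged, so the measured success rate stays below 1/5 and σ shrinks towards zero. The adaptation interval and the adjustment factor are illustrative choices, not values from the paper; evaluate() stands for the cost of a decoded key sequence, as sketched in section 3.

    import random

    def one_plus_one_es(keys, evaluate, generations=200, sigma=1.0,
                        adapt_interval=10, factor=0.85):
        """(1 + 1)-ES on a key sequence with Rechenberg's 1/5-success rule."""
        best, best_cost = keys[:], evaluate(keys)
        successes = 0
        for g in range(1, generations + 1):
            child = [k + random.gauss(0.0, sigma) for k in best]
            cost = evaluate(child)
            if cost < best_cost:            # minimization: success
                best, best_cost = child, cost
                successes += 1
            if g % adapt_interval == 0:     # apply the 1/5-success rule
                if successes / adapt_interval > 0.2:
                    sigma /= factor         # many successes: enlarge steps
                else:
                    sigma *= factor         # few successes: shrink steps
                successes = 0
        return best, best_cost, sigma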

Instead, we can use a fixed standard deviation σ. In Figure 3(b) we show the fitness after a fixed number of generations over the standard deviation σ for the 8- and 16-node one-max tree problems. The results indicate that the (1 + 1)-ES shows the best performance for an intermediate fixed standard deviation. Larger standard deviations do not result in a faster convergence; instead, the search becomes random. With smaller standard deviations we also do not get better solutions, because with small standard deviations of mutation we only make slow progress.

To overcome the problem of the (1 + 1)-ES getting stuck in local optima, population-based ES approaches like the (µ + λ)-ES and the (µ, λ)-ES have been proposed. We want to examine how one can adjust the strategy parameters τ and τ'. The standard deviations are mutated using a multiplicative, logarithmically normally distributed process. The logarithmic normal distribution is motivated as follows: a multiplicative modification process for the standard deviations guarantees positive values for σ, and smaller modifications must occur more often than larger ones [18]. Because the factors τ and τ' are robust parameters, [18] suggests setting them as follows:

    τ ∝ 1/sqrt(2·sqrt(n)),   τ' ∝ 1/sqrt(2n).

Newer investigations indicate that optimal adjustments of both factors lie in a small interval of values [19, 20]. τ and τ' can be interpreted in the sense of learning rates as in artificial neural networks, and preliminary experiments with proportionality factors indicate that the search process can be tuned for particular objective functions by modifying these factors. In Figure 2 we investigated for the (µ + λ)-ES whether the recommendations of Schwefel and Kursawe are also valid for the 8-node one-max tree problem. The plots show how the best fitness after a fixed number of generations depends on τ and τ'. The results confirm the recommendation from Kursawe to initialize both parameters in a small interval; the best settings of τ and τ' for the (µ + λ)-ES and the (µ, λ)-ES lie in this range.

The next part of our investigation focuses on the optimal proportion of µ parents to λ offspring to maximize the convergence velocity. [17] proposed a (1, 5)-ES or a (1, 6)-ES as nearly optimal for the sphere and corridor models. Figure 3(c) ((µ + λ)-ES) and Figure 3(d) ((µ, λ)-ES) show the fitness over the number of generations for the 8-node one-max tree problem. We used a fixed population size N = µ + λ, kept the strategy parameters τ and τ' and the initial standard deviation σ constant, and performed several runs for every parameter setting. The results show that small ratios of µ/λ, down to about 1/7, result in good performance for the (µ + λ)-ES and the (µ, λ)-ES. These results confirm the recommendations from [21] and [16]. The investigations for the sphere model indicated that a ratio of µ/λ ≈ 1/7 is optimal concerning the accelerating effect of self-adaption. This ratio also provides the basic parameterization instrument for controlling the character of the search: decreasing µ/λ emphasizes path-oriented search and convergence velocity, while increasing µ/λ leads to a more volume-oriented search.
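The only difference between the two population-based schemes is whether the parents compete with their offspring. A minimal sketch of one generation, reusing mutate() from subsection 4.1 and omitting recombination for brevity (the experiments above do use it):

    import random

    def es_generation(parents, lam, evaluate, plus=True):
        """parents: list of (x, sigma) pairs; returns the mu survivors."""
        mu = len(parents)
        offspring = []
        for _ in range(lam):
            x, sigma = random.choice(parents)
            offspring.append(mutate(x, sigma))
        # (mu + lambda): parents compete; (mu, lambda): offspring only.
        pool = parents + offspring if plus else offspring
        pool.sort(key=lambda ind: evaluate(ind[0]))    # minimization
        return pool[:mu]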

[Fig. 2. Best fitness at the end of the run over the strategy parameters τ and τ' for the (µ + λ)-ES. In Figure 2(a) we fixed τ' and varied τ, and in Figure 2(b) we fixed τ and varied τ'. We used fixed values of N, σ, and µ/λ, a fixed number of generations, and several runs per plot.]

Finally, we want to examine which population size N provides optimal convergence velocity. The population size mainly depends on the representation of the individuals and the optimization problem. Guidelines for choosing proper population sizes N when using NetKeys for the one-max tree problem with selectorecombinative GAs were given in [5]. In Figure 3(e) we compare the fitness over the number of generations for the (µ + λ)-ES for different population sizes N = µ + λ. We used the same values of τ, τ', σ, and µ/λ as before and performed several runs for every parameter setting. The results reveal that for an 8-node problem a small population size N is enough to allow the ES to find the optimal solution reliably and fast.

Our investigations indicate that the simple 1/5-rule for the (1 + 1)-ES from [14] does not work when using NetKeys. However, when using the (µ + λ)-ES or the (µ, λ)-ES, the recommendations for the simple sphere and corridor models from [18], [19], and [17] can also be used for the one-max tree problem using NetKeys. The existing guidelines help us to choose proper strategy parameters τ and τ' and the ratio µ/λ. For further information about the use of ES for tree problems using the NetKey encoding the reader is referred to [22].

4.3 A Comparison to Genetic Algorithms for the One-Max Tree Problem

After identifying optimal strategy parameters for ES, we want to compare the performance of ES with GAs for the one-max tree problem using NetKeys. For both optimization methods, ES and GA, we use uniform crossover. For the GA we implemented a roulette-wheel selection scheme and used a fixed population size N. Mutation in the context of a GA means that the value of one key is randomly changed.
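A sketch of these GA operators on NetKey vectors, as far as the text specifies them: uniform crossover, mutation of a single key, and roulette-wheel selection. Because the one-max tree problem is a minimization problem, the sketch turns costs into positive selection weights; that transformation is our own assumption, not taken from the paper.

    import random

    def uniform_crossover(p1, p2):
        """Each key is taken from either parent with equal probability."""
        return [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]

    def mutate_one_key(keys):
        """GA mutation on NetKeys: redraw the value of one random key."""
        child = keys[:]
        child[random.randrange(len(child))] = random.random()
        return child

    def roulette_select(population, costs):
        """Roulette-wheel selection; lower cost gets a larger slice."""
        weights = [max(costs) - c + 1e-9 for c in costs]
        return random.choices(population, weights=weights, k=1)[0]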

[Fig. 3. Adjustment of the ES parameters for the 8-node one-max tree problem. All plots show the fitness over the number of generations, except Figure 3(b), which shows the fitness after a fixed number of generations over σ. (a) Performance of the (1 + 1)-ES using the 1/5-success rule. (b) (1 + 1)-ES using fixed σ. (c) (µ + λ)-ES for different µ/λ ratios. (d) (µ, λ)-ES for different µ/λ ratios. (e) (µ + λ)-ES for different N = µ + λ. (f) Comparison between ES (a (1 + 1)-ES, a (µ, λ)-ES, and a (µ + λ)-ES) and two GA variants with different crossover and mutation probabilities.]

As before, we used for the ES the same values of τ, τ', σ, and N = µ + λ, and performed several runs for every parameter setting. Figure 3(f) compares the performance of ES and GAs for the one-max tree problem. We plot the fitness over the number of generations. The results show that the (µ + λ)-ES has the highest performance. The (1 + 1)-ES gets stuck and is not able to find the optimal solution.

5 Conclusions

In this paper we extended the use of evolution strategies to combinatorial tree problems. Evolution strategies are designed for continuous optimization problems and can be applied to trees when using the continuous network random key (NetKey) representation. We examined for the small 8-node one-max tree problem how to adjust the parameters of the (1 + 1)-, (µ + λ)-, and (µ, λ)-ES, and compared their performance to a simple GA. The results showed that the recommendations regarding the adjustment of the ES parameters (τ, τ', and µ/λ) for the simple sphere and corridor models can also be used for the easy one-max tree problem when using the NetKey encoding. Only the 1/5-success rule for the (1 + 1)-ES does not hold true for the one-max tree problem, because most of the mutations do not change the represented tree. Therefore, the strategy parameter σ is continuously reduced and the algorithm gets stuck.

The results indicate that existing theory about ES can often help in finding good parameter settings for new types of problems. We want to encourage researchers, when developing new representations or techniques, to first look at existing theory, to check whether it can be used advantageously, and not to reinvent the wheel.

References

1. H.-P. Schwefel. Kybernetische Evolution als Strategie der experimentellen Forschung in der Strömungstechnik. Master's thesis, Technische Universität Berlin, 1965.
2. I. Rechenberg. Cybernetic solution path of an experimental problem. Technical report, Royal Aircraft Establishment, Library Translation, Farnborough, Hants., UK, 1965.
3. H.-P. Schwefel. Experimentelle Optimierung einer Zweiphasendüse. Bericht 35, AEG Forschungsinstitut Berlin, Projekt MHD-Staustrahlrohr, 1968.
4. J. H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI, 1975.
5. F. Rothlauf, D. E. Goldberg, and A. Heinzl. Network random keys - a tree network representation scheme for genetic and evolutionary algorithms. Technical Report No. 8/2000, University of Bayreuth, Germany, 2000. To be published in Evolutionary Computation.
6. J. C. Bean. Genetics and random keys for sequencing and optimization. Technical Report 92-43, Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI, June 1992.

7. F. Rothlauf. Towards a Theory of Representations for Genetic and Evolutionary Algorithms: Development of Basic Concepts and their Application to Binary and Tree Representations. PhD thesis, University of Bayreuth, Germany, 2001.
8. C. C. Palmer. An approach to a problem in network design using genetic algorithms. PhD thesis, Polytechnic University, Troy, NY, 1994.
9. H. Prüfer. Neuer Beweis eines Satzes über Permutationen. Archiv für Mathematik und Physik, 27:742-744, 1918.
10. G. R. Raidl. An efficient evolutionary algorithm for the degree-constrained minimum spanning tree problem. In Proceedings of the IEEE International Conference on Evolutionary Computation, Piscataway, NJ, 2000. IEEE.
11. F. Abuali, R. Wainwright, and D. Schoenefeld. Determinant factorization and cycle basis: Encoding schemes for the representation of spanning trees on incomplete graphs. In Proceedings of the 1995 ACM/SIGAPP Symposium on Applied Computing, Nashville, TN, February 1995. ACM Press.
12. R. Hamming. Coding and Information Theory. Prentice-Hall, 1980.
13. D. E. Goldberg, K. Deb, and D. Thierens. Toward a better understanding of mixing in genetic algorithms. Journal of the Society of Instrument and Control Engineers, 32(1):10-16, 1993.
14. I. Rechenberg. Bionik, Evolution und Optimierung. Naturwissenschaftliche Rundschau, 26:465-472, 1973.
15. H.-P. Schwefel. Evolutionsstrategie und numerische Optimierung. PhD thesis, Technical University of Berlin, 1975.
16. T. Bäck. Evolutionary Algorithms in Theory and Practice. Oxford University Press, New York, 1996.
17. H.-P. Schwefel. Evolution and Optimum Seeking. Wiley & Sons, New York, 1995.
18. H.-P. Schwefel. Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie, volume 26 of Interdisciplinary Systems Research. Birkhäuser, Basel, 1977.
19. F. Kursawe. Grundlegende empirische Untersuchungen der Parameter von Evolutionsstrategien - Metastrategien. PhD thesis, University of Dortmund, 1999.
20. V. Nissen. Einführung in evolutionäre Algorithmen: Optimierung nach dem Vorbild der Evolution. Vieweg, Wiesbaden, 1997.
21. H.-P. Schwefel. Collective phenomena in evolutionary systems. In P. Checkland and I. Kiss, editors, Problems of Constancy and Change - The Complementarity of Systems Approaches to Complexity, Budapest, 1987. Papers presented at the 31st Annual Meeting of the International Society for General System Research.
22. B. Schindler. Einsatz von Evolutionären Strategien zur Optimierung baumförmiger Kommunikationsnetzwerke. Master's thesis, Universität Bayreuth, Lehrstuhl für Wirtschaftsinformatik, May 2001.