A Decoder-based Evolutionary Algorithm for Constrained Parameter Optimization Problems

Slawomir Koziel (1) and Zbigniew Michalewicz (2)

(1) Department of Electronics, Telecommunication and Informatics, Technical University of Gdansk, ul. Narutowicza 11/12, 80-952 Gdansk, Poland; koziel@ue.eti.pg.gda.pl
(2) Department of Computer Science, University of North Carolina, Charlotte, NC 28223, USA; zbyszek@uncc.edu

Abstract. Several methods have been proposed for handling nonlinear constraints by evolutionary algorithms for numerical optimization problems; a survey paper [7] provides an overview of various techniques and some experimental results, and proposes a set of eleven test problems. Recently a new, decoder-based approach for solving constrained numerical optimization problems was proposed [2, 3]. The method defines a homomorphous mapping between the n-dimensional cube and the feasible search space. In [3] we demonstrated the power of this new approach on several test cases. However, it is possible to enhance the performance of the system even further by introducing the additional concepts of (1) nonlinear mappings with an adaptive parameter, and (2) adaptive location of the reference point of the mapping.

1 Introduction

The nonlinear parameter optimization problem is defined as follows:

    find x ∈ S ⊂ R^n such that f(x) = min { f(y) : y ∈ S },    (1)
    subject to g_j(x) ≤ 0, for j = 1, ..., q,    (2)

where f and the g_j are real-valued functions on S; S is a search space defined as a Cartesian product of the domains of the variables x_i (1 ≤ i ≤ n). The set of feasible points (i.e., points satisfying the constraints (2)) is denoted F. (Note that we do not consider equality constraints; if necessary, an equality h(x) = 0 can be replaced by a pair of inequalities h(x) ≤ δ and −h(x) ≤ δ for some small δ > 0.)

Several methods have been proposed for handling nonlinear constraints by evolutionary algorithms for numerical optimization problems. The recent survey paper [7] classifies them into four categories (preservation of feasibility, penalty functions, searching for feasibility, and other hybrids). However, there is one central issue that all these methods have to address: whether to allow the processing of infeasible solutions. This is the most important issue to resolve. Many constraint-handling methods process infeasible solutions (e.g., various penalty-based methods), while many other techniques process feasible solutions only (e.g., methods based on feasibility-preserving operators).
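To make the formulation above concrete, here is a minimal Python sketch of the problem structure of equations (1)-(2), together with the inequality pair that replaces an equality constraint; the toy objective and constraint at the bottom are illustrative placeholders, not functions from the test suite of [7].

    from typing import Callable, Sequence

    Vector = Sequence[float]

    def is_feasible(x: Vector, constraints: Sequence[Callable[[Vector], float]]) -> bool:
        """x is feasible iff g_j(x) <= 0 for every inequality constraint g_j."""
        return all(g(x) <= 0.0 for g in constraints)

    def equality_as_inequalities(h: Callable[[Vector], float], delta: float = 1e-4):
        """Replace h(x) = 0 by the pair h(x) - delta <= 0 and -h(x) - delta <= 0."""
        return [lambda x: h(x) - delta, lambda x: -h(x) - delta]

    # Illustrative toy problem on S = [0, 5] x [0, 5]:
    f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 3.0) ** 2
    g = lambda x: x[0] + x[1] - 4.0        # feasible region: x_1 + x_2 <= 4
    print(is_feasible([1.0, 2.0], [g]))    # True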

In general, restricting the search to the feasible region seems a very elegant way to treat constrained problems. For example, in [5] the algorithm maintains feasibility of linear constraints using a set of closed operators which convert a feasible solution into another feasible solution. A similar approach for the nonlinear transportation problem is described in [4], where specialized operators transform a feasible solution matrix (or matrices) into another feasible solution. This is also the case for many evolutionary systems developed for the traveling salesman problem [4], where specialized operators maintain the feasibility of permutations, as well as for many other combinatorial optimization problems.

However, for numerical optimization problems, only special cases allowed the use of either specialized operators which preserve the feasibility of solutions or repair algorithms which attempt to convert an infeasible solution into a feasible one. For example, a possible use of a repair algorithm was described in [6], but in that approach it was necessary to maintain two separate populations with feasible and infeasible solutions: a set of reference feasible points was used to repair infeasible points. Consequently, most evolutionary techniques for numerical optimization problems with constraints are based on penalties. However, highly nonlinear constraints still present difficulties for evolutionary algorithms, as penalty parameters or strategies are then difficult to adjust.

In this paper we investigate some properties of a recently proposed approach [3] for solving constrained numerical optimization problems which is based on a homomorphous mapping between the n-dimensional cube [−1, 1]^n and the feasible search space. This approach constitutes an example of a decoder-based approach, where the mapping allows the processing of feasible solutions only. (Actually, this is the first approach of this type; until recently, mappings, or decoders, were applied only to discrete optimization problems.) The first results [3] indicated the huge potential of this approach; the proposed method does not require any additional parameters, does not require the evaluation of infeasible solutions, and does not require any specialized operators to maintain feasibility or to search the boundary of the feasible region [9], [8]. Moreover, any standard evolutionary algorithm (e.g., a binary-coded genetic algorithm or an evolution strategy) can be used in connection with the mapping. On top of that, the method guarantees a feasible solution, which is not always the case for other methods.

The paper is organized as follows. The following section presents the method, whereas section 3 discusses the research issues of this paper. Section 4 presents some experimental results and section 5 concludes the paper.

2 The homomorphous mapping

The idea behind this technique is to develop a homomorphous mapping φ which transforms the n-dimensional cube [−1, 1]^n into the feasible region F of the problem [3]. Note that F need not be convex; it might be concave or even consist of disjoint (non-convex) regions. The search space S is defined as a Cartesian product of the domains of the variables of the problem, l(i) ≤ x_i ≤ u(i) for 1 ≤ i ≤ n, whereas the feasible part F of the search space is defined by the problem-specific constraints: the inequalities (2) of the previous section.

Assume a solution r_0 is feasible (i.e., r_0 ∈ F). Then any boundary point s of the search space S defines a line segment L between r_0 and s (figure 1 illustrates the case). Note that such a line segment may intersect the boundary of the feasible search space F in more than one point.

[Fig. 1. A line segment L in a non-convex space F (two-dimensional case).]

Let us define an additional one-to-one mapping g between the cube [−1, 1]^n and the search space S. The mapping g : [−1, 1]^n → S can be defined as g(y) = x, where

    x_i = y_i (u(i) − l(i))/2 + (u(i) + l(i))/2, for i = 1, ..., n.

Indeed, for y_i = −1 the corresponding x_i = l(i), and for y_i = 1, x_i = u(i). A line segment L between any reference point r_0 ∈ F and a point s at the boundary of the search space S is defined as

    L(r_0, s) = r_0 + t · (s − r_0), for 0 ≤ t ≤ 1.

Clearly, if the feasible search space F is convex, then the above line segment intersects the boundary of F in precisely one point, for some t_0 ∈ [0, 1]. (Convexity of F is not necessary; it is sufficient to assume the existence of a reference point r_0 such that every line segment originating in r_0 intersects the boundary of F in precisely one point. This requirement is satisfied, of course, for any convex set F.) Consequently, for convex feasible search spaces F, it is possible to establish a one-to-one mapping φ : [−1, 1]^n → F as follows:

    φ(y) = r_0 + y_max · t_0 · (g(y / y_max) − r_0)  if y ≠ 0,
    φ(y) = r_0                                       if y = 0,

where r_0 ∈ F is a reference point and y_max = max_{i=1..n} |y_i|. Figure 2 illustrates the transformation φ. On the other hand, if the feasible search space F is not convex, then the line segment L may intersect the boundary of F in many points (see figure 1).
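A minimal sketch of the two maps just defined, assuming NumPy; the function names g_scale and phi_convex are ours, and the boundary-intersection parameter t_0 is taken as given here (its computation by binary search is described at the end of this section).

    import numpy as np

    def g_scale(y, lower, upper):
        """The linear map g: x_i = y_i*(u(i) - l(i))/2 + (u(i) + l(i))/2."""
        y, lower, upper = (np.asarray(v, dtype=float) for v in (y, lower, upper))
        return y * (upper - lower) / 2.0 + (upper + lower) / 2.0

    def phi_convex(y, r0, lower, upper, t0):
        """Convex-case phi: r0 + y_max * t0 * (g(y/y_max) - r0); r0 for y = 0."""
        y, r0 = np.asarray(y, dtype=float), np.asarray(r0, dtype=float)
        y_max = float(np.max(np.abs(y)))
        if y_max == 0.0:
            return r0
        s = g_scale(y / y_max, lower, upper)   # boundary point of S
        return r0 + y_max * t0 * (s - r0)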

[Fig. 2. The mapping φ from the cube [−1, 1]^n into the space F (two-dimensional case), with the particular steps of the transformation.]

Let us consider an arbitrary point y ∈ [−1, 1]^n and a reference point r_0 ∈ F. A line segment L between the reference point r_0 and the point s = g(y / y_max) at the boundary of the search space S is defined as before; however, instead of a single interval of feasibility [0, t_0] as for convex search spaces, we may have several intervals of feasibility: [t_1, t_2], ..., [t_{2k−1}, t_{2k}]. Assume there are altogether k subintervals of feasibility for such a line segment, and the t_i mark their limits. Clearly, t_1 = 0, t_i < t_{i+1} for i = 1, ..., 2k − 1, and t_{2k} ≤ 1. Thus it is necessary to introduce an additional mapping which transforms the interval [0, 1] into the union of the intervals [t_{2i−1}, t_{2i}]. However, we define such a mapping γ between (0, 1] and the union of the half-open intervals (t_{2i−1}, t_{2i}]:

    γ : (0, 1] → ∪_{i=1..k} (t_{2i−1}, t_{2i}].

Note that, due to this change, one boundary point (from each interval, 1 ≤ i ≤ k) is lost. However, this is not a serious problem, since we can approach the lost points with arbitrary precision. On the other hand, the benefits are clear: it is possible to "glue together" intervals which are open at one end and closed at the other; additionally, such a mapping is one-to-one. There are many possibilities for defining such a mapping; we have used the following. First, let us define the reverse mapping δ : ∪_{i=1..k} (t_{2i−1}, t_{2i}] → (0, 1] as follows:

    δ(t) = (t − t_{2i−1} + Σ_{j=1..i−1} d_j) / d,

where d_j = t_{2j} − t_{2j−1}, d = Σ_{j=1..k} d_j, and t_{2i−1} < t ≤ t_{2i}. Clearly, the mapping γ is the reverse of δ:

    γ(a) = t_{2j−1} + d_j · (a − δ(t_{2j−1})) / (δ(t_{2j}) − δ(t_{2j−1})),
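A minimal sketch of the pair γ/δ, assuming the interval limits t_1, ..., t_{2k} are already known; the function names are ours.

    def delta_map(t, limits):
        """Reverse mapping delta: union of (t_{2i-1}, t_{2i}] -> (0, 1].
        limits = [t_1, t_2, ..., t_{2k}] lists the subinterval limits in order."""
        pairs = list(zip(limits[0::2], limits[1::2]))
        d = sum(b - a for a, b in pairs)       # total feasible length
        acc = 0.0                              # sum of d_j over earlier intervals
        for a, b in pairs:
            if a < t <= b:
                return (t - a + acc) / d
            acc += b - a
        raise ValueError("t lies outside every feasibility subinterval")

    def gamma_map(a, limits):
        """Forward mapping gamma: (0, 1] -> union of (t_{2i-1}, t_{2i}];
        the inverse of delta_map."""
        pairs = list(zip(limits[0::2], limits[1::2]))
        d = sum(b - lo for lo, b in pairs)
        acc = 0.0
        for lo, b in pairs:
            dj = b - lo
            if a <= (acc + dj) / d:            # smallest j with a <= delta(t_{2j})
                return lo + a * d - acc
            acc += dj
        return pairs[-1][1]

    # Round trip on two feasibility intervals [0, 0.2] and [0.5, 0.7]:
    # gamma_map(delta_map(0.6, [0, 0.2, 0.5, 0.7]), [0, 0.2, 0.5, 0.7]) == 0.6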

where j is the smallest index such that a ≤ δ(t_{2j}).

Now we are ready to define the mapping φ which is the essence of the method of transforming the constrained optimization problem into an unconstrained one, for an arbitrary feasible set F. The mapping φ is given by the following formula:

    φ(y) = r_0 + t_0 · (g(y / y_max) − r_0)  if y ≠ 0,
    φ(y) = r_0                               if y = 0,

where r_0 ∈ F is a reference point, y_max = max_{i=1..n} |y_i|, and t_0 = γ(y_max).

Finally, it is necessary to consider a method for finding the points of intersection t_i. Let us consider any boundary point s of S and the line segment L determined by this point and a reference point r_0 ∈ F. There are m constraints g_i(x) ≤ 0, and each of them can be represented as a function θ_i of one independent variable t (for a fixed reference point r_0 ∈ F and boundary point s of S):

    θ_i(t) = g_i(L(r_0, s)) = g_i(r_0 + t · (s − r_0)), for 0 ≤ t ≤ 1 and i = 1, ..., m.

As stated earlier, the feasible region need not be convex, so there may be more than one point of intersection between the segment L and the boundary of the set F. Therefore, let us partition the interval [0, 1] into v subintervals [v_{j−1}, v_j], where v_j − v_{j−1} = 1/v (1 ≤ j ≤ v), so that the equations θ_i(t) = 0 have at most one solution in every subinterval. (The density of the partition is determined by the parameter v, which is adjusted experimentally; in all experiments reported in section 4, v = 20.) In that case the points of intersection can be determined by binary search. Once the intersection points between the line segment L and all constraints g_i(x) ≤ 0 are known, it is quite easy to determine the intersection points between this line segment L and the boundary of the feasible set F.

3 Adaptation issues

In [3] we reported on experimental results of the system based on the mapping described in the previous section. The system was based on Gray coding with 25 bits per variable, and incorporated proportional selection (no elitism), function scaling, and standard operators (flip mutation and 1-point crossover). All parameters were fixed: pop size = 70, generation gap = 100%, and p_c = 0.9. The only non-standard feature incorporated into the system (to increase its fine-tuning capabilities [1]) was a variable probability of mutation:

    p_m(t) = p_m(0) − (p_m(0) − p_m(T)) · (t/T)^r,

where t and T are the current and maximal generation numbers, respectively. In all experiments, p_m(0) = 0.005, r = 4, and p_m(T) = 0.00005.

The system provided very good results [3], better than those of any other constraint-handling method reported in the literature. Yet there were some additional possibilities for further improvement, as well as unresolved issues, which we address in this paper.
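A minimal sketch of the two mechanical pieces just described: locating the intersection parameters t_i by partitioning [0, 1] and bisecting (end of section 2), and the variable mutation probability above. The function names are ours; theta stands for any of the θ_i.

    def intersections(theta, v=20, tol=1e-10):
        """Roots of theta(t) on [0, 1]: split into v subintervals (v = 20 in the
        reported experiments), each assumed to hold at most one sign change,
        and locate each crossing by binary search."""
        roots = []
        for j in range(v):
            lo, hi = j / v, (j + 1) / v
            f_lo = theta(lo)
            if f_lo == 0.0:
                roots.append(lo)
                continue
            if f_lo * theta(hi) > 0.0:
                continue                      # no sign change in this subinterval
            while hi - lo > tol:              # bisection
                mid = 0.5 * (lo + hi)
                fm = theta(mid)
                if f_lo * fm <= 0.0:
                    hi = mid
                else:
                    lo, f_lo = mid, fm
            roots.append(0.5 * (lo + hi))
        return roots

    def p_m(t, T, pm0=0.005, pmT=0.00005, r=4):
        """Variable mutation probability: p_m(0) - (p_m(0) - p_m(T)) * (t/T)**r."""
        return pm0 - (pm0 - pmT) * (t / T) ** r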

First of all, it is important to investigate the role of the reference point r_0. Instead of keeping this point static during the evolutionary process, it can change its location; in particular, it can "follow" the best solution found so far. In that way, the reference point can "adapt" itself to the current state of the search. One of the aims of this paper is to compare the proposed method with a static versus a dynamic reference point. In the latter case, the quotient of the total number of generations and the number of changes of the reference point during the run gives the number of generations between consecutive changes; the new reference point is the best individual of the current generation.

Note that a change of the reference point r_0 changes the phenotypes of the genotypes in the population. Thus it might be worthwhile to consider an additional option: after each change of the reference point, all genotypes in the population are modified so as to yield the same phenotypes as before the change. For example, if a genotype (0101101...0111) corresponded to the phenotype (−2.46610039, 1.09535518) just before the change of the reference point r_0, then after the change the genotype is modified in such a way that its phenotype is still (−2.46610039, 1.09535518) for the new reference point.

Also, in the proposed method it is important to investigate a non-uniform distribution of the values of the vectors y ∈ [−1, 1]^n; this can be achieved, for example, by introducing an additional mapping ω : [−1, 1]^n → [−1, 1]^n:

    ω(y) = y′, where y′_i = a · y_i,

where a is a parameter of the mapping and 0 < a ≤ 1/y_max. Such exploration of a non-uniform distribution of y provides additional possibilities for tuning the search:

- an increase in the value of the parameter a results in selecting new vectors y′ closer to the boundary of the feasible part of the search space. Thus it is possible to use this approach to search the boundary of the feasible search space (e.g., instead of using specialized boundary operators [9]; note, however, that in general, e.g. for non-convex feasible search spaces, only a part of the boundary will be explored).
- a decrease in the value of the parameter a results in selecting new vectors y′ closer to zero (i.e., the corresponding new search point is closer to the reference point). This may work very well with the mechanism of adaptive change of the reference point: the system explores points closer to the reference point, which in turn "follows" the best solution found so far.

Of course, there are many mappings which introduce a non-uniform distribution of the values of the vectors y; in this paper we experimented with the following one:

    ω(y) = y′, where y′_i = y_i · y_max^(k−1),

where k > 1 is a parameter. Clearly, a larger k moves the new search points closer to the reference point (this corresponds to a decrease in the value of the parameter a, of course). However, such a mapping concentrates the search around the reference point, and hence is not helpful in cases where the optimum solution is located on the boundary of the feasible part of the search space.
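A minimal sketch of the two variants of ω, assuming NumPy; the names omega_scale and omega_power are ours.

    import numpy as np

    def omega_scale(y, a):
        """omega(y)_i = a * y_i, with 0 < a <= 1/y_max. Values of a near 1/y_max
        push search points toward the boundary of F; small a pulls them toward r_0."""
        return a * np.asarray(y, dtype=float)

    def omega_power(y, k=3.0):
        """omega(y)_i = y_i * y_max**(k - 1), with k > 1. Since y_max <= 1, a
        larger k concentrates the search around the reference point."""
        y = np.asarray(y, dtype=float)
        y_max = float(np.max(np.abs(y)))
        return y * y_max ** (k - 1.0)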

Thus an additional option (direction of change) was considered: if a vector c represents the normalized direction of the last change of the reference point, then the constant parameter k is replaced by a variable k′, calculated (for every vector y) as follows:

    k′ = 1 + (k − 1) · (1 − cos²(c, y))  if cos(c, y) > 0,
    k′ = k                               if cos(c, y) ≤ 0.

Note that if the angle between c and y is close to zero, then cos(c, y) is close to one and, consequently, the value of the parameter k′ is close to one (a short sketch of this rule follows Table 1 below).

4 Experimental results

Ten versions of an evolutionary system were considered (see Table 1).

    Version | Changes of r_0 | Change of | Value | Direction-of-change
    number  | during run     | genotype  | of k  | option
    --------+----------------+-----------+-------+--------------------
     1      |  0             | N/A       | 1.0   | N/A
     2      |  3             | N         | 1.0   | N
     3      |  3             | N         | 3.0   | N
     4      |  3             | N         | 3.0   | Y
     5      | 20             | N         | 1.0   | N
     6      | 20             | N         | 3.0   | N
     7      | 20             | N         | 3.0   | Y
     8      |  3             | Y         | 1.0   | N
     9      | 20             | Y         | 3.0   | N
    10      | 20             | Y         | 3.0   | Y

Table 1. Ten versions of the evolutionary system. For each version we report the number of changes of the reference point during the run (0 corresponds to the case of no change, in which case some other options are not applicable, N/A), whether the option of re-coding the genotype was used (Y or N), the value of the scaling parameter k, and whether the direction-of-change option was used (Y or N).

The experiments were made for four functions from [7]: G6, G7, G9, and G10. (These functions have 2, 10, 7, and 8 variables, respectively, with between 2 and 8, mainly nonlinear, constraints. Most constraints are active at the optimum.) All results are given in Tables 2-3. For each function 10 runs were performed; the tables report the best solution found over all runs, the average value, and the worst one. For G6, all runs had 500 generations, whereas all runs for the remaining functions had 5,000 generations.
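As promised above, a minimal sketch of the direction-of-change rule, assuming NumPy; the function name k_prime is ours, and cos(c, y) is taken as the cosine of the angle between the two vectors.

    import numpy as np

    def k_prime(c, y, k=3.0):
        """k' = 1 + (k - 1)*(1 - cos^2(c, y)) if cos(c, y) > 0, else k.
        c is the normalized direction of the last reference-point change."""
        c = np.asarray(c, dtype=float)
        y = np.asarray(y, dtype=float)
        denom = np.linalg.norm(c) * np.linalg.norm(y)
        cos_cy = float(c @ y) / denom if denom > 0.0 else 0.0
        if cos_cy > 0.0:
            return 1.0 + (k - 1.0) * (1.0 - cos_cy ** 2)
        return k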

    Version |  G6 minimum  |  G6 average  |  G6 maximum  | G7 minimum | G7 average | G7 maximum
    --------+--------------+--------------+--------------+------------+------------+-----------
     1      | -6961.806423 | -6403.744816 | -5658.854943 | 26.156504  | 34.014132  | 62.015826
     2      | -6961.813810 | -6949.220321 | -6880.366641 | 24.823462  | 29.702066  | 37.593063
     3      | -6961.813769 | -6961.616254 | -6959.862901 | 25.667881  | 31.635635  | 41.275908
     4      | -6961.811700 | -6961.119165 | -6955.609490 | 24.456143  | 27.501678  | 34.224130
     5      | -6961.813810 | -6959.199162 | -6936.007217 | 24.923346  | 29.034924  | 36.600579
     6      | -6962.041796 | -6954.089593 | -6887.142350 | 24.493854  | 27.846996  | 37.850277
     7      | -6961.813805 | -6961.814303 | -6961.813735 | 25.604691  | 27.765957  | 33.025607
     8      | -6961.813754 | -6926.097556 | -6605.883742 | 24.449495  | 27.451748  | 34.651248
     9      | -6961.689228 | -6960.275484 | -6953.448863 | 24.987889  | 27.657595  | 31.823738
    10      | -6961.813247 | -6960.794588 | -6958.289256 | 26.119342  | 27.744277  | 29.447646

Table 2. Results for G6 and G7. These are minimization problems; the optimum values of these functions are -6961.81381 and 24.3062091, respectively.

    Version | G9 minimum | G9 average | G9 maximum | G10 minimum | G10 average | G10 maximum
    --------+------------+------------+------------+-------------+-------------+-------------
     1      | 680.630511 | 680.660238 | 680.729387 | 7160.262971 | 8592.392352 | 11511.072437
     2      | 680.630542 | 680.636662 | 680.647153 | 7059.228161 | 7464.926353 |  8229.071491
     3      | 680.630181 | 680.636573 | 680.664618 | 7086.430306 | 7591.768786 |  9225.975846
     4      | 680.630392 | 680.637875 | 680.661187 | 7197.628211 | 7819.787329 |  8827.143414
     5      | 680.631795 | 680.633758 | 680.636254 | 7058.405010 | 7492.697550 |  8995.685583
     6      | 680.631554 | 680.644804 | 680.703939 | 7081.945176 | 7888.418244 |  9656.438311
     7      | 680.630826 | 680.634730 | 680.643466 | 7078.900133 | 7607.775554 |  8695.147494
     8      | 680.631036 | 680.677782 | 680.965273 | 7089.686242 | 7994.728714 |  9734.441891
     9      | 680.632734 | 680.635818 | 680.639192 | 7230.799808 | 7695.850259 |  8813.595674
    10      | 680.630492 | 680.638832 | 680.668193 | 7063.878216 | 7597.675949 |  8637.628629

Table 3. Results for G9 and G10. These are minimization problems; the optimum values of these functions are 680.6300573 and 7049.330923, respectively.

It was interesting to see that:

- for test case G6, the best results were obtained with versions 3, 4, 7, 9, and 10. In all five of these versions the value of the parameter k was set to 3.0; it seems that this factor had a major influence on the quality of the results. Note also that these five versions include all three versions in which the direction-of-change option was used (versions 4, 7, and 10). Also, the only difference between versions 6 and 7 was the use of this option: note the average scores of these two versions. In this case the option proved its usefulness. Similarly, the only difference between versions 3 and 6 was the number of changes of the reference point made during the run. For this particular test case, a higher value of this parameter was better in combination with the option of changing the genotype, and a lower value was better without this option (see versions 9 and 10 for the former case, and versions 3 and 4 for the latter).

- it is difficult to evaluate the performance of these versions on test case G7. A few versions reported good best results (out of ten runs), but the average values were less impressive. It seems that slightly better results were obtained with versions 4, 6, 8, and 9, but no interesting patterns emerged: for two of these versions the number of changes of the reference point during the run was 3, whereas for the other two it was 20; two of these versions used the option of changing the genotype and two did not; three versions used the higher value of the parameter k (k = 3) and one used k = 1; one version used the direction-of-change option.
- for test case G9 all versions gave very good results, so it was possible to judge their performance only on the basis of precision. The best versions (i.e., those whose best result was smaller than 680.631 and whose average result was smaller than 680.64) were versions 2, 3, 4, 7, and 10. This is consistent with our observations on test case G6, where almost the same subset was selected.
- for the (hardest) test case G10, the best versions were selected on the following basis: the best solution was smaller than 7100 and the average solution was smaller than 7700. Only five versions satisfied these criteria: versions 2, 3, 5, 7, and 10. Again, as for test cases G6 and G9, versions 3, 7, and 10 are among the best.

It seems that the three versions which gave the best overall performance are versions 3, 7, and 10. Judging from the characteristics of these versions, we may conclude that, in general: the higher value of the parameter k (k = 3) gives better results; a small number of changes of the reference point requires neither changes of genotypes nor the direction-of-change option; and if the number of changes of the reference point is larger, it is not important whether the genotypes in the population are adjusted (at each change) or not, but it is important to keep the direction-of-change option.

5 Conclusions

The results of these preliminary experiments are not, of course, conclusive. It is necessary to conduct a larger number of runs on a larger set of test cases (e.g., G1-G11, see [7]) to better understand the interactions among the various options available. It is also necessary to extend this preliminary study to a larger set of parameter values (different values of k, different numbers of changes of the reference point, etc.). Further, the connection between the type of the problem (size of the feasible search space, number of active constraints at the optimum, modality of the objective function) and the characteristics of the various versions discussed earlier must be studied carefully.

Results of some further experiments performed on problems G2 and G3 suggest that a change of the reference point is not always beneficial. For these functions, version #1 (no change of the reference point) gave the best results among all versions.

It seems that a change of the reference point is beneficial only for some types of functions; such a change should therefore be controlled by feedback from the search process. A preliminary version of a new system with adaptive change of the reference point gave the best performance on all the problems mentioned (from G2 to G10), making an appropriate number of changes (e.g., zero changes for G2 and G3) for the different problems. The connection between the number of changes and the characteristics of the problem will be studied and reported in the next (full) version of the paper. Also, a new version of the system based on a floating-point representation is currently being developed. Note that for such a system there would be no need to adjust the genotypes in the population, as the algorithm operates on phenotypes. A comparison between these systems (i.e., based on binary and floating-point representations) should provide additional clues.

Acknowledgments: This material is based upon work supported by grant 8 T11B 049 10 from the Polish State Committee for Scientific Research (KBN) and grant IRI-9725424 from the National Science Foundation.

References

1. Koziel, S. (1996). Non-uniform and non-stationary mutation in numerical optimization using genetic algorithms. Electronics and Telecomm. Quarterly, 42 (3), pp. 273-285.
2. Koziel, S. (1997). Evolutionary algorithms in constrained numerical optimization problems on convex spaces. Electronics and Telecomm. Quarterly, 43 (1), pp. 5-18.
3. Koziel, S. and Michalewicz, Z. (1997). Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. To appear in Evolutionary Computation, 1998.
4. Michalewicz, Z. (1996). Genetic Algorithms + Data Structures = Evolution Programs. New York: Springer-Verlag, 3rd edition.
5. Michalewicz, Z. and Janikow, C. Z. (1991). Handling constraints in genetic algorithms. In R. K. Belew and L. B. Booker (Eds.), Proceedings of the 4th International Conference on Genetic Algorithms, pp. 151-157. Morgan Kaufmann.
6. Michalewicz, Z. and Nazhiyath, G. (1995). Genocop III: A co-evolutionary algorithm for numerical optimization problems with nonlinear constraints. In D. B. Fogel (Ed.), Proceedings of the Second IEEE International Conference on Evolutionary Computation, pp. 647-651. IEEE Press.
7. Michalewicz, Z. and Schoenauer, M. (1996). Evolutionary computation for constrained parameter optimization problems. Evolutionary Computation, Vol. 4, No. 1, pp. 1-32.
8. Schoenauer, M. and Michalewicz, Z. (1996). Evolutionary computation at the edge of feasibility. In W. Ebeling and H.-M. Voigt (Eds.), Proceedings of the 4th Conference on Parallel Problem Solving from Nature, pp. 245-254. Springer-Verlag.
9. Schoenauer, M. and Michalewicz, Z. (1997). Boundary operators for constrained parameter optimization problems. In Proceedings of the 7th International Conference on Genetic Algorithms, pp. 320-329, July 1997.