Analyzing Mutation Schemes for Real-Parameter Genetic Algorithms


Kalyanmoy Deb
Department of Mechanical Engineering
Indian Institute of Technology Kanpur
Kanpur, PIN 208016, India
E-mail: deb@iitk.ac.in

Debayan Deb
Dept. of Computer Science and Engineering
Michigan State University
East Lansing, MI 48824, USA
E-mail: ronny3050@gmail.com

KanGAL Report Number 2012016

Abstract

Mutation is an important operator in genetic algorithms (GAs), as it ensures the maintenance of diversity in evolving GA populations. Real-parameter GAs (RGAs) handle real-valued variables directly, without resorting to a binary-string representation of the variables. Although RGAs were first suggested in the early nineties, the mutation operator is still implemented variable-wise and independently for each variable. In this paper, we investigate the effect of five different mutation schemes for RGAs with two different mutation operators. Based on extensive simulation studies, it is observed that a mutation clock implementation is computationally quick and also efficient in finding a solution close to the optimum on the four problems used in this study, for both mutation operators. Moreover, parametric studies of their associated parameters reveal suitable working ranges. Interestingly, both mutation operators with their respective optimal parameter settings are found to possess a similar inherent probability of offspring creation, a matter that is believed to be the reason for their superior working. This study signifies that the long-suggested mutation clock operator should be considered as a valuable mutation operator for RGAs.

1 Introduction

The mutation operator in a genetic algorithm (GA) is used primarily as a mechanism for maintaining diversity in the population [6, 8]. In contrast to a recombination operator, a mutation operator operates on only one evolving population member at a time and modifies it independently of the rest of the population members.
Although the mutation operator alone may not constitute an efficient search, along with a suitable recombination operator it plays an important role in making the overall search efficient [6]. Early researchers on GAs used binary strings to represent a real-valued variable. For an l-bit string mapped between specified variable bounds [a, b], the GA's search is limited to a discrete set of 2^l equi-spaced points within the bounds. In such a binary-coded GA (BGA), usually one-point or two-point crossover operators were used. For mutation, a bitwise mutation operator was used, which attempted to mutate every bit (alter the bit to its complement) with a probability p_m, independently of the outcome of mutation of the other bits. Parametric studies [] have shown that a mutation probability of p_m = 1/L (where L is the total number of bits used to represent all n variables) performed the best. Researchers have realized that there are certain shortcomings of using a BGA to solve real-parameter optimization problems. One of the reasons was the artificial discreteness associated with the coding mechanism, and another was the bias of mutated solutions towards certain parts of the search space. Starting in the early 1990s, real-parameter GAs (RGAs) [5, 3, 7, 9] were suggested. However, all of these studies concentrated on developing a recombination operator that would take two real values (parents) and recombine (or blend) them to create two new real values as offspring. The recombination operations were performed variable-wise or vector-wise. For RGAs, several mutation operators have also been suggested: random mutation [0], Gaussian mutation [], polynomial mutation [3, ], and others. The effect is to perturb the current variable value (parent) to a neighboring value (offspring). When operating on a multi-variable population member, one common strategy has been to mutate each variable, one at a time, with a pre-specified probability p_m using one of the above mutation operators.
Not much attention has been paid so far to analyzing whether this one-at-a-time mutation scheme is efficient, or whether other mutation schemes may be more efficient in RGAs. In this paper, we consider two different and commonly-used mutation operators (polynomial mutation and Gaussian mutation) and investigate the performance of five different mutation schemes, including the usual one-at-a-time scheme. The mutation schemes are compared based on computational time and the required number of function evaluations. Thereafter, a parametric study of the best-found mutation scheme is performed, to suggest not only an efficient mutation scheme but also an adequate range of its associated parameter values. After finding the optimal parameter settings, the corresponding probability distributions for creating offspring solutions are compared to reveal interesting observations.

In the remainder of this paper, we first describe the two mutation operators used in this study. Thereafter, we present the five different mutation schemes suggested here. Simulation results of a real-parameter GA with identical selection and recombination operators but different mutation schemes are compared against each other and against a no-mutation scheme. Interesting observations are made. The effects of mutation strength and mutation probability for both mutation operators are investigated for the winning mutation scheme. Finally, both mutation operators are compared based on their inherent probability distributions. Conclusions of the study are then made.

2 Polynomial Mutation in Real-Parameter GAs

Deb and Agrawal [3] suggested a polynomial mutation operator with a user-defined index parameter (η_m). Based on a theoretical study, they concluded that η_m induces an effect of a perturbation of O((b − a)/η_m) in a variable, where a and b are the lower and upper bounds of the variable. They also found that a value of η_m ∈ [20, 100] is adequate in most problems that they tried. In this operator, a polynomial probability distribution is used to perturb a solution in a parent's vicinity. The probability distribution on both the left and the right of a variable value is adjusted so that no value outside the specified range [a, b] is created by the mutation operator.
For a given parent solution p ∈ [a, b], the mutated solution p' for a particular variable is created from a random number u drawn within [0, 1], as follows:

p' = p + δ_L (p − x_i^(L)), for u ≤ 0.5,
p' = p + δ_R (x_i^(U) − p), for u > 0.5. (1)

Here x_i^(L) = a and x_i^(U) = b, and either of the two parameters (δ_L or δ_R) is calculated as follows:

δ_L = (2u)^(1/(1+η_m)) − 1, for u ≤ 0.5, (2)
δ_R = 1 − (2(1 − u))^(1/(1+η_m)), for u > 0.5. (3)

Since δ_L ∈ [−1, 0] and δ_R ∈ [0, 1], no offspring can leave [a, b]. To illustrate, Figure 1 shows the probability density of creating a mutated child point from a parent point p = 3.0 in the bounded range [1, 8] with η_m = 20.

Figure 1: Probability density function of creating a mutated child solution using the polynomial mutation operator.

3 Gaussian Mutation in Real-Parameter GAs

Let x_i ∈ [a_i, b_i] be a real variable. Then the truncated Gaussian mutation operator changes x_i to a neighboring value x̄_i using the following probability distribution:

p(x̄_i; x_i, σ_i) = (1/σ_i) φ((x̄_i − x_i)/σ_i) / (Φ((b_i − x_i)/σ_i) − Φ((a_i − x_i)/σ_i)), if a_i ≤ x̄_i ≤ b_i,
p(x̄_i; x_i, σ_i) = 0, otherwise, (4)

where φ(ζ) = (1/√(2π)) exp(−ζ²/2) is the probability density of the standard normal distribution and Φ(·) is its cumulative distribution function. This mutation operator has a mutation strength parameter σ_i for every variable, which should be related to the bounds a_i and b_i. We use σ̄ = σ_i/(b_i − a_i) as a fixed non-dimensionalized parameter for all n variables. To implement the above concept, we use the following equation to compute the offspring x̄_i:

x̄_i = x_i + √2 σ̄ (b_i − a_i) erf⁻¹(ū_i), (5)

where ū_i and an approximated value of erf⁻¹(ū_i) are calculated as follows (with u_i ∈ [0, 1] a random number):

ū_i = (2u_L − 1)(1 − 2u_i), if u_i ≤ 0.5,
ū_i = (2u_R − 1)(2u_i − 1), otherwise,

erf⁻¹(ū_i) ≈ sign(ū_i) √( √( (2/(πα) + ln(1 − ū_i²)/2)² − ln(1 − ū_i²)/α ) − (2/(πα) + ln(1 − ū_i²)/2) ),

where α = 8(π − 3)/(3π(4 − π)) ≈ 0.140012, and sign(ū_i) is −1 if ū_i < 0 and +1 if ū_i ≥ 0. Also, u_L and u_R are calculated as follows:

u_L = 0.5 (erf((a_i − x_i)/(√2 (b_i − a_i) σ̄)) + 1),
u_R = 0.5 (erf((b_i − x_i)/(√2 (b_i − a_i) σ̄)) + 1),

where erf() is the standard C-library error function:

erf(y) = (2/√π) ∫₀^y exp(−t²) dt.

The mutated offspring x̄_i will always lie within [a_i, b_i] for any random number u_i ∈ [0, 1]. Note that for ū_i = 2u_L − 1, x̄_i = a_i is created, and for ū_i = 2u_R − 1, x̄_i = b_i is automatically created. Thus, the Gaussian mutation procedure for mutating the i-th variable x_i is as follows:

Step 1: Create a random number u_i ∈ [0, 1].
Step 2: Use Equation 5 to create the offspring x̄_i from the parent x_i.

Figure 2 shows the probability density functions for three parents x_i = −4.5, 3.0 and 9.4 with bounds [−5, 10] and σ̄ = 1/30. Note that the probabilities of creating an offspring x̄_i < −5 or x̄_i > 10 in the respective cases are zero. Also, for the two extreme parents (close to the respective boundaries), the Gaussian distribution is truncated so that the probability of creating offspring outside the specified lower and upper bounds is zero.

Figure 2: Probability density functions for creating mutated offspring solutions from three parents using the Gaussian mutation operator.

4 Five Mutation Schemes

We discuss the five different mutation schemes used in this study. We also compare all five schemes with Scheme 0, in which no mutation operator is used.

4.1 Scheme 1: Usual Mutation

This mutation scheme follows the usual method of mutating each and every variable, one at a time, with a pre-defined mutation probability p_m [6, ]. Usually, p_m = 1/n is used, so that on an average one variable gets mutated per individual. A random number u ∈ [0, 1] is created for every variable of an individual, and if u ≤ p_m the variable is mutated using the polynomial mutation operator.
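The two variable-wise operators described above can be sketched in Python as follows. This is a minimal illustration, not the authors' code: the function names are mine, the polynomial operator follows Equations 1-3, the Gaussian operator follows Equations 4-5 with the Winitzki-style approximation of erf⁻¹ quoted in the text, and the default parameter values mirror the settings used later in the paper (η_m = 20, σ̄ = 1/30).

```python
import math
import random

def polynomial_mutation(p, low, high, eta_m=20.0, rng=random.random):
    """Polynomial mutation (Equations 1-3): perturb p inside [low, high]."""
    u = rng()
    if u <= 0.5:
        # delta_L in [-1, 0]: offspring moves towards the lower bound
        delta_l = (2.0 * u) ** (1.0 / (1.0 + eta_m)) - 1.0
        return p + delta_l * (p - low)
    # delta_R in [0, 1]: offspring moves towards the upper bound
    delta_r = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (1.0 + eta_m))
    return p + delta_r * (high - p)

def erfinv(y):
    """Approximate inverse error function (Winitzki), alpha ~ 0.140012."""
    a = 0.140012
    if abs(y) >= 1.0:
        return math.copysign(math.inf, y)  # saturate exactly at the bounds
    ln1 = math.log(1.0 - y * y)
    t = 2.0 / (math.pi * a) + 0.5 * ln1
    s = math.sqrt(max(t * t - ln1 / a, 0.0))
    return math.copysign(math.sqrt(max(s - t, 0.0)), y)

def gaussian_mutation(x, low, high, sigma=1.0 / 30.0, rng=random.random):
    """Truncated Gaussian mutation (Equations 4-5)."""
    u = rng()
    s = math.sqrt(2.0) * sigma * (high - low)
    u_l = 0.5 * (math.erf((low - x) / s) + 1.0)   # CDF position of lower bound
    u_r = 0.5 * (math.erf((high - x) / s) + 1.0)  # CDF position of upper bound
    if u <= 0.5:
        ubar = (2.0 * u_l - 1.0) * (1.0 - 2.0 * u)  # left of the parent
    else:
        ubar = (2.0 * u_r - 1.0) * (2.0 * u - 1.0)  # right of the parent
    child = x + s * erfinv(ubar)
    return min(max(child, low), high)  # clamp: erfinv is only approximate
```

Scheme 1 then simply draws one random number per variable and applies one of these operators whenever that number falls below p_m.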
4.2 Scheme 2: Mutation Clock

The usual mutation scheme described above requires n random numbers per individual. To reduce this computational overhead, Goldberg [6] suggested a mutation clock scheme for binary-coded GAs, in which, once a bit is mutated, the next bit to be mutated (in the same or in a different individual) is determined using an exponential probability distribution:

p(t) = λ exp(−λt), (6)

where 1/λ is the average gap between mutation occurrences. We implement the mutation clock here, for the first time, for real-parameter GAs. With a mutation probability p_m, on an average one mutation will occur every 1/p_m variables; thus, λ = p_m. For a mutation event, a random number u ∈ [0, 1] is first chosen. Then, equating u = ∫₀^l p_m exp(−p_m t) dt, we obtain the gap l to the next occurrence of mutation:

l = −(1/p_m) log(1 − u). (7)

If the k-th variable of the i-th individual in the population is currently mutated, the next variable to be mutated is the ((k + l) mod n)-th variable of the individual ⌊(k + l)/n⌋ positions after the current individual in the population. At every generation, the initial mutation location is reset using i = k = 1. This operator should, on an average, require n times fewer random numbers than Scheme 1.

4.3 Scheme 3: One Mutation per Solution

Here, we choose a random variable index i ∈ [1, n] and mutate x_i using the polynomial mutation operator. Exactly one mutation is performed per individual.
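The mutation clock bookkeeping of Equation 7 can be sketched in Python as follows. This is my own illustrative implementation, not code from the paper; in particular, advancing by at least one variable per event is an implementation choice of this sketch.

```python
import math
import random

def mutation_clock_positions(pop_size, n, p_m, rng=random.random):
    """Scheme 2 (mutation clock): yield (individual, variable) pairs to mutate.

    The gap l to the next mutated variable is drawn from the exponential
    distribution of Equation 7, l = -(1/p_m) * ln(1 - u), so on average only
    one random number is needed per 1/p_m variables, instead of one per
    variable as in Scheme 1.
    """
    total = pop_size * n          # variables in the whole population, flattened
    pos = -1                      # flattened position of the previous mutation
    while True:
        gap = -math.log(1.0 - rng()) / p_m
        pos += max(int(gap), 1)   # advance at least one variable (my choice)
        if pos >= total:
            return
        yield divmod(pos, n)      # -> (individual index, variable index)
```

With p_m = 1/n, iterating over this generator visits roughly one variable per individual per generation, matching the expected mutation count of Scheme 1 at a fraction of the random-number cost.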

4.4 Scheme 4: Fixed Strategy Mutation

This mutation scheme is similar to Scheme 3, except that every variable is given an equal chance, thereby providing a less noisy implementation of Scheme 3. For this purpose, the variables are put in a random order in every generation, and variables are then mutated using the polynomial mutation operator according to this order, from the first population member onwards: after n variables are mutated in the top n population members, the same order is followed again from the (n + 1)-th population member. Again, exactly one mutation is performed per individual.

4.5 Scheme 5: Diversity-Based Mutation

In this mutation scheme, we put a higher mutation probability on variables having a smaller population-wise diversity. To implement this, first the variance of each variable's values across the population members is computed, and the variables are sorted in ascending order of variance. Thereafter, an exponential probability distribution (p(i) = λ exp(−λi) for i ∈ [0, n − 1]) over the sorted positions is used. To make the above a probability distribution, λ is found as the root of the following equation for a fixed n:

λ exp(−nλ) − exp(−λ) − λ + 1 = 0. (8)

Thereafter, for a random number u ∈ [0, 1], the position (l + 1) of the variable that should be mutated is given by:

l = −(1/λ) log(1 − u(1 − exp(−nλ))). (9)

For n = 15, λ = 0.168 is found. This scheme makes one mutation per individual.

5 Results with Polynomial Mutation

We consider four different problems to investigate the effect of the five suggested mutation schemes. The objective functions have their minimum at x_i = 0 (i = 1, ..., n) for the first three problems, and at x_i = 1 (i = 1, ..., n) for the fourth problem. However, in each case we consider 15 variables (n = 15), initialized and bounded within x_i ∈ [−5, 10]. This does not make the optimum lie exactly at the middle of the search space. We use the following GA parameter settings: (i) No.
of real variables, n = 15, (ii) Population size = 150, (iii) SBX operator probability, p_c = 0.9, (iv) SBX operator index, η_c = 2, (v) Polynomial mutation probability, p_m = 1/n, (vi) Polynomial mutation index, η_m = 20, and (vii) Termination parameter ǫ_T = 0.01. For each mutation scheme, we make 51 different runs starting from different initial populations; however, the same set of initial populations is used for all mutation schemes. The mutation schemes are then compared with each other and with the zero-mutation scheme.

5.1 Ellipsoidal Function

This function is unimodal and additively separable, having f_ell(x*) = 0:

f_ell(x) = Σ_{i=1}^{D} i x_i². (10)

The performance of the different mutation schemes on this problem is shown in Table 1. While all mutation schemes succeed on 100% of the runs, GAs without mutation fail to find the required solution in more than 50% of the runs. This amply suggests the importance of using a mutation scheme in RGAs. Figure 3 shows how the objective value is reduced with generation for all five mutation schemes. Clearly, Schemes 1 and 2 perform the best.

Figure 3: Variation of the number of function evaluations with generation for the ellipsoidal problem using the polynomial mutation operator.

Table 1 also shows that, despite the use of a clock, Scheme 2 performs more or less an identical number of mutations. Scheme 2 requires about O(1/p_m), or O(n), fewer random numbers, thereby making it faster than Scheme 1. Thus, it can be concluded that Scheme 2 (mutation clock) is the most efficient mutation scheme for the ellipsoidal problem.
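As an aside, the λ computation for Scheme 5 (Equations 8 and 9) can be verified numerically. The sketch below is my own; the bisection bracket is chosen to avoid the trivial root λ = 0 and is valid for moderate n such as the n = 15 used here.

```python
import math

def solve_lambda(n, lo=0.05, hi=1.0, tol=1e-12):
    """Solve Equation 8, lam*exp(-n*lam) - exp(-lam) - lam + 1 = 0, by
    bisection. lam = 0 is a trivial root, so the bracket starts above it."""
    def f(lam):
        return lam * math.exp(-n * lam) - math.exp(-lam) - lam + 1.0
    assert f(lo) > 0 > f(hi)          # sign change brackets the nonzero root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def diversity_pick(n, lam, u):
    """Equation 9: map a uniform u in [0,1) to the (0-based) rank l of the
    variable to mutate, favouring variables with low population variance."""
    l = -math.log(1.0 - u * (1.0 - math.exp(-n * lam))) / lam
    return int(l)
```

For n = 15 this reproduces λ ≈ 0.168, the value quoted for Scheme 5, and small u values pick the lowest-variance variables first.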

Table 1: Performance of different mutation schemes on the ellipsoidal problem using the polynomial mutation operator.

5.2 Schwefel's Function

The function is as follows:

f_sch(x) = Σ_{i=1}^{D} (Σ_{j=1}^{i} x_j)². (11)

The performance of the different mutation schemes on this problem is shown in Table 2. Figure 4 shows the reduction in objective value with generation. All five schemes perform almost equally well.

Figure 4: Variation of the number of function evaluations with generation for Schwefel's function using the polynomial mutation operator.

Table 2 shows that, to achieve the same function value, Scheme 5 requires fewer generations and hence fewer mutations compared to the other mutation schemes. Since Schwefel's function makes all variables linked with each other, a variance-based mutation helps to identify the less-converged variables to be mutated more often, thereby making the performance of Scheme 5 better. However, the performance of the other four mutation schemes is also comparable, although RGAs without any mutation did not perform well.

5.3 Ackley's Function

Next, we consider Ackley's function:

f_ack(x) = 20 + e − 20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i²)) − exp((1/D) Σ_{i=1}^{D} cos(2πx_i)). (12)

The performance of the different mutation schemes on this problem is shown in Table 3. The table shows that Schemes 1 and 2 perform much better than the other three mutation schemes. Again, RGAs without any mutation operator do not perform well. Figure 5 shows the variation of the objective value with generation. It is clear that Schemes 1 and 2 perform better than the other three schemes.

Figure 5: Variation of f with generation for Ackley's function using the polynomial mutation operator.

Table 2: Performance of different mutation schemes on Schwefel's problem using the polynomial mutation operator.

Table 3: Performance of different mutation schemes on Ackley's function using the polynomial mutation operator.

5.4 Rosenbrock's Function

Rosenbrock's function is as follows:

f_ros(x) = Σ_{i=1}^{D−1} [100 (x_{i+1} − x_i²)² + (x_i − 1)²]. (13)

Since this is a difficult problem to solve, as reported in the literature [4], we use ǫ_T = 5. The performance of the different mutation schemes on this problem is shown in Table 4. The table indicates that Schemes 1 and 2 perform the best. Scheme 2 takes a slightly smaller number of generations and less computational time to find a similar solution. Clearly, RGAs without the mutation operator do not perform well. Figure 6 reiterates the fact that Scheme 2 performs slightly better than Scheme 1 and the other mutation schemes.
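For reference, the four benchmark functions of Equations 10-13 can be written compactly as follows (a sketch based on the formulas above; the function names are mine):

```python
import math

def f_ell(x):
    """Ellipsoidal function (Equation 10): unimodal, separable, f(0) = 0."""
    return sum(i * xi * xi for i, xi in enumerate(x, start=1))

def f_sch(x):
    """Schwefel's function (Equation 11): links variables via partial sums."""
    return sum(sum(x[: i + 1]) ** 2 for i in range(len(x)))

def f_ack(x):
    """Ackley's function (Equation 12): multimodal, f(0) = 0."""
    d = len(x)
    return (20.0 + math.e
            - 20.0 * math.exp(-0.2 * math.sqrt(sum(xi * xi for xi in x) / d))
            - math.exp(sum(math.cos(2.0 * math.pi * xi) for xi in x) / d))

def f_ros(x):
    """Rosenbrock's function (Equation 13): f(1, ..., 1) = 0."""
    return sum(100.0 * (x[i + 1] - x[i] * x[i]) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))
```

Evaluating these at the known optima (the zero vector for the first three, the all-ones vector for Rosenbrock) confirms the minimum values used as termination targets.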
On the four different problems, it is observed that, in terms of function evaluations, Schemes 1 and 2 in general perform better than the other three mutation schemes and the zero-mutation scheme. The tables also show that Scheme 2 requires less computational time than Scheme 1. Hence, we recommend the use of the mutation clock as the preferred mutation scheme with the polynomial mutation operator for real-parameter GAs.

6 Parametric Studies

Having established the superiority of the mutation clock scheme, we now perform a parametric study of the distribution index η_m and the mutation probability p_m of the polynomial mutation operator for Scheme 2.

6.1 Parametric Study with the Distribution Index

In the previous section, η_m = 20 was used. Here, all other RGA parameters are kept fixed, while η_m is changed in different simulations. Figure 7 shows that η_m ∈ [100, 200] performs the best on the ellipsoidal problem. For smaller values, the

Table 4: Performance of different mutation schemes on Rosenbrock's problem using the polynomial mutation operator.

Figure 6: Variation of f with generation for Rosenbrock's function using the polynomial mutation operator.

perturbation due to mutation is large, and hence the respective RGA finds it difficult to find a near-optimal solution quickly. On the other hand, for larger η_m values the perturbation is too small to create a near-optimal solution quickly. The values η_m ∈ [100, 200] provide RGAs with the right perturbation of parent solutions to converge near the optimal solution. This working range for η_m is wide, thereby demonstrating the robustness of the polynomial mutation operator with the proposed mutation clock scheme.

Figure 7: Parametric study of η_m for Scheme 2 on the ellipsoidal function using the polynomial mutation operator.

Figure 8 shows that η_m ∈ [100, 150] produces the best results for Schwefel's function, which is similar to the range obtained for the ellipsoidal function. The number in brackets shown for the smallest η_m value indicates the number of times (out of 51) for which the RGAs terminated with the f(x) ≤ ǫ_T condition. Figure 9 shows that η_m ∈ [100, 150] produces the best results for Ackley's function. Although η_m ∈ [200, 300] produces some better runs, the variance in the performance is too wide for it to be recommended for practice. Figure 10 shows that η_m ∈ [20, 100] produces the best results for Rosenbrock's function.

6.2 Parametric Study with the Mutation Probability

Next, we perform a parametric study of the mutation probability p_m with Scheme 2, fixing the mutation index at η_m = 100 (which is found to be near-optimal in the previous subsection). We use p_m = k/n with k = 0.01, 0.1, 0.5, 0.75, 1, 1.5, 2, and 5. The rest of the RGA parameters are kept the same as before. Figure 11 shows the number of generations needed in the 51 runs for the ellipsoidal problem. Mutation probabilities of 0.01/n and 0.1/n are not found to produce a reasonable result. The figure shows that 0.5/n to 1.5/n perform the best. Thus, the usual practice of p_m = 1/n is justified by our study.

Figure 8: Parametric study of η_m for Scheme 2 on Schwefel's function using the polynomial mutation operator.

Figure 9: Parametric study of η_m for Scheme 2 on Ackley's function using the polynomial mutation operator.

Figure 10: Parametric study of η_m for Scheme 2 on Rosenbrock's function using the polynomial mutation operator.

Figure 11: Parametric study of p_m for Scheme 2 on the ellipsoidal function using the polynomial mutation operator.

Figure 12 shows that p_m ∈ [0.75, 1.0]/n performs better for Schwefel's function. Too low or too high values of p_m cause too small or too frequent changes in the variables of the population members. For Ackley's function, p_m ∈ [0.75, 1.5]/n performs better, with p_m = 1/n clearly performing the best. For Rosenbrock's function, although most p_m values find a similar best performance, the least variance in the results comes with p_m = 1/n. Thus, based on the above extensive simulation studies, we recommend the following parameter values for the polynomial mutation operator, to be applied with the SBX recombination operator (η_c = 2) and the binary tournament selection operator:

Mutation scheme = mutation clock [6],
Mutation index, η_m [3] ∈ [100, 150],
Mutation probability, p_m ∈ [0.75, 1.5]/n, where n is the number of variables.

7 Results with Gaussian Mutation

We now perform a similar study for the Gaussian mutation operator, another commonly-used mutation operator described before. For this mutation, we use the following RGA parameter settings: (i) No. of real variables, n = 15, (ii) Population size = 150, (iii) SBX operator probability, p_c = 0.9, (iv) SBX operator index, η_c = 2, (v) Gaussian mutation probability, p_m = 0.067, (vi) Gaussian mutation strength, σ̄ = 1/30, and (vii) Termination parameter ǫ_T = 0.01. Like before, for each mutation scheme we make 51 different runs starting from different initial populations; however, the same set of initial populations is used for all mutation schemes. The mutation schemes are then
Like before, for each mutation scheme we make 51 different runs starting from different initial populations; however, the same set of initial populations is used for all mutation schemes. Mutation schemes are then

Figure 12: Parametric study of p_m for Scheme 2 on Schwefel's function using the polynomial mutation operator.

Figure 13: Parametric study of p_m for Scheme 2 on Ackley's function using the polynomial mutation operator.

Figure 14: Parametric study of p_m for Scheme 2 on Rosenbrock's function using the polynomial mutation operator.

compared with each other and with the zero-mutation scheme.

7.1 Ellipsoidal Function

The performance of the different mutation schemes on this problem is shown in Table 5. All five mutation schemes succeed in 100% of the runs, whereas RGAs without mutation fail to find the required solution in more than 50% of the runs. This amply demonstrates the importance of using a mutation scheme in RGAs. Figure 15 shows how the objective value reduces with generation for all five mutation schemes. Clearly, Schemes 1 and 2 perform better than the other three schemes.

Figure 15: Variation of the number of function evaluations with generation for the ellipsoidal problem using the Gaussian mutation operator.

7.2 Schwefel's Function

The performance of the different mutation schemes on this problem is shown in Table 6. The performance of all five mutation schemes is similar, with Scheme 2 having a slight edge. Figure 16 shows the reduction in objective value with generation. All five schemes perform almost equally well on this problem.

7.3 Ackley's Function

The performance of the different mutation schemes on this problem is shown in Table 7. It is clear from the table that Schemes 1 and 2 are almost an order of magnitude better than the rest of the schemes. Figure 17 shows the variation of the objective value with generation. It is also clear from the figure that Schemes 1 and 2 perform better than the other three schemes.
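The four test problems used in this study are standard benchmarks; minimal definitions in their usual textbook forms are given below (the exact variable bounds and scalings used in the experiments may differ):

```python
import math

def ellipsoidal(x):
    # f(x) = sum_i i * x_i^2, minimum 0 at the origin
    return sum(i * v * v for i, v in enumerate(x, start=1))

def schwefel(x):
    # Schwefel's function 1.2: f(x) = sum_i (sum_{j<=i} x_j)^2
    total, partial = 0.0, 0.0
    for v in x:
        partial += v
        total += partial * partial
    return total

def ackley(x):
    # Ackley's multimodal function, minimum 0 at the origin
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2.0 * math.pi * v) for v in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e

def rosenbrock(x):
    # Generalized Rosenbrock function, minimum 0 at (1, ..., 1)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))
```

A run is counted as successful when the best objective value falls below the termination threshold ε_T.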

Table 5: Performance of different mutation schemes on the ellipsoidal problem using the Gaussian mutation operator.

     | Scheme 0 | Scheme 1 | Scheme 2 | Scheme 3 | Scheme 4 | Scheme 5
min. | 0.06     | 0.07     | 0.06     | 0.09     | 0.08     | 0.
avg. | 3.0      | 0.08     | 0.08     | 0.4      | 0.3      | 0.7
med. | 3.8      | 0.08     | 0.08     | 0.4      | 0.3      | 0.7
max. | 4.55     | 0.0      | 0.09     | 0.8      | 0.7      | 0.4

min. | .00      | 40.00    | 4.00     | 5.00     | 74.00    | 3.00
avg. | 7,70.    | 75.00    | 78.78    | 37.96    | 308.08   | 344.08
med. | 0,000.00 | 75.00    | 80.00    | 38.00    | 305.00   | 339.00
max. | 0,000.00 | .00      | 4.00     | 48.00    | 405.00   | 497.00

min. | 0.00     | 4,054.00 | 4,59.00  | ,500.00  | 7,400.00  | 3,00.00
avg. | 0.00     | 7,573.03 | 8,594.   | 3,796.08 | 30,807.84 | 34,407.84
med. | 0.00     | 7,63.00  | 8,74.0   | 3,800.00 | 30,500.00 | 33,900.0
max. | 0.00     | ,377.00  | ,3.00    | 4,800.00 | 40,500.00 | 49,700.00

Table 6: Performance of different mutation schemes on Schwefel's problem using the Gaussian mutation operator.

     | Scheme 0 | Scheme 1  | Scheme 2   | Scheme 3 | Scheme 4  | Scheme 5
min. | 4.05     | 0.5       | 0.4        | 0.40     | 0.5       | 0.49
avg. | 4.60     | .06       | .03        | 0.75     | 0.80      | 0.88
med. | 4.73     | .00       | .0         | 0.75     | 0.8       | 0.86
max. | 5.5      | .75       | .3         | .49      | .6        | .46

min. | 0,000.00 | 983.00    | 86.00      | 87.00    | ,07.00    | 904.00
avg. | 0,000.00 | 046.0     | ,47.80     | ,6.35    | ,77.68    | ,597.78
med. | 0,000.00 | ,940.00   | ,094.00    | ,633.00  | ,74.00    | ,578.0
max. | 0,000.00 | 3,343.00  | 4,700.00   | 3,7.00   | ,593.00   | ,67.00

min. | 0.00     | 98,937.00 | 89,39.00   | 87,00.00 | 0,700.00  | 90,400.00
avg. | 0.00     | 05,74.96  | 3,75.4     | 6,35.9   | 7,768.63  | 59,778.43
med. | 0.00     | 95,30.00  | 7,533.00   | 63,300.00| 74,00.00  | 57,800.00
max. | 0.00     | 336,73.00 | 488,678.00 | 3,700.00 | 59,300.00 | 67,00.00

7.4 Rosenbrock's Function

Like before, we use ε_T = 5. The performance of the different mutation schemes on this problem is shown in Table 8. Figure 18 indicates that all schemes perform more or less in a similar manner.

On the four problems, it is observed that, in terms of function evaluations, Schemes 1 and 2 perform better than or similar to the other three mutation schemes. All proposed mutation schemes are better than RGAs without any mutation. The tables also show that Scheme 2 requires much less computational time than Scheme 1. Hence, we again recommend the use of the mutation clock as an efficient mutation scheme for real-parameter GAs with the Gaussian mutation operator.
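The mutation clock recommended here can be sketched as follows: rather than testing every gene with probability p_m, the gap to the next mutated gene is drawn from an exponential distribution with mean 1/p_m, so only about p_m times the total number of genes random numbers are needed instead of one per gene. The integer rounding of the continuous gap is an implementation assumption:

```python
import math
import random

def mutation_clock_positions(total_genes, p_m):
    """Goldberg's mutation clock: return the gene positions to mutate in
    one pass over a population laid out as total_genes consecutive genes.
    Successive gaps are exponentially distributed with mean 1/p_m, so on
    average about p_m * total_genes positions are produced, at the cost
    of one random number per mutation rather than one per gene."""
    positions, pos = [], -1
    while True:
        # exponential gap with mean 1/p_m, rounded to an integer >= 1
        gap = int(-math.log(1.0 - random.random()) / p_m) + 1
        pos += gap
        if pos >= total_genes:
            return positions
        positions.append(pos)
```

The caller maps each flat position back to (individual, variable) and applies the chosen mutation operator there.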
8 Parametric Studies with the Gaussian Mutation Operator

We now perform a parametric study of the mutation strength σ and the mutation probability p_m for Scheme 2 using Gaussian mutation.

8.1 Parametric Study for Mutation Strength

Figure 19 shows that σ = 0.25/15 = 1/60 performs the best on the ellipsoidal problem. As the mutation strength is increased, the performance worsens. Figure 20 shows that σ = 0.1/15 = 1/150 produces the best results for Schwefel's function. Figure 21 shows that σ = 0.25/15 = 1/60 produces the best results for Ackley's function; a smaller value makes only 38 out of 51 runs successful. Figure 22 shows that although the median performance is

better for σ ∈ [0.25, 1.0]/15, there is a large variance in the performance over the 51 runs for Rosenbrock's function. Nevertheless, the above simulations suggest that σ = 0.25/15 = 1/60 performs better than the other values.

Table 7: Performance of different mutation schemes on Ackley's function using the Gaussian mutation operator.

     | Scheme 0 | Scheme 1 | Scheme 2 | Scheme 3  | Scheme 4  | Scheme 5
min. | 0.0      | 0.0      | 0.0      | 0.3       | 0.37      | 0.58
avg. | 4.8      | 0.4      | 0.3      | 0.64      | 0.63      | 0.90
med. | 4.55     | 0.4      | 0.3      | 0.64      | 0.6       | 0.90
max. | 5.       | 0.9      | 0.0      | 0.9       | 0.95      | .35

min. | 07.00    | 0.00     | 0.00     | 690.00    | 807.00    | 83.00
avg. | 9,47.43  | 75.55    | 78.      | ,375.3    | ,30.33    | ,403.84
med. | 0,000.00 | 77.00    | 74.00    | ,36.00    | ,55.00    | ,346.00
max. | 0,000.00 | 366.00   | 45.00    | ,009.00   | ,089.00   | ,3.00

min. | 0.00     | 0,58.00  | ,079.00  | 69,000.00 | 80,700.00 | 83,00.00
avg. | 0.00     | 7,70.8   | 8,98.57  | 37,53.37  | 30,33.33  | 403,84.3
med. | 0.00     | 7,98.00  | 8,479.00 | 36,00.00  | 5,500.00  | 34,600.00
max. | 0.00     | 36,94.00 | 43,90.00 | 00,900.00 | 08,900.00 | 3,00.00

Table 8: Performance of different mutation schemes on Rosenbrock's problem using the Gaussian mutation operator.

     | Scheme 0 | Scheme 1 | Scheme 2  | Scheme 3 | Scheme 4   | Scheme 5
min. | 0.08     | 0.05     | 0.05      | 0.05     | 0.04       | 0.06
avg. | 4.3      | .64      | 0.0       | 0.7      | 0.         | 0.3
med. | 4.58     | 0.08     | 0.08      | 0.08     | 0.07       | 0.0
max. | 5.07     | 0.3      | .76       | .35      | .          | 5.40

min. | 65.00    | 89.00    | 95.00     | 04.00    | 86.00      | .00
avg. | 9433.04  | 45.76    | 487.0     | 360.     | 46.96      | 585.80
med. | 0,000.00 | 57.00    | 66.00     | 6.00     | 59.00      | 87.00
max. | 0,000.00 | 3,8.00   | 3,708.00  | ,98.00   | 4,579.00   | 0,000.00

min. | 0.00     | 8,839.00 | 9,83.00   | 0,400.00 | 8,600.00   | ,00.00
avg. | 0.00     | 45,438.7 | 50,64.08  | 36,0.76  | 46,96.08   | 58,580.39
med. | 0.00     | 5,853.00 | 7,35.00   | 6,00.00  | 5,900.00   | 8,700.00
max. | 0.00     | 39,7.00  | 385,37.00 | 9,800.00 | 457,900.00 | 00,000.00

8.2 Parametric Study with Mutation Probability

Next, we perform a parametric study of the mutation probability p_m with Scheme 2, fixing the mutation strength at σ = 1/60 (found to be near-optimal in the previous subsection). We use p_m = k/n with k = 0.1, 0.5, 0.75, 1, 1.5, 2, and 5. The rest of the RGA parameters are kept the same as before. Figure 23 shows the number of generations needed over 51 runs for the ellipsoidal problem.
Mutation probabilities in the range [0.5, 1.5]/n are found to produce the best performance. On Schwefel's function (Figure 24), p_m ∈ [0.75, 1.0]/n produces better results. For Ackley's function (Figure 25), p_m ∈ [0.5, 1.0]/n produces the best performance, and for Rosenbrock's function (Figure 26), p_m ∈ [0.75, 1.0]/n produces the best performance. Thus, based on the above simulation results, we recommend the following parameter values for the Gaussian mutation operator when it is applied with the SBX recombination operator (η_c = 2) and the binary tournament selection operator: mutation scheme = mutation clock [6], standard deviation σ ∈ [1/60, 1/30], and mutation probability p_m ∈ [0.75, 1.0]/n, where n is the number of variables.

8.3 Comparison of Polynomial and Gaussian Mutation Operators

Table 9: Performance of polynomial mutation (with η_m = 100) and Gaussian mutation (with σ = 1/60) on all four problems.

             Mut. Op. | Ellipsoidal | Schwefel  | Ackley   | Rosenbrock
(s) min.     Poly.    | 0.050       | 0.398     | 0.079    | 0.047
             Gauss.   | 0.050       | 0.339     | 0.089    | 0.046
    avg.     Poly.    | 0.059       | 0.556     | 0.00     | 0.7
             Gauss.   | 0.064       | 0.6       | 0.48     | 0.35
    med.     Poly.    | 0.059       | 0.553     | 0.099    | 0.087
             Gauss.   | 0.063       | 0.587     | 0.08     | 0.08
    max.     Poly.    | 0.067       | 0.688     | 0.5      | 0.85
             Gauss.   | 0.074       | 0.95      | 0.0      | .407

    min.     Poly.    | 3.00        | 843.00    | 7.00     | 0.00
             Gauss.   | 4.00        | 76.00     | 89.00    | 96.00
    avg.     Poly.    | 44.74       | ,84.5     | 7.00     | 373.59
             Gauss.   | 53.         | ,3.4      | 35.80    | 49.63
    med.     Poly.    | 46.00       | ,8.00     | 6.00     | 88.00
             Gauss.   | 5.00        | ,57.00    | 3.00     | 69.00
    max.     Poly.    | 64.00       | ,466.00   | 74.00    | ,760.00
             Gauss.   | 78.00       | ,05.00    | 39.00    | 5,076.00

    min.     Poly.    | ,799.00     | 87,65.00  | 8,005.00 | 0,45.00
             Gauss.   | ,876.00     | 75,73.00  | 9,840.00 | 9,93.00
    avg.     Poly.    | 5,054.80    | 3,05.7    | ,634.3   | 38,88.88
             Gauss.   | 5,935.00    | 36,389.57 | 4,56.37  | 5,7.7
    med.     Poly.    | 5,73.00     | ,80.00    | ,45.00   | 9,773.00
             Gauss.   | 5,68.00     | 30,35.00  | 3,99.00  | 7,430.00
    max.     Poly.    | 7,9.00      | 5,07.00   | 8,7.00   | 8,379.00
             Gauss.   | 8,687.00    | ,50.00    | 33,00.00 | 57,745.00

Figure 16: Variation of the number of function evaluations with generation for Schwefel's function using the Gaussian mutation operator.

Figure 17: Variation of f with generation for Ackley's function using the Gaussian mutation operator.

Independent parametric studies revealed that, all other parameters being equal, η_m = 100 for polynomial mutation and σ = 0.25/15 = 1/60 for Gaussian mutation perform the best. We tabulate the performance of RGAs with Scheme 2 (mutation clock) for these two parameter settings in Table 9, with p_m = 1/n. It is clear that the performance of the two mutation operators with their respective parameter settings is similar, except in the overall number
of mutations for Rosenbrock's function. Compared with the wide range of these performance metric values observed with other mutation schemes and parameter values, as shown in the earlier tables of this paper, the differences in the performance metric values between the two mutation operators at their respective parameter settings are small. To investigate why these two parameter settings

Figure 18: Variation of f with generation for Rosenbrock's function using the Gaussian mutation operator.

Figure 19: Parametric study of σ for Scheme 2 on the ellipsoidal function using the Gaussian mutation operator.

Figure 20: Parametric study of σ for Scheme 2 on Schwefel's function using the Gaussian mutation operator.

Figure 21: Parametric study of σ for Scheme 2 on Ackley's function using the Gaussian mutation operator.

cause a similar performance of the two mutation operators, we plot the probability density of the two mutation events for a parent x_i = 3.0 with bounds [-5, 10]. The polynomial mutation distribution is used with η_m = 100 and the Gaussian mutation with σ = 1/60. Figure 27 shows that both have a similar probability distribution of creating offspring solutions around the parent. It is interesting that an inherently equivalent probability distribution for creating offspring solutions makes both mutation operators exhibit similar performance. This reveals that it is not the implementation detail of one mutation operator or another, as often emphasized in the literature, but the inherent probability distribution of offspring creation that dictates the efficacy of a mutation operator.

9 Conclusions

Mutation operators are used for maintaining diversity in a genetic algorithm. In this paper, we have suggested five different mutation schemes for real-parameter GAs and tested their performance with two commonly used mutation operators, polynomial mutation and Gaussian mutation, on four different test problems.
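The two offspring densities compared in Figure 27 can be evaluated directly. The sketch below assumes the standard bounded polynomial mutation distribution and takes σ in absolute units, since the normalization of σ in the plot is not fully specified:

```python
import math

def polynomial_pdf(y, x, low, high, eta_m=100.0):
    """Density of a child y under bounded polynomial mutation of parent x:
    the perturbation delta has density 0.5*(eta_m + 1)*(1 - |delta|)**eta_m
    on [-1, 1], and each half is scaled by the distance from x to the
    corresponding bound."""
    if not (low <= y <= high):
        return 0.0
    span = (high - x) if y >= x else (x - low)
    delta = abs(y - x) / span
    return 0.5 * (eta_m + 1.0) * (1.0 - delta) ** eta_m / span

def gaussian_pdf(y, x, sigma):
    """Density of a child y under unbounded Gaussian mutation N(x, sigma)."""
    z = (y - x) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
```

For a parent x = 3.0 in [-5, 10], each half of the polynomial density integrates to 0.5, so its total mass over the bounds is 1; plotting both functions on a grid around the parent reproduces the kind of comparison shown in Figure 27.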
Based on their performance, we conclude the following: (i) any of the five mutation schemes is better than not performing any mutation; (ii) the mutation clock operator, which was suggested in the eighties and has long been ignored, is found here to be the fastest in terms of execution time and better in terms of performance; and (iii) these conclusions are valid for both the polynomial and the Gaussian mutation operators.

The parametric studies have also identified suitable ranges of the mutation parameters for both operators. For polynomial mutation, a distribution index η_m ∈ [100, 150] and a mutation probability p_m ∈ [0.5, 1.5]/n (where n is the number of variables) have been found to perform the best. For the Gaussian mutation operator, a mutation strength σ ∈ [0.25, 0.5]/(b_i - a_i) (where a_i and b_i are the lower and upper bounds of the i-th variable) and a mutation probability p_m ∈ [0.75, 1.0]/n have been found to perform better.

This study is extensive in performing simulations with two different mutation operators, each analysed with five different mutation schemes. The superior performance of the mutation clock scheme with both mutation operators gives us the confidence to recommend this scheme in practice. Moreover, the more or less similar optimal mutation probability of around 1/n observed for both mutation operators also gives us confidence in its use on other problems. Interestingly, the inherent probability distributions of creating offspring with the polynomial and Gaussian mutation operators have been found to be similar, and this similarity is instrumental in the superior performance of both methods with the mutation clock scheme. Furthermore, we believe that this study should resurrect the use of the mutation clock in real-parameter GA studies and encourage its use in the coming years.

Figure 22: Parametric study of σ for Scheme 2 on Rosenbrock's function using the Gaussian mutation operator.

Figure 23: Parametric study of p_m for Scheme 2 on the ellipsoidal function using the Gaussian mutation operator.

Figure 24: Parametric study of p_m for Scheme 2 on Schwefel's function using the Gaussian mutation operator.

Figure 25: Parametric study of p_m for Scheme 2 on Ackley's function using the Gaussian mutation operator.

Figure 26: Parametric study of p_m for Scheme 2 on Rosenbrock's function using the Gaussian mutation operator.

Figure 27: Comparison of polynomial and Gaussian mutation for a parent x_i = 3.0 in [-5, 10].

Acknowledgments

The first author acknowledges the Academy of Finland grant no. 33387.

References

[1] K. Deb. Multi-objective optimization using evolutionary algorithms. Chichester, UK: Wiley, 2001.

[2] K. Deb and R. B. Agrawal. Simulated binary crossover for continuous search space. Complex Systems, 9(2):115-148, 1995.

[3] K. Deb and S. Agrawal. A niched-penalty approach for constraint handling in genetic algorithms. In Proceedings of the International Conference on Artificial Neural Networks and Genetic Algorithms (ICANNGA-99), pages 235-243. Springer-Verlag, 1999.

[4] K. Deb, Y. Dhebar, and N. V. R. Pavan. Non-uniform mapping in binary-coded genetic algorithms. Technical Report, KanGAL, IIT Kanpur, India, 2011.

[5] L. J. Eshelman and J. D. Schaffer. Real-coded genetic algorithms and interval-schemata. In Foundations of Genetic Algorithms 2 (FOGA-2), pages 187-202, 1993.

[6] D. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley, 1989.

[7] T. Higuchi, S. Tsutsui, and M. Yamamura. Theoretical analysis of simplex crossover for real-coded genetic algorithms. In Parallel Problem Solving from Nature (PPSN-VI), pages 365-374, 2000.

[8] J. H. Holland. Adaptation in Natural and Artificial Systems. Ann Arbor, MI: University of Michigan Press, 1975.

[9] H. Kita, I. Ono, and S. Kobayashi. Multi-parental extension of the unimodal normal distribution crossover for real-coded genetic algorithms. In Proceedings of the 1999 Congress on Evolutionary Computation, pages 1581-1587, 1999.

[10] Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Berlin: Springer-Verlag, 1992.

[11] J. D. Schaffer, R. A. Caruana, L. J. Eshelman, and R. Das. A study of control parameters affecting online performance of genetic algorithms. In Proceedings of the International Conference on Genetic Algorithms, pages 51-60, 1989.

[12] H.-P. Schwefel. Collective phenomena in evolutionary systems. In P. Checkland and I. Kiss, editors, Problems of Constancy and Change: The Complementarity of Systems Approaches to Complexity, pages 1025-1033. Budapest: International Society for General System Research, 1987.

[13] H.-M. Voigt, H. Mühlenbein, and D. Cvetković. Fuzzy recombination for the Breeder Genetic Algorithm. In Proceedings of the Sixth International Conference on Genetic Algorithms, pages 104-111, 1995.