Global optimization using Lévy flights

Truyen Tran, Trung Thanh Nguyen, Hoang Linh Nguyen

September 2004

arXiv:1407.5739v1 [cs.NE] 22 Jul 2014

Abstract

This paper studies a class of enhanced diffusion processes in which random walkers perform Lévy flights, and applies it to global optimization. Lévy flights offer a controlled balance between exploitation and exploration. We develop four optimization algorithms based on such properties. We compare the new algorithms with the well-known Simulated Annealing on hard test functions, and the results are very promising.

Key words: Lévy flights, global optimization.

This is an edited version of a paper originally published in Proceedings of the Second National Symposium on Research, Development and Application of Information and Communication Technology (ICT.rda 04), Hanoi, Sept 24-25, 2004.

1 Introduction

Optimization can be characterized as a process of minimizing an objective function over a bounded or unbounded space. Many members of the widely used class of optimization algorithms known as meta-heuristics imitate natural processes: Simulated Annealing (SA, originating in physics), the Genetic Algorithm (GA, imitating the Darwinian process of natural selection) [1], Ant Colony Optimization (ACO, mimicking the behavior of foraging ants) [2], and Particle Swarm Optimization (PSO, modeling the flocking and schooling of birds) [3]. In particular, SA and GA have been practically successful, although they never guarantee reaching global optima within limited time.

This paper studies a group of stochastic processes frequently observed in physics and biology called Lévy flights [4][5]. The goal is to realize the claim by Gutowski that Lévy flights can be used in optimization [6]. A new class of meta-heuristic algorithms called LFO (Lévy Flights Optimization) is introduced. To our knowledge, similar work has not appeared in the optimization literature.

The rest of the paper is organized as follows. Sec. 2 outlines some fundamentals of Lévy flights. Sec. 3 introduces four new LFO algorithms. Sec. 4 reports experiments and results. The last section provides several possible outlooks.

2 Lévy flights

Extensive investigations of diffusion processes have revealed that some processes do not obey Brownian motion. One such class is enhanced diffusion, which has been

shown to comply with Lévy flights [5]. Lévy flights can be characterized by the following probability density function:

P(x) ~ |x|^(-1-β) as |x| → ∞, where 0 < β ≤ 2    (1)

Note that with β ≤ 0, the distribution in Eq. (1) cannot be normalized and therefore has no physical meaning, although our computation need not be concerned with this problem. For 0 < β < 1, the expectation does not exist. For the interest of this paper, we consider a random walk where the step length l obeys the distribution:

P(l) = (β/l_0) (1 + l/l_0)^(-(1+β))    (2)

This is a normalized version of [6] with the scale factor l_0 added, since it is more natural to think in terms of the physical dimension of the given space. This form preserves the property of the distribution in Eq. (1) for large l, but is much simpler to deal with for small l. The distribution is heavy-tailed, as shown in Fig. 1(a). It is not difficult to verify that l can be randomly generated as follows:

l = l_0 (1/U^(1/β) - 1)    (3)

where U is uniformly distributed in the interval [0, 1). Figs. 1(b-d) show Lévy flights on a 2D landscape with l_0 = 1. Note that the scale of Fig. 1(b) is much larger than that of Fig. 1(d) (by around two orders of magnitude). For small β, the random walker tends to stay crowded around a central location and occasionally jumps a very big step to a new location. As β increases, the probability of performing a long jump decreases. Note that Fig. 1(d) shows the random walk for β = 3, which is outside the range given in Eq. (1). However, for computational purposes, we need not be strictly consistent with physical laws.

3 Algorithms

3.1 Search mechanisms

The goal in optimization is to efficiently explore the search space in order to find the global optimum. Lévy flights provide a mechanism to achieve this goal. We use the term particle for the abstract entity that moves over the search landscape. From each position, the particle can fly to a new feasible position at a distance which is randomly generated from the Lévy flight distribution.

It is known that a good search algorithm often maintains a balance between local exploitation and global exploration [7]. Lévy flights offer a nice way to do this: as in Eq. (2), we control the frequency and the length of long jumps by adjusting the parameters β and l_0, respectively. Ideally, algorithms would dynamically tune these two parameters to best fit a given landscape.

There are several ways to formulate an algorithm that employs such Lévy-based distances. Firstly, Lévy flights define a manageable move strategy, which can include small local steps, global jumps and (or) a mix of the two. As the Lévy process can

(a) β = 1.5  (b) β = 0.5  (c) β = 1.5  (d) β = 3.0

Figure 1: Flight length distributions with various β.

occasionally generate long jumps, it can be employed in a multistart model. Secondly, one can use a single particle (as in Greedy search, SA and Tabu Search) [8, 9] or a population of particles (as in GA, Evolution Strategies, Genetic Programming, ACO and Scatter Search) [7] moving over the search space. The third way, and probably the most promising, is to combine the generic movement model provided by Lévy flights with other known single-solution or population-based meta-heuristics. Such combinations could result in hybrid algorithms more powerful than the originals [9].

3.2 LFO algorithms

Here we propose five algorithms under the class of Lévy Flights Optimization (LFO). We first provide brief descriptions; pseudocode for the first four algorithms is then given in Algorithms 1-4.

Basic LFO (LFO-B). This basic algorithm uses a set of particles at each generation. Starting from the best-known location, the algorithm generates a new generation at distances randomly distributed according to Lévy flights. The new generation is then evaluated to select the most promising member. The process is repeated until the stopping criteria are satisfied. The algorithm is quite close to GA in that it iterates through generations of a particle population, and at each generation it selects good individuals for the next step. However, the selection policy in LFO-B is quite simple: only the best of the population survives at each iteration.

Hybrid LFO with Local Search (LFO-LS). This is an extension of the LFO-B algorithm in which, before the selection is made, each particle performs its own search to reach a local optimum. The selection procedure then locates the best local optimum found so far, and the Lévy flight mechanism helps escape from such a trap. The choice of local search algorithm is left open.

Local Search with Multiple LFO Restarts (LFO-MLS). This can be viewed as a sequential version of the LFO-LS algorithm and actually behaves as a multi-start Local Search. Here only one particle is used. In each iteration, the particle runs local search until it gets trapped in a local optimum. To escape from the trap, it jumps to a new location by a step generated from the Lévy distribution. The jump is immediately accepted without further selection.

LFO with Iterated Local Search (LFO-ILS). This is very similar to LFO-MLS except for the local-minima escape strategy. Instead of immediately accepting the Lévy-based jump, the algorithm tries a number of jumps until a better solution is found. This behavior is identified as Iterated Local Search [10].

LFO-MLS + SA (LFO-SA). This is a combination of LFO-MLS and SA separated in time: LFO-MLS is run first, then SA takes its solution as the starting point. The idea is that the LFO algorithm may locate a good solution quickly, while SA improves its quality in the long run.
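All five algorithms rely on the same Lévy-based jump length. As a quick illustration (our sketch, not the paper's code), the step length of Eq. (3) in Sec. 2 can be sampled by inverting the cumulative distribution of Eq. (2); the guard against U = 0 is our own implementation choice:

```python
import random

def levy_step(l0: float, beta: float) -> float:
    """Sample a step length from the distribution of Eq. (2) by
    inverting its CDF, as in Eq. (3): l = l0 * (1/U**(1/beta) - 1)."""
    u = max(random.random(), 1e-12)  # U uniform in [0, 1); guard against U == 0
    return l0 * (u ** (-1.0 / beta) - 1.0)

# Smaller beta gives a heavier tail, hence occasional very long jumps.
random.seed(0)
for beta in (0.5, 1.5, 3.0):
    longest = max(levy_step(1.0, beta) for _ in range(100_000))
    print(f"beta = {beta}: longest of 100k steps = {longest:.1f}")
```

This makes the exploration/exploitation control concrete: decreasing β (or increasing l_0) raises the frequency and length of long exploratory jumps.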

Algorithm 1 Basic LFO (LFO-B).
procedure LFO_B()
    init_position()
    while (stopping criteria not met)
        for each member in the new generation
            l ← Lévy_flights(l_0, β)
            jump_randomly_at_distance(l)
        endfor
        return_to_best_known_position()
    endwhile
endprocedure

Algorithm 2 Hybrid LFO with Local Search (LFO-LS).
procedure LFO_LS()
    init_position()
    while (stopping criteria not met)
        for each member in the new generation
            l ← Lévy_flights(l_0, β)
            jump_randomly_at_distance(l)
            perform_local_search()
        endfor
        return_to_best_known_position()
    endwhile
endprocedure

Algorithm 3 Local Search with Multiple LFO Restarts (LFO-MLS).
procedure LFO_MLS()
    init_position()
    while (stopping criteria not met)
        perform_local_search()
        l ← Lévy_flights(l_0, β)
        jump_randomly_at_distance(l)
    endwhile
endprocedure

Algorithm 4 LFO with Iterated Local Search (LFO-ILS).
procedure LFO_ILS()
    init_position()
    while (stopping criteria not met)
        perform_local_search()
        l ← Lévy_flights(l_0, β)
        while (not found better solution)
            jump_randomly_at_distance(l)
        endwhile
    endwhile
endprocedure
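To make Algorithm 3 concrete, here is a minimal Python sketch of LFO-MLS. This is our own illustration, not the paper's implementation: the local search is a toy greedy random descent, the objective is a stand-in sphere function, and out-of-bounds moves are simply stopped at the box edges.

```python
import math
import random

def levy_step(l0, beta):
    # Eq. (3): invert the CDF of the step-length distribution in Eq. (2).
    u = max(random.random(), 1e-12)
    return l0 * (u ** (-1.0 / beta) - 1.0)

def random_jump(x, length, lo, hi):
    # Jump `length` in a uniformly random direction; stop at the box edges.
    d = [random.gauss(0.0, 1.0) for _ in x]
    norm = math.sqrt(sum(di * di for di in d)) or 1.0
    return [min(hi, max(lo, xi + length * di / norm)) for xi, di in zip(x, d)]

def local_search(f, x, lo, hi, step=0.01, max_fail=50):
    # Toy greedy descent: accept small random moves that improve f.
    best, fbest, fails = x, f(x), 0
    while fails < max_fail:
        cand = random_jump(best, step, lo, hi)
        fc = f(cand)
        if fc < fbest:
            best, fbest, fails = cand, fc, 0
        else:
            fails += 1
    return best, fbest

def lfo_mls(f, dim, lo, hi, l0=1.0, beta=1.5, restarts=100):
    x = [random.uniform(lo, hi) for _ in range(dim)]
    x, fx = local_search(f, x, lo, hi)
    best, fbest = x, fx
    for _ in range(restarts):
        # Lévy escape: the jump is accepted unconditionally (no selection).
        x = random_jump(x, levy_step(l0, beta), lo, hi)
        x, fx = local_search(f, x, lo, hi)
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# Example: minimize the sphere function on [-5, 5]^2; val ends up near 0.
random.seed(1)
sol, val = lfo_mls(lambda x: sum(xi * xi for xi in x), dim=2, lo=-5.0, hi=5.0)
```

Note how the single particle keeps its current position across restarts, matching the paper's description that the Lévy jump is accepted without further selection, while a separate variable tracks the best solution found.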

4 Experiments and Results

4.1 Test problems

The test problems are f_0 of [11], f_2 (Rosenbrock's saddle) and f_5 (Shekel's foxholes) from De Jong's test function suite [1], Rastrigin's f_6 [12] and Keane's Bump [13].

f_0:

f_0(x_1, x_2, x_3, x_4) = Σ_{i=1}^{4} g_i(x_i)

g_i(x_i) = (t_i sgn(z_i) + z_i)^2 c d_i    if |x_i - z_i| < t_i
g_i(x_i) = d_i x_i^2                       otherwise

where:

z_i = ⌊|x_i / s_i| + 0.49999⌋ sgn(x_i) s_i
s_i = 0.2, t_i = 0.05
d_i = {1, 1000, 10, 100}
c = 0.15
-1000 ≤ x_i ≤ 1000.

f_2:

f_2(x_1, x_2, ..., x_N) = Σ_{i=1}^{N-1} [100 (x_i^2 - x_{i+1})^2 + (1 - x_i)^2]

where -2.048 ≤ x_i ≤ 2.048, i = 1, 2, ..., N.

f_5:

f_5(x_1, x_2) = 1 / (1/500 + Σ_{j=1}^{25} 1 / (j + Σ_{i=1}^{2} (x_i - a_ij)^6))

a_1j = {-32, -16, 0, 16, 32, -32, -16, 0, 16, 32, -32, -16, 0, 16, 32, -32, -16, 0, 16, 32, -32, -16, 0, 16, 32}
a_2j = {-32, -32, -32, -32, -32, -16, -16, -16, -16, -16, 0, 0, 0, 0, 0, 16, 16, 16, 16, 16, 32, 32, 32, 32, 32}
-65.536 ≤ x_i ≤ 65.536, i = 1, 2.

f_6:

f_6(x_1, x_2, ..., x_N) = 10N + Σ_{i=1}^{N} (x_i^2 - 10 cos(2πx_i))

where -5.12 ≤ x_i ≤ 5.12.
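As an aside, f_2 and f_6 can be implemented directly from the definitions just given; this sketch is ours, not the paper's code, with sanity checks at the known global minima:

```python
import math

def rosenbrock(x):
    # De Jong's f2: sum over i of 100*(x_i^2 - x_{i+1})^2 + (1 - x_i)^2.
    return sum(100.0 * (x[i] ** 2 - x[i + 1]) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    # f6: 10*N + sum over i of x_i^2 - 10*cos(2*pi*x_i).
    return 10.0 * len(x) + sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi)
                               for xi in x)

print(rosenbrock([1.0] * 10))  # global minimum at (1, ..., 1): 0.0
print(rastrigin([0.0] * 10))   # global minimum at (0, ..., 0): 0.0
```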

Bump:

f_BUMP(x_1, x_2, ..., x_N) = |Σ_{i=1}^{N} cos^4(x_i) - 2 Π_{i=1}^{N} cos^2(x_i)| / sqrt(Σ_{i=1}^{N} i x_i^2)

subject to:

0 < x_i < 10
Π_{i=1}^{N} x_i > 0.75
Σ_{i=1}^{N} x_i < 15N/2

Function f_0 is very difficult to minimize, with about 10^20 local minima, and f_2 is also considered hard despite being unimodal [11]. f_6 is interesting because of its sinuous component. According to Keane [13], the Bump is a seriously hard test function for optimization algorithms because the landscape surface is highly bumpy, with very similar peaks, and the global optimum is generally determined by the product constraint. Among these functions, f_0 and f_5 have fixed dimensions, while the rest can be set freely. In our tests, the dimension is set to 10 for f_2 and f_6 and to 50 for the Bump.

4.2 Algorithm implementation

We compare the proposed algorithms with the classic SA. This subsection provides greater detail on the algorithms' realization, which can be classified into move strategy, stopping criteria, and algorithm specifics.

Move strategy. Each move is made in a random direction spanning all dimensions of the search space, with move length measured in the Euclidean metric. For infeasible moves into the region outside the bounded space, two strategies are used: (i) the move selection is repeated until a feasible one is found, and (ii) the move is stopped at the edge. The first strategy is quite intuitive, but it may result in many repetitions in the case of long Lévy flights. The second gives a chance to explore the boundary regions, in which high-quality solutions may be found.

Stopping criteria. The main stopping criteria used are time and the quality of the best solution found so far (compared with the known optimum). Another stopping criterion is the number of non-improving moves. This can be either the number of steps in neighborhood exploitation or the number of jumps in global exploration. The first is used in greedy search, while the second applies to the multiple-restart type.

Algorithm specifics.
In SA, the initial temperature T_0 is chosen as 10% of the value of a randomly generated initial solution, while the stopping temperature T_s is fixed at 0.0001. Although there is no exact justification for this choice, it is based on the authors' experience with SA, such that the transition probability is close to 1 at the beginning and very close to 0 at the end of each run. The cooling schedule is of the widely used power type: T(t) = r^t T_0, where r = e^(ln(T_s/T_0)/t_m) and t_m is the maximum allocated run time. In all LFO algorithms, we use the power index β = 1.5. The number of jumps is set at 100 for the LFO-B and LFO-MLS algorithms, while the jump distance is limited to half of the largest of the search space's dimensions.
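The cooling factor above follows from requiring T(t_m) = T_s. A quick sketch, with illustrative values for T_0, T_s and t_m (variable names are ours):

```python
import math

def cooling_factor(T0: float, Ts: float, t_max: float) -> float:
    # Solve r**t_max * T0 = Ts for r, i.e. r = exp(ln(Ts/T0) / t_max).
    return math.exp(math.log(Ts / T0) / t_max)

T0, Ts, t_max = 10.0, 1e-4, 1000
r = cooling_factor(T0, Ts, t_max)
print(r)                # a constant slightly below 1
print(r ** t_max * T0)  # recovers Ts = 1e-4 (up to rounding)
```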

Figure 2: Test results for f_0.

4.3 Results and Discussion

The tests were run on a 2.5GHz Intel computer within given periods of time. Every test was run repeatedly; the best found solutions were sampled at relevant intervals and then averaged. The numbers of replications were 100 for f_0, f_2 and f_5, 20 for f_6 and 10 for the Bump.

The algorithms' performance on the five test problems is presented in Figs. 2-6. In all cases, some of the LFO algorithms outperform SA, although SA appears to improve its performance in long runs. This is a proven strength of SA, but it is impractical if the required time is too long for everyday use. The LFO algorithms tend to find good-quality solutions (compared to the initial random solution) very quickly, in most cases within a second. The only exception is the Bump problem, where SA seems to work best after a significant time.

Although there are not enough representative test cases to draw statistical conclusions about the power of the LFO algorithms over SA, there are several interesting points to note. Firstly, the Lévy distribution portrays well the distribution of flight lengths and times performed by foragers observed in nature [5]. In the process of foraging over a given landscape, the Lévy distribution helps reduce the probability of returning to previously visited locations, compared with the normal distribution. This is particularly advantageous in memoryless algorithms like basic stochastic SA or GA, where there are no built-in mechanisms to avoid revisits.

Secondly, as mentioned earlier in this paper, the LFO algorithms work under the assumption that good-quality solutions can be found around local minima. This property holds in most of our test cases, where the experiments favor our algorithms. However, in the Bump problem this "big valley" hypothesis does not hold, because the global optimum is located on the search-space boundary. This helps explain why the LFO algorithms fail to match the Bump landscape structure but succeed in the other cases.

Finally, the hybrid LFO-SA appears to work as well as or better than SA in all cases, while keeping search power similar to the other LFO algorithms. LFO-SA behaves exactly like LFO-MLS in the short run (on f_0, f_2, f_5 and f_6; see Figs. 2-5) and

Figure 3: Test results for f_2.

Figure 4: Test results for f_5.

Figure 5: Test results for f_6.

Figure 6: Test results for the Bump.

like SA in the long run (on the Bump, see Fig. 6).

5 Conclusions & outlooks

This paper has proposed a set of optimization algorithms based on Lévy flights. The algorithms were validated against Simulated Annealing on several hard continuous test functions. The experiments demonstrated that LFO (Lévy Flights Optimization) can be superior to SA.

The algorithms implemented in this paper are rather basic, and there is room for further extension. For example, the two main parameters, the mean Lévy distance and the power index, could be adjusted dynamically during the run. One might also extend the idea of Particle Swarm Optimization (PSO) so that the whole population keeps flying continuously while dynamically adjusting speed and direction based on information collected so far. Each flying particle could even be treated as an autonomous agent involved in collective intelligent decision making. For those familiar with GA, it is possible to extend GA practice so that at each generation we select a set of particles, instead of only the best one, to form the next generation.

References

[1] K. A. De Jong, "An analysis of the behavior of a class of genetic adaptive systems," Ph.D. dissertation, University of Michigan, 1975.

[2] M. Dorigo, V. Maniezzo, and A. Colorni, "Ant system: optimization by a colony of cooperating agents," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 26, no. 1, pp. 29-41, 1996.

[3] R. C. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1. IEEE, 2001, pp. 81-86.

[4] F. Bardou, "Cooling gases with Lévy flights: using the generalized central limit theorem in physics," arXiv preprint physics/0012049, 2000.

[5] G. Viswanathan, S. V. Buldyrev, S. Havlin, M. Da Luz, E. Raposo, and H. E. Stanley, "Optimizing the success of random searches," Nature, vol. 401, no. 6756, pp. 911-914, 1999.

[6] M. Gutowski, "Lévy flights as an underlying mechanism for global optimization algorithms," arXiv preprint math-ph/0106003, 2001.

[7] C. Blum and A. Roli, "Metaheuristics in combinatorial optimization: Overview and conceptual comparison," ACM Computing Surveys (CSUR), vol. 35, no. 3, pp. 268-308, 2003.

[8] F. Glover and M. Laguna, Tabu Search. Springer, 1999.

[9] E. Talbi, "A taxonomy of hybrid metaheuristics," Journal of Heuristics, vol. 8, no. 5, pp. 541-564, 2002.

[10] H. R. Lourenço, O. C. Martin, and T. Stützle, "Iterated local search," International Series in Operations Research and Management Science, no. 57, pp. 321-354, 2003.

[11] A. Corana, M. Marchesi, C. Martini, and S. Ridella, "Minimizing multimodal functions of continuous variables with the simulated annealing algorithm," ACM Transactions on Mathematical Software (TOMS), vol. 13, no. 3, pp. 262-280, 1987.

[12] L. D. Whitley, K. E. Mathias, S. B. Rana, and J. Dzubera, "Building better test functions," in ICGA, 1995, pp. 239-247.

[13] A. J. Keane, "A brief comparison of some evolutionary optimization methods," 1996.