A Hybrid Fireworks Optimization Method with Differential Evolution Operators


YuJun Zheng (a), XinLi Xu (a), HaiFeng Ling (b)

(a) College of Computer Science & Technology, Zhejiang University of Technology, Hangzhou 310023, China
(b) Department of Mechanical Engineering, PLA University of Science & Technology, Nanjing 210007, China

Corresponding author: YuJun Zheng. Tel.: +86-571-85290085. Email address: yujun.zheng@computer.org

Preprint submitted to Neurocomputing, 28 March 2012

Abstract

Fireworks algorithm (FWA) is a relatively new swarm-based metaheuristic for global optimization. The algorithm is inspired by the phenomenon of fireworks display and has shown promising performance on a number of benchmark functions. However, in the sense of swarm intelligence, the individuals, including fireworks and sparks, are not well informed by the whole swarm. In this paper we develop an improved version of the FWA by combining it with the differential evolution (DE) operators: mutation, crossover, and selection. At each iteration of the algorithm, most of the newly generated solutions are updated under the guidance of two different vectors that are randomly selected from highly ranked solutions, which greatly increases information sharing among the individual solutions. Experimental results show that the DE operators can improve diversity and avoid prematurity effectively, and that the hybrid method outperforms both the FWA and the DE on the selected benchmark functions.

Key words: Fireworks algorithm (FWA), Differential evolution (DE), Global optimization

1 Introduction

The complexity of real-world engineering optimization problems gives rise to various kinds of metaheuristics that use stochastic techniques to effectively explore the search space for a global optimum. In particular, metaheuristics based on swarm intelligence (e.g., [1-7]), which simulate a population of simple individuals evolving their solutions by interacting with one another and with the environment, have shown promising performance on many difficult problems and have become a very active research area in recent years.

Fireworks algorithm (FWA) is a relatively new global optimization method originally proposed by Tan and Zhu [7]. Inspired by the phenomenon of fireworks explosion, the algorithm selects in the search space a certain number of locations, each for exploding a firework to generate a set of sparks; the fireworks and sparks with good performance are chosen as the locations of the next generation's fireworks, and the evolutionary process continues until a desired optimum is obtained or the stopping criterion is met. Numerical experiments on a number of benchmark functions show that the FWA can converge to a global optimum with a much smaller number of function evaluations than particle swarm optimization (PSO) algorithms [1,8].

In the standard FWA, the convergence speed is accelerated by good fireworks, which generate more sparks within smaller explosion areas, and the search diversity is improved by bad fireworks, which generate fewer sparks within larger explosion areas. However, such a diversification mechanism is not very flexible and, in particular, it does not utilize information about other high-quality solutions in the swarm. That is, in the sense of swarm intelligence, the individuals (fireworks and sparks) are not well informed by the whole swarm.

Inspired by this observation, we develop an improved fireworks optimization method by combining the FWA with the differential evolution (DE) operators: mutation, crossover, and selection [9]. At each iteration of the algorithm, these operators are applied to guide the generation of new solutions, which improves the diversity of the swarm and avoids being trapped in local optima too early. Experiments on selected benchmark functions show that the well-informed fireworks and sparks can improve the performance of the FWA to a great extent.

The remainder of this paper is structured as follows: Section 2 briefly describes the FWA and the DE algorithm, Section 3 presents the framework of our hybrid method, the performance of which is demonstrated by the experimental results in Section 4. Section 5 concludes with discussion.

2 Background

2.1 Fireworks Algorithm

In the FWA presented in [7], the number of sparks and the amplitude of explosion for each firework $x_i$ are respectively defined as follows:

$$s_i = m \cdot \frac{f_{max} - f(x_i) + \epsilon}{\sum_{j=1}^{p}(f_{max} - f(x_j)) + \epsilon} \qquad (1)$$

$$A_i = \hat{A} \cdot \frac{f(x_i) - f_{min} + \epsilon}{\sum_{j=1}^{p}(f(x_j) - f_{min}) + \epsilon} \qquad (2)$$

where $m$ and $\hat{A}$ are control parameters, $p$ is the size of the swarm, $f_{max}$ and $f_{min}$ are respectively the maximum and minimum objective values among the $p$ fireworks, and $\epsilon$ is a small constant to avoid division by zero.

To avoid the overwhelming effects of splendid fireworks, lower and upper bounds are defined for $s_i$ such that:

$$\hat{s}_i = \begin{cases} s_{min} & \text{if } s_i < s_{min} \\ s_{max} & \text{else if } s_i > s_{max} \\ s_i & \text{else} \end{cases} \qquad (3)$$

For a $D$-dimensional problem, the location of each spark $x_j$ generated by $x_i$ is obtained by randomly selecting $z$ directions ($z < D$) and, for each selected dimension $k$, setting the component $x_j^k$ based on $x_i^k$ ($1 \le j \le s_i$, $1 \le k \le z$). There are two ways of setting $x_j^k$. For most sparks, a displacement $h^k = A_i \cdot rand(-1, 1)$ is added to $x_i^k$, i.e.:

$$x_j^k = x_i^k + A_i \cdot rand(-1, 1) \qquad (4)$$

To keep the diversity, for a few specific sparks an explosion coefficient based on a Gaussian distribution is applied to $x_i^k$ such that:

$$x_j^k = x_i^k \cdot Gaussian(1, 1) \qquad (5)$$

In both ways, if the new location falls out of the search space, it is mapped back to the search space as follows:

$$x_j^k = x_{min}^k + x_j^k \,\%\, (x_{max}^k - x_{min}^k) \qquad (6)$$

where % denotes the modulo operator for floating-point numbers, as defined in most computer languages.

At each iteration of the FWA, among all the current sparks and fireworks, the best location is always selected as a firework of the next generation. After that, $p - 1$ fireworks are selected with probabilities proportional to their distances to the other locations. The general framework of the FWA is described in Algorithm 1.

Algorithm 1. The standard fireworks algorithm.
  set the algorithm parameters p, m, s_min, s_max, A_hat, and m_hat;
  randomly initialize a swarm S of p fireworks;
  while (the stop criterion is not met) do
    let R be the empty set of sparks;
    foreach firework x_i in S do
      calculate s_i for x_i according to Equations (1) and (3);
      calculate A_i for x_i according to Equation (2);
      for j = 1 to s_i do
        yield a spark x_j;
        let z = round(D * rand(0, 1));
        for k = 1 to z do
          set x_j^k according to Equations (4) and (6);
        R = R + {x_j};
    randomly select a set P of m_hat fireworks from S;
    foreach firework x_i in P do
      yield a spark x_j;
      let z = round(D * rand(0, 1));
      for k = 1 to z do
        set x_j^k according to Equations (5) and (6);
      R = R + {x_j};
    R = R + S;
    let gbest be the best location in R, and set S = {gbest};
    add to S the other p - 1 locations selected from R based on distance probabilities;
  end
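To make the explosion step concrete, the following minimal Python sketch implements Equations (1)-(4) and (6) for a minimization problem. The function and parameter names (explosion, lo, hi, A_hat) are ours, not from [7]; the default parameter values are taken from the experimental settings in Section 4, and the absolute value in the out-of-bounds mapping follows the original FWA's convention rather than the bare form of Equation (6).

```python
import numpy as np

def explosion(X, f_vals, lo, hi, m=50, s_min=2, s_max=40, A_hat=40.0, eps=1e-38):
    """One FWA explosion step over fireworks X (shape (p, D)), minimizing f.

    f_vals holds f(x_i) for each firework; lo and hi are length-D arrays of
    search bounds. Returns the list of generated sparks.
    """
    rng = np.random.default_rng()
    p, D = X.shape
    f_max, f_min = f_vals.max(), f_vals.min()
    # Eq. (1): better (lower-f) fireworks get more sparks ...
    s = m * (f_max - f_vals + eps) / ((f_max - f_vals).sum() + eps)
    s = np.clip(np.round(s), s_min, s_max).astype(int)        # Eq. (3)
    # Eq. (2): ... and explode within smaller amplitudes
    A = A_hat * (f_vals - f_min + eps) / ((f_vals - f_min).sum() + eps)
    sparks = []
    for i in range(p):
        for _ in range(s[i]):
            x = X[i].copy()
            z = rng.integers(1, D + 1)                        # number of affected dimensions
            dims = rng.choice(D, size=z, replace=False)
            x[dims] += A[i] * rng.uniform(-1, 1, size=z)      # Eq. (4)
            out = (x < lo) | (x > hi)                         # Eq. (6): map strays back in range
            x[out] = lo[out] + np.abs(x[out]) % (hi[out] - lo[out])
            sparks.append(x)
    return sparks
```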

2.2 Differential Evolution

Introduced by Storn and Price [9], DE is an efficient evolutionary algorithm that simultaneously evolves a population of solution vectors. Unlike the genetic algorithm (GA) [10], DE uses floating-point vectors and does not employ probability density functions for vector reproduction. Specifically, DE generates a mutant vector $v_i$ for each vector $x_i$ in the population by adding the weighted difference between two randomly selected vectors to a third one:

$$v_i = x_{r_1} + \gamma(x_{r_2} - x_{r_3}) \qquad (7)$$

where the random indexes $r_1, r_2, r_3 \in \{1, 2, \ldots, p\}$ and the coefficient $\gamma > 0$. A trial vector $u_i$ is then generated by the crossover operator, which mixes the components of the mutant vector and the original one, where each $j$th component of $u_i$ is determined as follows:

$$u_i^j = \begin{cases} v_i^j & \text{if } rand(0, 1) < c_r \text{ or } j = r(i) \\ x_i^j & \text{else} \end{cases} \qquad (8)$$

where $c_r$ is the crossover probability ranging in $(0, 1)$ and $r(i)$ is a random dimension index within $(0, D]$ chosen for each $i$, which guarantees that at least one component is taken from the mutant vector.

In the last step of each iteration, the selection operator chooses the better vector for the next generation by comparing $u_i$ with $x_i$:

$$x_i = \begin{cases} u_i & \text{if } f(u_i) \le f(x_i) \\ x_i & \text{else} \end{cases} \qquad (9)$$
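As a companion sketch, one DE generation per Equations (7)-(9), minimizing f. This is a minimal illustration in our own naming, with the common convention that $r_1, r_2, r_3$ are distinct from $i$:

```python
import numpy as np

def de_step(X, f, gamma=0.5, c_r=0.9, rng=None):
    """One DE generation over population X (shape (p, D)), minimizing f."""
    rng = rng or np.random.default_rng()
    p, D = X.shape
    for i in range(p):
        # Eq. (7): mutant vector from three distinct random population members
        r1, r2, r3 = rng.choice([j for j in range(p) if j != i], size=3, replace=False)
        v = X[r1] + gamma * (X[r2] - X[r3])
        # Eq. (8): binomial crossover; j_rand forces at least one mutant component
        j_rand = rng.integers(D)
        mask = rng.random(D) < c_r
        mask[j_rand] = True
        u = np.where(mask, v, X[i])
        # Eq. (9): greedy selection keeps the better of trial and target
        if f(u) <= f(X[i]):
            X[i] = u
    return X
```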

3 The Hybrid Fireworks Optimization Method

For a $D$-dimensional optimization problem, the fitness value of a solution is determined by the values of all its components, and a solution that has discovered the region corresponding to the global optimum in some dimensions may still have a low fitness value because of poor quality in the other dimensions [11]. Thus, many population-based optimization methods, including DE, comprehensive learning PSO [11], and fully informed PSO [12], enable the individuals to make more effective use of the beneficial information in the swarm to generate better quality solutions.

In the standard FWA, after obtaining the set $R$ of all fireworks and sparks, the locations of the new fireworks are selected based on their distances to the other locations in $R$, so as to keep the diversity of the swarm. Here we introduce the DE operators into the FWA to achieve this purpose. In the hybrid algorithm, after obtaining the set $R$ of locations, we first sort the locations in increasing order of fitness and create a set $S$ of candidate locations, which contains the first (best) location in $R$ and $p - 1$ other locations randomly selected from the top $2p$ locations in $R$. Then the standard DE process is applied to $S$, except to its best vector.

In detail, the following procedure is used to replace the last two lines of Algorithm 1 (a code sketch of these steps is given at the end of this section):

(1) sort $R$ in increasing order of vector fitness;
(2) let $S = \{R[1]\}$, $R = R \setminus \{R[1]\}$;
(3) truncate the length of $R$ to $2p - 1$;
(4) randomly select $p - 1$ locations from $R$ and add them to $S$, where the selection probability of each $x \in R$ is $f(x) / \sum_{z \in R} f(z)$;
(5) for $i = 2$ to $p$, apply the DE mutation, crossover, and selection operators to $S[i]$.

Generally speaking, the above procedure helps to improve the algorithm in the following aspects:

- For the high quality (top $2p$) vectors in $R$, each of them has an opportunity to influence the new fireworks of the next generation, through the roulette-wheel selection and the DE selection operations.
- For the candidate fireworks in $S$, each of them has an opportunity to be informed by existing high quality vectors at each dimension, through the DE mutation and crossover operations. In particular, the mutation operator makes the difference of two random vectors act as a search direction for a third one [13].
- In comparison with large-amplitude explosion and distance-based selection, the DE mutation operation is more effective in improving the probability of obtaining the global optimum, whilst requiring less computational cost.
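The replacement selection step might look as follows in Python. This is a sketch under our reading of steps (1)-(5), reusing de_step from Section 2.2, and taking the fitness-proportional weights $f(x)/\sum f(z)$ literally, which presumes positive objective values:

```python
import numpy as np

def select_next_fireworks(R, f, p, gamma=0.5, c_r=0.9, rng=None):
    """Hybrid replacement for the last two lines of Algorithm 1.

    R is the list of all current fireworks and sparks; f is minimized.
    Returns the p fireworks of the next generation.
    """
    rng = rng or np.random.default_rng()
    R = sorted(R, key=f)                              # step (1): best first
    S = [R[0]]                                        # step (2): always keep the best
    rest = R[1:2 * p]                                 # step (3): top 2p - 1 of the rest
    # step (4): roulette-wheel selection with probability f(x) / sum(f(z)),
    # as written in the paper (assumes f(x) > 0 over the candidate set)
    w = np.array([f(x) for x in rest], dtype=float)
    idx = rng.choice(len(rest), size=p - 1, replace=False, p=w / w.sum())
    S += [rest[i] for i in idx]
    # step (5): DE mutation/crossover/selection on every member but the best
    X = np.array(S)
    X[1:] = de_step(X.copy(), f, gamma, c_r, rng)[1:]
    return list(X)
```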

4 Computational Experiments

We choose a set of well-known test functions as benchmark problems, whose definitions are listed in the Appendix. All the functions are tested on 30, 50, and 100 dimensions respectively. The performance of the hybrid algorithm is evaluated in comparison with the standard FWA and the standard DE. The parameters of the DE are set as in Ref. [9]. For the FWA, we set $p = 5$, $m = 50$, $s_{min} = 2$, $s_{max} = 40$, $\hat{m} = 5$, and $\hat{A} = \min_{1 \le k \le D}(x_{max}^k - x_{min}^k)/5$ (where $[x_{min}^k, x_{max}^k]$ is the search range of the $k$th dimension of the benchmark function), as suggested in Ref. [7]. For the hybrid algorithm, since the DE operators are introduced to improve solution diversity, we decrease the values of three parameters such that $m = 25$, $s_{max} = 20$, and $\hat{A} = \min_{1 \le k \le D}(x_{max}^k - x_{min}^k)/7$.

For all the algorithms, the mean best value and the success rate over 40 runs are used as performance measures. The maximum number of function evaluations (NFEs) is set to 10000 for the 30-dimensional functions, 20000 for the 50-dimensional functions, and 40000 for the 100-dimensional functions. The search ranges, optimal points, corresponding optimal fitness values, and required accuracies ($a_r$) of the benchmark functions are shown in Table 1. If a single run obtains a result value in the range $[(1 - a_r)f(x^*), (1 + a_r)f(x^*)]$, it is considered a successful run. The $a_r$ values are set mainly as suggested in Ref. [14].

Table 1. Detailed information of the benchmark functions used in the paper.

Function    | Search range        | x*                    | f(x*) | a_r
------------|---------------------|-----------------------|-------|----------
Sphere      | [-100, 100]^D       | [0, 0, ..., 0]        | 0     | 0.000001
Rosenbrock  | [-2.048, 2.048]^D   | [1, 1, ..., 1]        | 0     | 0.01
Schwefel    | [-500, 500]^D       | [420.96, ..., 420.96] | 0     | 0.01
Rastrigin   | [-5.12, 5.12]^D     | [0, 0, ..., 0]        | 0     | 0.01
Griewank    | [-600, 600]^D       | [0, 0, ..., 0]        | 0     | 0.01
Ackley      | [-32.768, 32.768]^D | [0, 0, ..., 0]        | 0     | 0.000001

The experimental results on the 30, 50, and 100 dimensional functions are summarized in Tables 2, 3, and 4 respectively, and the average convergence speeds of the algorithms are illustrated in Figs. 1-3. As we can see, on all the benchmark problems the performance of the hybrid algorithm is better than (or at least not worse than) that of the standard FWA and DE. In the sense of required accuracy, except for the 100-D Schwefel function, the hybrid algorithm successfully achieves the global optimum on all the other functions. In particular we observe that:

- For the Sphere, Rastrigin, and Griewank functions, where the FWA has significant performance advantages over the DE, the introduction of the DE operators does not degrade the performance of the hybrid algorithm.
- For the Rosenbrock, Schwefel, and Ackley functions, where the performance of the DE is not far from (or may be better than) that of the FWA, the introduction of the DE operators contributes to the performance improvement of the hybrid algorithm to a great extent.

Moreover, the advantage of the hybrid algorithm is more obvious on the 100-D problems than on the 50-D and 30-D problems, which shows that the hybrid algorithm is scalable and efficient on high-dimensional problems.
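Since $f(x^*) = 0$ for all six benchmarks, the interval criterion above degenerates to a single point; the natural reading, used in the small sketch below (our interpretation, with names of our choosing), is that a run succeeds when its best value lies within $a_r$ of $f(x^*)$:

```python
def success_rate(best_values, f_star=0.0, a_r=0.01):
    """Percentage of runs whose final best value lies within a_r of f(x*)."""
    hits = sum(abs(v - f_star) <= a_r for v in best_values)
    return 100.0 * hits / len(best_values)
```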

Table 2. The experimental results on the 30-D benchmark problems.

            |             mean best             |  success rate (%)
Function    | FWA       | DE        | Hybrid    | FWA  | DE   | Hybrid
------------|-----------|-----------|-----------|------|------|-------
Sphere      | 0         | 1.077E-39 | 0         | 100  | 100  | 100
Rosenbrock  | 4.209E-02 | 9.731E-01 | 8.966E-04 | 80   | 35   | 100
Schwefel    | 4.827E+00 | 1.133E+01 | 4.699E-01 | 10   | 0    | 32.5
Rastrigin   | 0         | 1.208E-07 | 0         | 100  | 100  | 100
Griewank    | 1.228E-11 | 1.985E-03 | 0         | 100  | 75   | 100
Ackley      | 4.441E-16 | 2.565E-15 | 4.441E-16 | 100  | 100  | 100

Table 3. The experimental results on the 50-D benchmark problems.

            |             mean best             |  success rate (%)
Function    | FWA       | DE        | Hybrid    | FWA  | DE   | Hybrid
------------|-----------|-----------|-----------|------|------|-------
Sphere      | 0         | 3.383E-36 | 0         | 100  | 100  | 100
Rosenbrock  | 3.080E-01 | 1.115E+00 | 1.750E-03 | 57.5 | 5    | 85
Schwefel    | 9.245E+01 | 1.293E+01 | 6.883E-01 | 0    | 0    | 25
Rastrigin   | 0         | 4.244E-07 | 0         | 100  | 100  | 100
Griewank    | 1.541E-11 | 2.480E-03 | 0         | 100  | 80   | 100
Ackley      | 4.441E-16 | 4.752E-13 | 4.441E-16 | 100  | 100  | 100

Table 4. The experimental results on the 100-D benchmark problems.

            |              mean best             |  success rate (%)
Function    | FWA        | DE        | Hybrid    | FWA  | DE   | Hybrid
------------|------------|-----------|-----------|------|------|-------
Sphere      | 7.474E-187 | 1.627E-26 | 0         | 100  | 100  | 100
Rosenbrock  | 3.044E+01  | 4.036E+00 | 5.270E-02 | 0    | 0    | 40
Schwefel    | 2.062E+02  | 3.430E+01 | 4.100E+00 | 0    | 0    | 0
Rastrigin   | 0          | 3.526E-06 | 0         | 100  | 92.5 | 100
Griewank    | 4.380E-08  | 5.288E-03 | 0         | 100  | 50   | 100
Ackley      | 4.441E-16  | 8.089E-08 | 4.441E-16 | 100  | 70   | 100

[Fig. 1. The averaged convergence curves on the 30-D test problems: (a) Sphere, (b) Rosenbrock, (c) Schwefel, (d) Rastrigin, (e) Griewank, (f) Ackley.]

5 Conclusion

The FWA has been shown to be one of the best performing algorithms for global optimization problems. This paper improves the standard FWA by introducing the DE operators for increasing the information sharing among the fireworks and sparks and for diversifying the search process. Experimental results show that the hybrid method performs better than both the FWA and the DE. We believe such an improvement can bring significant advantages to practical engineering applications. In future work, we will study further relationships between the swarm-based optimization methods and seek more hybridization strategies for potential improvement of those methods.

[Fig. 2. The averaged convergence curves on the 50-D test problems: (a) Sphere, (b) Rosenbrock, (c) Schwefel, (d) Rastrigin, (e) Griewank, (f) Ackley.]

Acknowledgment

This work was supported by grants from the National Natural Science Foundation of China (Nos. 61105073, 61103140, 61173096, and 61020106009). We are grateful to the anonymous reviewers of ICSI 2012 for their valuable suggestions for improving the paper.

[Fig. 3. The averaged convergence curves on the 100-D test problems: (a) Sphere, (b) Rosenbrock, (c) Schwefel, (d) Rastrigin, (e) Griewank, (f) Ackley.]

Appendix: Definitions of the test functions

(1) Sphere function

$$f_1(x) = \sum_{i=1}^{D} x_i^2$$

(2) Rosenbrock's function

$$f_2(x) = \sum_{i=2}^{D} \left(100(x_i - x_{i-1}^2)^2 + (x_{i-1} - 1)^2\right)$$

(3) Schwefel's function

$$f_3(x) = 418.9829\,D - \sum_{i=1}^{D} x_i \sin(|x_i|^{1/2})$$

(4) Rastrigin's function

$$f_4(x) = \sum_{i=1}^{D} \left(x_i^2 - 10\cos(2\pi x_i) + 10\right)$$

(5) Griewank's function

$$f_5(x) = \sum_{i=1}^{D} \frac{x_i^2}{4000} - \prod_{i=1}^{D} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$$

(6) Ackley's function

$$f_6(x) = -20\exp\left(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}\right) - \exp\left(\frac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right) + 20 + e$$
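For reference, vectorized NumPy versions of the six functions, a direct transcription of the definitions above (x is a length-D array; each function has global minimum f(x*) = 0):

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def rosenbrock(x):
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1) ** 2)

def schwefel(x):
    return 418.9829 * x.size - np.sum(x * np.sin(np.sqrt(np.abs(x))))

def rastrigin(x):
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

def ackley(x):
    d = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)
```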

References

[1] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 1995, pp. 1942-1948.

[2] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Trans. Syst. Man Cybernet. B 26 (1996) 29-41.

[3] X. Li, A new intelligent optimization method: artificial fish school algorithm, Doctoral thesis, Zhejiang University, Hangzhou, China, 2003.

[4] Y. Liu, Z. Qin, Z. Shi, J. Lu, Center particle swarm optimization, Neurocomputing 70 (2007) 672-679.

[5] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm, J. Global Optim. 39 (2007) 459-471.

[6] C.J.A.B. Filho, F.B. de Lima Neto, A.J.C.C. Lins, A.I.S. Nascimento, M.P. Lima, A novel search algorithm based on fish school behavior, in: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Singapore, 2008, pp. 2646-2651.

[7] Y. Tan, Y. Zhu, Fireworks algorithm for optimization, in: Proceedings of the 2nd International Conference on Swarm Intelligence, Lect. Notes Comput. Sci. 6145 (2010) 355-364. doi:10.1007/978-3-642-13495-1_44

[8] Y. Tan, Z.M. Xiao, Clonal particle swarm optimization and its applications, in: Proceedings of the IEEE Congress on Evolutionary Computation, Singapore, 2007, pp. 2303-2309.

[9] R. Storn, K. Price, Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (1997) 341-369.

[10] J. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975.

[11] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans. Evol. Comput. 10 (2006) 281-295.

[12] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: Simpler, maybe better, IEEE Trans. Evol. Comput. 8 (2004) 204-210.

[13] Y.C. Lin, K.S. Hwang, F.S. Wang, Co-evolutionary hybrid differential evolution for mixed-integer optimization problems, Engineering Optim. 33 (2001) 663-682.

[14] M. Clerc, A method to improve standard PSO, Technical Report DRAFT MC2009-03-13, France Telecom R&D, Paris, 2009.