Journal of Computational and Applied Mathematics 209 (2007) 160–166
www.elsevier.com/locate/cam

A heuristic approach to find the global optimum of function

M. Duran Toksarı
Engineering Faculty, Industrial Engineering Department, Erciyes University, 38039 Kayseri, Turkey

Received 23 November 2005; received in revised form 23 October 2006

Abstract

This paper presents a modified ant colony optimization (MACO) based algorithm to find the global optimum. The algorithm is based on restricting the solution space of the problem by the best solution of the previous iteration. Furthermore, in the proposed algorithm all variables of the problem are optimized concurrently. The algorithm was tested on standard test functions, and successful results were obtained; its performance was compared with that of other algorithms and observed to be better.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Global optimum; Ant colony optimization; Metaheuristics

1. Introduction

Ant colony optimization (ACO) takes inspiration from the behaviour of real ant colonies to solve optimization problems. ACO belongs to the class of biologically inspired heuristics [12]. Recently, ACO has been proposed to solve combinatorial optimization problems, such as the travelling salesman problem [2], the layout problem [10], the flow-shop scheduling problem [9], etc.

The value x_min is defined as the point that makes the function minimum, i.e. F(x_min) <= F(x) for all values x of the function F(x). The function may have many local minimum points, but only one of them is the global minimum. While the minimum value of a continuous and differentiable function can be found at the point where dF/dx = 0, for a non-differentiable function stochastic methods for finding the minimum value can be more advantageous. Many heuristic methods, such as an ACO based algorithm [13], the adaptive random search technique (ARSET) [4], heuristic random optimization (HRO) [8] and the David Fletcher method, were developed to find the global minimum.
In this paper, an algorithm called modified ant colony optimization (MACO) is obtained by modifying the ACO based algorithm proposed by Toksari [13]. The superiority of MACO is that the variables of the problem are optimized concurrently, whereas the ACO based algorithm used to find the global minimum optimizes them one by one. The basic idea of MACO is that the solution space is restricted by the best solution of the previous iteration.

The paper is organized as follows. General ACO is briefly described in Section 2. In Section 3, the proposed MACO based algorithm for finding the global optimum is explained in detail. Section 4 presents the results of the algorithm on eight benchmark problems, where the proposed algorithm is compared with other heuristic methods.

E-mail address: dtoksari@erciyes.edu.tr.
doi:10.1016/j.cam.2006.10.074
Step 1. Initialization: initialize the pheromone trail.
Step 2. Solution construction: construct solutions using the pheromone trail.
Step 3. Pheromone update: update the pheromone trail.

Fig. 1. A generic ACO algorithm.

2. Ant colony optimization (ACO)

ACO has been inspired by the behaviour of real ant colonies. One of its main ideas is the indirect communication among the individuals of a colony of agents, called ants, based on an analogy with trails of a chemical substance, called pheromone, which real ants use for communication [11]. Recently, ACO has been successfully applied to several NP-hard combinatorial optimization problems. The general ACO algorithm is illustrated in Fig. 1. The procedure of the ACO algorithm manages the scheduling of three activities [12,1]. The first step consists mainly of the initialization of the pheromone trail. In the second step, each ant constructs a complete solution to the problem according to a probabilistic state transition rule, which depends mainly on the state of the pheromone. The third step updates the quantity of pheromone; a global pheromone updating rule is applied in two phases: first an evaporation phase, in which a fraction of the pheromone evaporates, and then a reinforcement phase, in which each ant deposits an amount of pheromone proportional to the fitness of its solution. This process is iterated until a stopping criterion is met.

3. Finding the global optimum by a heuristic approach

The MACO algorithm is used to find the global optimum. The proposed algorithm (MACO) was obtained by modifying the ACO based algorithm used to find the global minimum [13]. The modification consists of two parts. First, all variables of the problem are optimized concurrently, so the optimum solution is reached faster and computational time is reduced.
Second, the solution space of the problem is restricted by the best solution of the previous iteration. Thus, the optimum solution of the problem is searched for in a limited space.

In the proposed algorithm, first, m ants are associated with m random initial vectors (x_k^initial, y_k^initial) (k = 1, 2, ..., m) for a problem with two variables, f(x, y) (alternatively, all ants may be set to the same value) (Fig. 2a). The feasible solution space is bounded by the best solution of the previous iteration. The quantity of pheromone (τ_t) only intensifies around the best objective function value obtained from the previous iteration, and all ants turn towards it to search for a solution (Fig. 2b). The solution vector of each ant is updated by using the following formulas:

x_t^k = x_{t−1}^best ± Δx (t = 1, 2, ..., I),  (3.1)
y_t^k = y_{t−1}^best ± Δy (t = 1, 2, ..., I),  (3.2)

where (x_t^k, y_t^k) is the solution vector of the kth ant at iteration t, (x_{t−1}^best, y_{t−1}^best) is the best solution obtained at iteration t − 1 and Δ = (Δx, Δy) is a vector generated randomly from the range [−α, α] to determine the length of the jump. In formulas (3.1) and (3.2), the (±) sign is used to define the direction of movement for each variable; all variables are updated at the end of each iteration, so the variables of the problem are optimized concurrently and solution time is decreased. The (±) sign defines the direction of movement away from the initial solution and is determined by Eqs. (3.3) and (3.4):

x̄_best^initial = x_best^initial + (x_best^initial × 0.01),  (3.3)
Fig. 2. (a) Five ants associated with five random initial vectors (the best vector is at the point E(−1.4, −1), so the quantity of pheromone only intensifies between the global minimum point and E(−1.4, −1)). (b) The first iteration (the solution space of the problem is restricted to −1.4 < x < 0 and −1 < y < 0; at the end of iteration 1, the best vector is at the point A1).

ȳ_best^initial = y_best^initial + (y_best^initial × 0.01).  (3.4)

If f(x̄_best^initial, y_best^initial) <= f(x_best^initial, y_best^initial), the (+) sign is used in (3.1); otherwise, the (−) sign is used. If f(x_best^initial, ȳ_best^initial) <= f(x_best^initial, y_best^initial), the (+) sign is used in (3.2); otherwise, the (−) sign is used.

At the end of each iteration, the quantity of pheromone (τ_t) is updated in two phases. First, the quantity of pheromone is reduced to simulate the evaporation process by formula (3.5). In the second phase, the pheromone is reinforced only around the best objective function value obtained from the previous iteration (formula (3.6)):

τ_t = 0.1 × τ_{t−1},  (3.5)
τ_t = τ_{t−1} + (0.01 × f(x_{t−1}^best, y_{t−1}^best)).  (3.6)

This process is iterated until the maximum number of iterations (I) is reached. The steps of the proposed algorithm are given in Fig. 3. So as not to pass over the global minimum, where I is the maximum number of iterations, α is set to 0.1α at the end of each run of I iterations (i.e. α = 0.1 × α). Thus the length of the jump gradually decreases, as illustrated in Fig. 4.
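The scheme of Eqs. (3.1)–(3.6) can be sketched in a few lines of code. The sketch below is an illustrative reading of the method, not the author's implementation: the helper `sign`, the greedy acceptance of improving jumps, and drawing the jump magnitude from [0, α] (with the direction supplied by the 1%-probe rule of Eqs. (3.3)–(3.4)) are assumptions, and the pheromone bookkeeping of (3.5)–(3.6) is omitted since concentrating the search around the previous best plays the same role in this two-variable sketch.

```python
import random

def maco_minimize(f, bounds, m=15, I=50, runs=3, seed=1):
    """Sketch of the MACO update for a two-variable function f(x, y).

    m ants jump from the previous best solution (Eqs. (3.1)-(3.2)); the
    jump direction per variable follows the 1%-probe rule of Eqs.
    (3.3)-(3.4), and the jump length alpha is shrunk by a factor of 0.1
    after each run of I iterations (Fig. 4). Illustrative only.
    """
    rng = random.Random(seed)
    (xlo, xhi), (ylo, yhi) = bounds
    alpha = 1.0 / rng.randint(1, 10)   # alpha = 1/(random(10)), as in Table 2
    # m ants with random initial vectors; keep the best as (xb, yb)
    ants = [(rng.uniform(xlo, xhi), rng.uniform(ylo, yhi)) for _ in range(m)]
    xb, yb = min(ants, key=lambda p: f(*p))

    def sign(f_probe, f_best, delta):
        # move along the probe direction if it did not worsen f, else opposite
        d = 1.0 if delta >= 0 else -1.0
        return d if f_probe <= f_best else -d

    for _ in range(runs):
        for _ in range(I):
            fb = f(xb, yb)
            sx = sign(f(xb + xb * 0.01, yb), fb, xb * 0.01)   # Eq. (3.3)
            sy = sign(f(xb, yb + yb * 0.01), fb, yb * 0.01)   # Eq. (3.4)
            for _ in range(m):
                # Eqs. (3.1)-(3.2): both variables jump concurrently
                cx = xb + sx * rng.uniform(0, alpha)
                cy = yb + sy * rng.uniform(0, alpha)
                if f(cx, cy) < fb:
                    xb, yb, fb = cx, cy, f(cx, cy)
        alpha *= 0.1   # shrink jumps so the global minimum is not passed over
    return xb, yb
```

On a simple convex test such as f(x, y) = x² + y² over [−3, 3]², this sketch converges close to (0, 0), illustrating how the shrinking jump length refines the search around the previous best.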
Fig. 3. Modified ant colony optimization for the global optimum.

Fig. 4. Jump movements [13] (the jump length decreases over successive runs k = 1, 2, ..., n of I iterations each, from the initial minimum towards the global minimum).

4. Computational results

In this section, the implementation details of the computational experiment and the results of MACO are discussed. The performance of the proposed MACO algorithm is tested on eight test problems, which are test functions known from the literature; see the Appendix. The studied problems are in fact ones studied by numerous authors before. The solutions of MACO are compared with the results of other methods for these problems. Note that the epoch number is used for the comparison.
Table 1
Global optimization methods used for performance analysis

Method   Name (Reference)                              Used test problems
ACO      Ant colony optimization [13]                  4–8
ARSET    Adaptive random search technique [4]          4–8
HRO      Heuristic random optimization [8]             4
HS       Harmony search [7]                            3
MTS      Memory tabu search [5]                        1
SZGA     Successive zooming genetic algorithm [6]      2
MACO     This paper                                    All problems

Table 2
Compared results on the test problems

Problem 1. MACO: α = 1/(random(10)), m = 5, I = 47; x = 0, y = 0, f(x, y) = −2; epoch number (m × I) = 235; time 0.012 s. Other methods: TS, x = 0, y = 0, epoch number 500; MTS, f(x, y) = −2, epoch number 310.
Problem 2. MACO: α = 1/(random(10)), m = 15, I = 250; x = 0, y = 0, f(x, y) = 0; epoch number 3750; time 0.008 s. Other methods: SZGA, f(x, y) = 2.98E−8, epoch number 4000.
Problem 3. MACO: α = 1/(random(10)), m = 40, I = 900; x = 3, y = 4, f(x, y) = 1; epoch number 36 000; time 0.021 s. Other methods: HS, f(x, y) = 1, epoch number 45 000.
Problem 4. MACO: α = 1/(random(10)), m = 5, I = 100; x = 3, f(x) = −3; epoch number 500; time 0.009 s. Other methods: ACO, f(x) = −3, epoch number 500; ARSET, f(x) = −3, epoch number 1000; HRO, f(x) = −2.9999998, epoch number 1000.
Problem 5 (a). MACO: α = 1/(random(10)), m = 20, I = 250; x = 9.92E−12, f(x) = 5.60E−45; epoch number 5000; time 0.032 s. Other methods: ACO, f(x) = 1.40E−045, epoch number 5000; ARSET, f(x) = 2.21E−043, epoch number 5000.
Problem 6. MACO: α = 1/(random(10)), m = 15, I = 250; x = 3, y = 3, f(x, y) = 0; epoch number 3750; time 0.018 s. Other methods: ACO, f(x, y) = 0, epoch number 5000; ARSET, f(x, y) = 5.04E−023, epoch number 5000.
Problem 7. MACO: α = 1/(random(10)), m = 18, I = 200; x = 1, y = 1, f(x, y) = 0; epoch number 3600; time 0.062 s. Other methods: ACO, f(x, y) = 0, epoch number 5000; ARSET, f(x, y) = 4.02E−016, epoch number 5000.
Problem 8. MACO: α = 1/(random(10)), m = 15, I = 250; x = −10, y = 0, f(x, y) = −10; epoch number 3750; time 0.044 s. Other methods: ACO, f(x, y) = −10, epoch number 5000; ARSET, f(x, y) = −10, epoch number 5000.
(a) The global minimum of the problem is found with m = 20, I = 300 and epoch number (m × I) = 6000.

The results of MACO listed in Table 2 are compared with the results of the other methods of Table 1. In all test problems, the global minimum was isolated from the local minima. Table 2 indicates that the results of MACO are very encouraging, reliable and efficient: more so than the ACO based algorithm and ARSET. MACO is also compared with well-established methods such as the genetic algorithm and tabu search, and it performs best among them.
Table 2 shows that MACO is the fastest of the compared algorithms because it searches for the solution in a space restricted by the best solution of the previous iteration, and all variables of the problem are optimized concurrently. Thus it reaches the optimum solution faster and reduces computational time.

5. Conclusions

In this paper, a new algorithm for global optimization based on ant colony optimization has been presented. When compared with previous methods for finding the global minimum, such as ACO, ARSET, HRO, HS, MTS and SZGA, the results show that the proposed algorithm is very promising. MACO found exactly the global minimum of seven of the test problems, so the best values obtained by MACO are the best found values for the tested benchmark problems.
Appendix

The following test functions are taken from [3].

Test problem 1. f(x, y) = x² + y² − cos(18x) − cos(18y). The minimum point of the function is x = 0, y = 0 with f(x, y) = −2.

Test problem 2. f(x, y) = x² + 2y² − 0.3 cos(3πx) − 0.4 cos(4πy) + 0.7. The minimum point of the function is x = 0, y = 0 with f(x, y) = 0.

Test problem 3. f(x, y) = exp(−(1/2)(x² + y² − 25)²) + sin⁴(4x − 3y) + (1/2)(2x + y − 10)². The minimum point of the function is x = 3, y = 4 with f(x, y) = 1.

Test problem 4. f(x) = x² if x <= 1; f(x) = (x − 3)² − 3 if x > 1. The minimum point of the function is x = 3 with f(x) = −3.

Test problem 5. f(x) = [x sin(1/x)]⁴ + [x cos(1/x)]⁴. The minimum point of the function is x = 0 with f(x) = 0.

Test problem 6. f(x, y) = (x − 3)⁸/(1 + (x − 3)⁸) + (y − 3)⁴/(1 + (y − 3)⁴). The minimum point of the function is x = 3, y = 3 with f(x, y) = 0.

Test problem 7. f(x, y) = 100(x² − y)² + (1 − x)². The minimum point of the function is x = 1, y = 1 with f(x, y) = 0.

Test problem 8. f(x, y) = x/(1 + |y|). The minimum point of the function is x = −10, y = 0 with f(x, y) = −10 within the range [−10, 10].
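As a quick sanity check, the eight test functions above can be transcribed directly, and each stated minimum value is reproduced at the stated minimizer (the transcription below, including test problem 5 as reconstructed above, is illustrative and not from the paper):

```python
import math

# Transcription of the eight Appendix test functions
def f1(x, y): return x*x + y*y - math.cos(18*x) - math.cos(18*y)
def f2(x, y): return (x*x + 2*y*y - 0.3*math.cos(3*math.pi*x)
                      - 0.4*math.cos(4*math.pi*y) + 0.7)
def f3(x, y): return (math.exp(-0.5*(x*x + y*y - 25)**2)
                      + math.sin(4*x - 3*y)**4 + 0.5*(2*x + y - 10)**2)
def f4(x):    return x*x if x <= 1 else (x - 3)**2 - 3   # piecewise
def f5(x):    return (x*math.sin(1/x))**4 + (x*math.cos(1/x))**4   # x != 0
def f6(x, y): return (x-3)**8/(1 + (x-3)**8) + (y-3)**4/(1 + (y-3)**4)
def f7(x, y): return 100*(x*x - y)**2 + (1 - x)**2   # Rosenbrock function
def f8(x, y): return x / (1 + abs(y))
```

For instance, f1(0, 0) evaluates to −2, f4(3) to −3 and f8(−10, 0) to −10, matching the minima reported in Table 2.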
References

[1] M. Dorigo, G. Di Caro, Ant colony optimization: a new meta-heuristic, in: Proceedings of the 1999 Congress on Evolutionary Computation, vol. 2, 1999, pp. 1470–1477.
[2] M. Dorigo, L.M. Gambardella, Ant colony system: a cooperative learning approach to the traveling salesman problem, IEEE Trans. Evolutionary Comput. 1 (1997) 53–66.
[3] C. Hamzacebi, The heuristic learning algorithm for artificial neural networks on estimate of future, Ph.D. Thesis, Gazi University, Turkey, 2005.
[4] C. Hamzacebi, F. Kutay, A heuristic approach for finding the global minimum: adaptive random search technique, Appl. Math. Comput. 173 (2) (2006) 1323–1333.
[5] M. Ji, H. Tang, Global optimization and tabu search, Appl. Math. Comput. 159 (2004) 449–457.
[6] Y.D. Kwon, S.B. Kwon, J. Kim, Convergence enhanced genetic algorithm with successive zooming method for solving continuous optimization problems, Comput. Struct. 81 (2003) 1715–1725.
[7] K.S. Lee, Z.W. Geem, A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice, Comput. Methods Appl. Mech. Engrg. 194 (2005) 3902–3933.
[8] J. Li, R.R. Rhinehart, Heuristic random optimization, Comput. Chem. Eng. 22 (3) (1998) 427–444.
[9] C. Rajendran, H. Ziegler, Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs, European J. Oper. Res. 155 (2004) 426–438.
[10] M. Solimanpur, P. Vrat, R. Shankar, Ant colony optimization algorithm to the inter-cell layout problem in cellular manufacturing, European J. Oper. Res. 157 (2004) 592–606.
[11] T. Stützle, H.H. Hoos, MAX–MIN ant system, Future Generation Comput. Systems 16 (8) (2000) 889–914.
[12] E.G. Talbi, O. Roux, C. Fonlupt, D. Robillard, Parallel ant colonies for the quadratic assignment problem, Future Generation Comput. Systems 17 (2001) 441–449.
[13] M.D. Toksari, Ant colony optimization for finding the global minimum, Appl. Math. Comput. 176 (1) (2006) 308–316.