Adaptive Multi-objective Particle Swarm Optimization Algorithm

P. K. Tripathi, Sanghamitra Bandyopadhyay, Senior Member, IEEE, and S. K. Pal, Fellow, IEEE

Abstract — In this article we describe a novel Particle Swarm Optimization (PSO) approach to Multi-objective Optimization (MOO), called Adaptive Multi-objective Particle Swarm Optimization (AMOPSO). The novelty of AMOPSO lies in its adaptive nature, which is attained by treating the inertia weight and the acceleration coefficients as control variables alongside the usual optimization variables, and evolving them through the swarming procedure. A new diversity parameter is used to ensure sufficient diversity among the solutions of the non-dominated front. AMOPSO is compared with some recently developed multi-objective PSO techniques and evolutionary algorithms on nine function optimization problems, using different performance measures.

I. INTRODUCTION

Evolutionary algorithms have been found to be very efficient in dealing with multi-objective optimization (MOO) problems [1], owing to their population-based nature. Particle Swarm Optimization (PSO) was proposed comparatively recently, in 1995 [2]. It is inspired by the flocking behavior of birds, which is very simple to simulate. The simplicity and efficiency of PSO [3], [4] in single-objective optimization motivated researchers to apply it to MOO as well [5]–[12]. In MOO, two important goals are convergence to the Pareto-optimal front and an even spread of solutions along that front. In the present article we describe a multi-objective PSO, called adaptive multi-objective particle swarm optimization (AMOPSO). In AMOPSO the vital parameters of the PSO, i.e., the inertia weight and the acceleration coefficients, are adapted over the iterations, making the algorithm capable of effectively handling optimization problems of different characteristics.
Here these vital parameters are treated as control parameters and are themselves subjected to evolution, along with the other optimization variables. To overcome premature convergence, the mutation operator from [13] has been incorporated in AMOPSO. In order to improve the diversity of the Pareto-optimal solutions, a novel method exploiting the nearest neighbor concept is used. This method for measuring diversity has the advantage that it needs no parameter specification, unlike the one in [11]. Note that diversity has earlier been incorporated in PSO using different approaches, namely the hyper-grid approach [11], the σ-method with clustering [9] and an NSGA-II based approach [10]. Both the hyper-grid and clustering based approaches for diversity are found to take significant computational time. In the former, the size of the hyper-grid needs to be specified a priori, and the performance depends on its proper choice. The measure adopted in this article is similar to the one in [14], though the way of computing the distance to the nearest neighbor is different. A comparative study of AMOPSO with other multi-objective PSOs, viz., σ-MOPSO [9], NSPSO (non-dominated sorting PSO) [10] and MOPSO [11], and also with other multi-objective evolutionary methods, viz., NSGA-II [14] and PESA-II [15], has been conducted to establish its effectiveness on nine test problems, using quantitative measures as well as visual displays of the Pareto front.

P. K. Tripathi and Sanghamitra Bandyopadhyay are with the Machine Intelligence Unit, and S. K. Pal is with the Center for Soft Computing Research, Indian Statistical Institute, 203 B.T. Road, Kolkata 700108, INDIA. E-mail: {praveen_r, sanghami, sankar}@isical.ac.in

II. BASIC PRINCIPLES
A. Multi-objective Optimization (MOO)

A general minimization problem of M objectives can be mathematically stated as follows. Given x = [x_1, x_2, ..., x_d], where d is the dimension of the decision variable space,

  minimize: f(x) = [f_i(x), i = 1, ..., M],
  subject to: g_j(x) ≤ 0, j = 1, 2, ..., J, and
              h_k(x) = 0, k = 1, 2, ..., K,

where f_i(x) is the i-th objective function, g_j(x) is the j-th inequality constraint, and h_k(x) is the k-th equality constraint. A solution is said to dominate another solution if it is not worse than that solution in all the objectives and is strictly better than it in at least one objective. The solutions over the entire solution space that are not dominated by any other solution are called Pareto-optimal solutions.

B. Particle Swarm Optimization (PSO)

PSO is a population-based optimization algorithm inspired by the flocking behavior of birds [2]. The population of potential solutions is called a swarm, and each individual solution within the swarm is called a particle. Particles in PSO fly through the search domain guided by their individual experience and the experience of the swarm. In a d-dimensional search space, the i-th particle is associated with a position attribute X_i = (x_{i,1}, x_{i,2}, ..., x_{i,d}), a velocity attribute V_i = (v_{i,1}, v_{i,2}, ..., v_{i,d}) and an individual experience attribute P_i = (p_{i,1}, p_{i,2}, ..., p_{i,d}). The position attribute X_i signifies the position of the particle in the search space, whereas the velocity attribute V_i is responsible for imparting motion to it. The P_i attribute stores the position (coordinates) corresponding to the particle's best individual performance. Similarly, the experience of the whole swarm is captured in the index g, which corresponds to the particle with the best overall performance in the swarm. The movement of a particle towards the optimum solution is governed by updating its position and velocity attributes.

1-4244-1340-0/07/$25.00 ©2007 IEEE 2281
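The dominance relation defined above is easy to express directly in code. The following Python sketch (an illustration, not part of the original algorithm listing) checks whether one objective vector dominates another under minimization:

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization):
    a is no worse than b in every objective and strictly
    better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

# [1.0, 2.0] is better in the first objective and equal in the second,
# so it dominates [2.0, 2.0]
print(dominates([1.0, 2.0], [2.0, 2.0]))  # True
print(dominates([1.0, 2.0], [2.0, 1.0]))  # False: neither dominates the other
```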
The velocity and position update equations are given as [4]:

  v_{i,j} = w v_{i,j} + c_1 r_1 (p_{i,j} − x_{i,j}) + c_2 r_2 (p_{g,j} − x_{i,j})   (1)
  x_{i,j} = x_{i,j} + v_{i,j}   (2)

where j = 1, ..., d and w, c_1, c_2 ≥ 0. Here w is the inertia weight, c_1 and c_2 are the acceleration coefficients, and r_1 and r_2 are random numbers, generated uniformly in the range [0, 1], responsible for imparting randomness to the flight of the swarm. The c_1 and c_2 values allow the particle to tune the cognition and the social terms, respectively, in the velocity update equation (Equation 1). A larger value of c_1 allows exploration, while a larger value of c_2 encourages exploitation.

III. ADAPTIVE MULTI-OBJECTIVE PARTICLE SWARM OPTIMIZATION: AMOPSO

The AMOPSO algorithm proposed in this article is described below.

A. Initialization

In the initialization phase of AMOPSO, the individuals of the swarm are assigned random values for the coordinates, drawn from the respective domains, in each dimension. Similarly, the velocity is initialized to zero in each dimension. Step 1 takes care of the initialization of AMOPSO. The algorithm maintains an archive for storing the best non-dominated solutions found during the flight of the particles. The size l_t of the archive at iteration t is allowed to grow up to a maximum value N_a. The archive is initialized with the non-dominated solutions from the swarm.
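As a concrete illustration of Equations (1)-(2), the sketch below performs one velocity-and-position update for a single particle; the default parameter values in the signature are placeholders, not the adaptive values AMOPSO evolves:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One update of a particle per Equations (1)-(2).

    x, v, pbest: the particle's position, velocity and personal best;
    gbest: the swarm's global best. Returns (new_x, new_v)."""
    new_v, new_x = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()  # uniform in [0, 1]
        vj = w * v[j] + c1 * r1 * (pbest[j] - x[j]) + c2 * r2 * (gbest[j] - x[j])
        new_v.append(vj)
        new_x.append(x[j] + vj)
    return new_x, new_v
```

Note that when a particle already sits at both its personal and global best with zero velocity, the update leaves it in place, since both attraction terms vanish.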
Algorithm AMOPSO: O_f = AMOPSO(N_s, N_a, C, d)
/* N_s: size of the swarm, N_a: size of the archive, C: maximum number of iterations, d: the dimension of the search space, O_f: the final output */

1) t = 0; randomly initialize S_0 /* S_t: swarm at iteration t */
   initialize x_{i,j}, for all i in {1, ..., N_s} and all j in {1, ..., d} /* x_{i,j}: the j-th coordinate of the i-th particle */
   initialize v_{i,j}, for all i in {1, ..., N_s} and all j in {1, ..., d} /* v_{i,j}: velocity of the i-th particle in the j-th dimension */
   Pb_{i,j} ← x_{i,j}, for all i and all j /* Pb_{i,j}: the j-th coordinate of the personal best of the i-th particle */
   A_0 ← non_dominated(S_0); l_0 = |A_0| /* non_dominated() returns the non-dominated solutions of the swarm; A_t: archive at iteration t */
2) for t = 1 to C:
     for i = 1 to N_s: /* update the swarm S_t */
       /* updating the velocity of each particle */
       Gb ← get_gbest() /* returns the global best */
       Pb_i ← get_pbest() /* returns the personal best */
       adjust_parameters(w_i, c^i_1, c^i_2) /* adjusts the parameters; w_i: the inertia weight, c^i_1: the cognitive acceleration coefficient, c^i_2: the global acceleration coefficient */
       v_{i,j} = w_i v_{i,j} + c^i_1 r_1 (Pb_{i,j} − x_{i,j}) + c^i_2 r_2 (Gb_j − x_{i,j}), for all j in {1, ..., d}
       /* updating the coordinates */
       x_{i,j} = x_{i,j} + v_{i,j}, for all j in {1, ..., d}
     /* updating the archive */
     A_t ← non_dominated(S_t ∪ A_t)
     if (l_t > N_a) truncate_archive() /* l_t: size of the archive */
     mutate(S_t) /* mutating the swarm */
3) O_f ← A_t and stop. /* returns the Pareto-optimal front */

Fig. 1. Computation of the diversity parameter in AMOPSO (nearest-neighbor distances d_1–d_4 between archive members a–h in objective space).

B. Update

Step 2 of AMOPSO deals with the flight of the particles of the swarm through the search space. The flight, given by Equations 1 and 2, is influenced by several vital parameters, which are explained below.

1) Personal best performance (pbest): In multi-objective PSO, pbest stores the best non-dominated solution attained by the individual particle.
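The non_dominated() routine used in Steps 1 and 2 can be realized as a straightforward pairwise filter. The Python sketch below is a minimal O(n²) interpretation, not the authors' implementation:

```python
def non_dominated(points):
    """Return the subset of objective vectors not dominated by any other
    vector in the list (minimization). A simple O(n^2) pairwise filter."""
    def dom(a, b):
        # a dominates b: no worse everywhere, strictly better somewhere
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dom(q, p) for q in points if q is not p)]

# [2, 2] is dominated by [1, 2]; the other two are mutually non-dominated
print(non_dominated([[1, 2], [2, 1], [2, 2]]))  # [[1, 2], [2, 1]]
```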
In AMOPSO the present solution is compared with the pbest solution, and it replaces the pbest solution only if it dominates that solution.

2) Global best performance (gbest): The multiple objectives involved in MOO problems are often conflicting in nature, which makes the choice of a single optimal solution difficult. To resolve this problem the concept of non-dominance is used. Therefore, instead of having just one individual solution as the global best, a set of all the non-dominated solutions is maintained in the

2282 2007 IEEE Congress on Evolutionary Computation (CEC 2007)
form of an archive [11]. Selecting a single particle from the archive as the gbest is a vital issue. There have been a number of attempts to address this issue, some of which may be found in [16]. In the approaches mentioned in [16], the authors consider non-dominance in the selection of the gbest solution. As the attainment of proper diversity is the second objective of MOO, diversity has been used in AMOPSO to select the most sparsely located solution from the archive as the gbest. In AMOPSO the diversity measurement is done using a novel concept, similar to the crowding-distance measure in [14]. The parameter d_i is computed as the distance of each solution to its immediate next neighbor, summed over each of the M objectives. Figure 1 illustrates the computation of the d_i parameter. The density of all the solutions in the archive is obtained in this way. Using the density values as fitness, roulette wheel selection is performed to select a solution as the gbest.

3) Control parameters (w, c_1 and c_2): The performance of PSO depends to a large extent on its inertia weight (w) and acceleration coefficients (c_1 and c_2). In this article these are called control parameters. In AMOPSO these parameters are adjusted using the function adjust_parameters(w_i, c^i_1, c^i_2). Some values have been suggested for these parameters in the literature [4]. In most cases the values of these parameters were found to be problem specific, which motivates the use of adaptive parameters [3], [4], [17]. In this article a novel concept for adapting the control parameters is proposed: the control parameters are themselves subjected to optimization through swarming, in parallel with the normal optimization variables. The intuition behind this concept is to evolve the control parameters as well, so that appropriate values of these parameters may be obtained for a specific problem.
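A minimal sketch of this gbest selection is given below, assuming the per-objective nearest-neighbor distance described above and a standard roulette wheel; the helper names and the use of absolute per-objective differences are illustrative assumptions, not the authors' code:

```python
import random

def nearest_neighbour_distance(archive):
    """d_i: for each archive member, the distance to its nearest
    neighbour, summed over the M objectives (absolute per-objective
    differences). Requires at least two archive members."""
    d = []
    for i, a in enumerate(archive):
        d.append(min(sum(abs(am - bm) for am, bm in zip(a, b))
                     for j, b in enumerate(archive) if j != i))
    return d

def select_gbest(archive, rng=random.random):
    """Roulette-wheel selection biased towards sparse regions:
    a larger d_i (fewer close neighbours) means a higher chance."""
    d = nearest_neighbour_distance(archive)
    pick = rng() * sum(d)
    acc = 0.0
    for sol, di in zip(archive, d):
        acc += di
        if acc >= pick:
            return sol
    return archive[-1]
```

The bias towards large d_i is what pulls new particles towards under-populated stretches of the front.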
Here the control parameters are initially assigned random values in the ranges suggested in [4]. They are then updated using the following equations:

  v^c_{i,j} = w_i v^c_{i,j} + c^i_1 r_1 (p^c_{i,j} − x^c_{i,j}) + c^i_2 r_2 (p^c_{g,j} − x^c_{i,j})   (3)
  x^c_{i,j} = x^c_{i,j} + v^c_{i,j}   (4)

Here x^c_{i,j} is the value of the j-th control parameter of the i-th particle, whereas v^c_{i,j} and p^c_{i,j} are the velocity and personal best of the j-th control variable of the i-th particle, and p^c_{g,j} is the global best of the j-th control parameter. The previous-iteration values of the control parameters are used for the corresponding values of w_i, c^i_1 and c^i_2. Note that in the above equations the values j = 1, 2 and 3 correspond to the control parameters inertia weight, cognitive acceleration coefficient and global acceleration coefficient, respectively. Note also that the control parameters for the flight of a particle are taken from its own copy of the control variables.

4) Mutation: The mutation operator plays a key role in MOPSOs [11]. In this article the mutation operator of [13] is used to allow better exploration of the search space.

5) Update archive: The selection of the gbest solution for the velocity update is done from this archive only. In AMOPSO the maximum size of the archive has been fixed at N_a = 100. The archive is updated with the non-dominated solutions of the swarm, and all dominated members of the archive are removed. Since the maximum size of the archive is fixed, the density is considered, as in [11], to truncate the archive to the desired size. After running AMOPSO for a fixed number of iterations, the archive is returned as the resultant non-dominated set.

IV. EXPERIMENTAL RESULTS

The effectiveness of AMOPSO has been demonstrated on various standard test problems that have known sets of Pareto-optimal solutions and are designed to test algorithms on different aspects of performance.
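The co-evolution of Equations (3)-(4) with Equations (1)-(2) can be sketched as follows. The layout chosen here — each particle's vector prefixed by its three control parameters — is an illustrative assumption, as is the helper name:

```python
import random

def update_particle_with_controls(x, v, pbest, gbest):
    """Sketch of the self-adaptive update: the particle vector is assumed
    to be [w, c1, c2, x_1, ..., x_d], so the first three entries are the
    control parameters, evolved by the same swarming rule (Equations 3-4)
    using their previous-iteration values."""
    w, c1, c2 = x[0], x[1], x[2]  # previous-iteration control values
    new_v, new_x = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        vj = w * v[j] + c1 * r1 * (pbest[j] - x[j]) + c2 * r2 * (gbest[j] - x[j])
        new_v.append(vj)
        new_x.append(x[j] + vj)
    return new_x, new_v
```

A full implementation would additionally keep separate personal and global bests for the control entries; that bookkeeping is omitted here.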
AMOPSO has been compared with some MOEAs and MOPSOs. The MOEAs include NSGA-II and PESA-II, whereas the MOPSOs are MOPSO, σ-MOPSO and NSPSO. The codes for NSGA-II and MOPSO have been obtained from http://www.iitk.ac.in/kangal/codes.shtml and http://www.lania.mx/~ccoello/EMOO/EMOOsoftware.html respectively. The program for PESA-II has been obtained from its authors, whereas the other algorithms have been implemented by us. The parameters used are: population/swarm size 100 for NSGA-II and NSPSO, 10 for PESA-II (as suggested in [15]) and 50 for MOPSO, σ-MOPSO and AMOPSO; archive size 100 for PESA-II, σ-MOPSO, MOPSO and AMOPSO; number of iterations 250 for NSGA-II and NSPSO, 2500 for PESA-II, and 500 for MOPSO, σ-MOPSO and AMOPSO (to keep the number of function evaluations at 25,000 for all the algorithms); cross-over probability 0.9 for NSGA-II and PESA-II (as suggested in [14]); mutation probability inversely proportional to the chromosome length (as suggested in [14]); coding strategy binary for PESA-II (only a binary version is available), while real encoding is used for NSGA-II, NSPSO, σ-MOPSO, MOPSO and AMOPSO (PSO naturally operates on real numbers). The values of c_1 and c_2 have been set to 1 and 2 for σ-MOPSO and NSPSO, respectively (as suggested in [9] and [10]). The value of w has been set to 0.4 for σ-MOPSO, whereas it is allowed to decrease from 1.0 to 0.4 for NSPSO (as suggested in [9] and [10]). For AMOPSO the initial range of the values for c^i_1 and c^i_2 is [0.5, 2.5], and that for w_i is [0.0, 1.0] (as suggested in [17]). The parameter values for each algorithm have been chosen keeping in mind the suggestions in the respective literature. Usually a smaller swarm size is preferred. The relative performances of NSGA-II, PESA-II, σ-MOPSO, NSPSO, MOPSO and AMOPSO are evaluated on several test problems of two and three objectives, using some performance measures.

A. Test Problems and Performance Measures

In this article nine standard test problems have been used.
Seven of these test problems, SCH, SCH2, FON [1], ZDT1, ZDT2, ZDT3 and ZDT4 [18], are of two objectives, while the other two, i.e., DTLZ2 and DTLZ7 [19], are
of three objectives. The performance of the algorithms is evaluated with respect to the convergence measure Υ (the distance metric) and the diversity measure Δ (the diversity metric) [14]. The Υ measure evaluates the extent of convergence to the Pareto front, whereas the Δ measure evaluates the diversity of the solutions in the non-dominated set. It should be noted that the smaller the value of these measures, the better the performance.

B. Results

The results are reported in terms of the mean and variance of the performance measures over 20 simulations in Tables I and II. It can be seen that AMOPSO has resulted in the best convergence, in terms of the Υ measure, on all the test problems except SCH, where it is found to be second to the NSGA-II algorithm. PESA-II has also given the best convergence for the ZDT2 test problem. Similarly, the values of the Δ measure in Table II show that AMOPSO attains the best distribution of solutions on the non-dominated front for all the test problems except FON and DTLZ2, where it is second to NSGA-II and σ-MOPSO, respectively. To demonstrate the distribution of the solutions on the final non-dominated front, the FON, ZDT3 and DTLZ2 test problems have been considered as typical illustrations. Figures 2-4 show the resultant non-dominated fronts for these test problems. Figure 2 provides the non-dominated solutions returned by the six algorithms for the FON test problem. The poor performance of PESA-II and σ-MOPSO is clearly evident; Figure 2(f) shows that σ-MOPSO has resulted in poor diversity among the solutions of the non-dominated set. Although the results in Figures 2(a), 2(d) and 2(e) are better than the aforementioned ones, the best result, in terms of convergence as well as diversity, has been obtained by AMOPSO in Figure 2(c).
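For reference, the convergence measure Υ can be sketched as the mean distance from each obtained solution to the closest member of a sampled true Pareto front. This minimal Python version assumes the true front is available as a finite set of objective vectors:

```python
import math

def upsilon(front, true_front):
    """Convergence measure: mean Euclidean distance from each obtained
    objective vector to its nearest point on a sampled true front."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(p, q) for q in true_front) for p in front) / len(front)

# A front lying exactly on the sampled true front scores 0
print(upsilon([[0.0, 1.0], [1.0, 0.0]],
              [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]))  # 0.0
```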
Similarly, Figure 3 presents the final fronts obtained by the six algorithms on the ZDT3 function. It can be seen from Figure 3(f) that σ-MOPSO has failed to converge properly to the true Pareto front. It should be noted from Figure 3(b) that PESA-II has failed to obtain all five disconnected Pareto-optimal fronts. Although NSGA-II has been successful in obtaining the five fronts, one of them did not come out properly (Figure 3(a)). MOPSO is found to be better than PESA-II and NSGA-II in this regard, but its solutions have poor spread on the front, as is clearly evident from Figure 3(d). Moreover, MOPSO is often found to converge to a local optimal front on this test function. Such an instance is shown in Figure 3(g), where MOPSO has been able to obtain only one front (not all five) because of the local optima problem. NSPSO has achieved very good convergence, as evident from Figure 3(e), but its diversity is not as good as that of AMOPSO. Compared to all these algorithms, AMOPSO (Figure 3(c)) has given better convergence and spread of the solutions on this test function. Figure 4 presents the final non-dominated fronts obtained by the algorithms on the DTLZ2 test problem. From Figure 4(a) it can be seen that NSGA-II has failed considerably in attaining the non-dominated set properly, in terms of both convergence and diversity. MOPSO (Figure 4(d)) has failed to attain the full non-dominated set. Similarly, σ-MOPSO (Figure 4(f)) could not attain the non-dominated set properly. Although NSPSO has produced a better shape of the Pareto front (Figure 4(e)), its convergence is not as good as that of AMOPSO and PESA-II, shown in Figure 4(c) and Figure 4(b) respectively.

V. CONCLUSIONS AND DISCUSSION

In the present article a novel multi-objective PSO algorithm, called AMOPSO, has been presented. AMOPSO is adaptive in nature with respect to its inertia weight and acceleration coefficients.
This adaptiveness enables it to attain a good balance between exploration and exploitation of the search space. A mutation operator has been incorporated in AMOPSO to resolve the problem of premature convergence to a local Pareto-optimal front (often observed in multi-objective PSOs). An archive has been maintained to store the non-dominated solutions found during the execution of AMOPSO. The selection of the gbest solution is done from this archive, using the diversity consideration. The method for computing the diversity of the solutions is based on the nearest neighbor concept. The performance of AMOPSO has been compared with some recently developed multi-objective PSO techniques and evolutionary algorithms on nine function optimization problems of two and three objectives, using several performance measures. AMOPSO is found to be good not only at approximating the Pareto-optimal front, but also in terms of the diversity of the solutions on the front. In this article only one version of the adaptation of the control parameters has been addressed, in which each particle has its own control parameters. This form of adaptation can also be achieved at other levels, e.g., at the pbest or gbest level. Further, it would be interesting to study the values that these control parameters finally attain when the non-dominated front is obtained.

REFERENCES

[1] K. Deb, Multi-Objective Optimization using Evolutionary Algorithms. John Wiley and Sons, USA, 2001.
[2] J. Kennedy and R. Eberhart, "Particle Swarm Optimization," in IEEE International Conference on Neural Networks, 1995, pp. 1942-1948.
[3] A. P. Engelbrecht, Fundamentals of Computational Swarm Intelligence. John Wiley and Sons, USA, January 2006.
[4] F. van den Bergh, "An analysis of particle swarm optimizers," Ph.D. dissertation, Faculty of Natural and Agricultural Science, University of Pretoria, Pretoria, November 2001.
[5] X. Hu and R.
Eberhart, "Multiobjective Optimization Using Dynamic Neighbourhood Particle Swarm Optimization," in Proceedings of the 2002 Congress on Evolutionary Computation, part of the 2002 IEEE World Congress on Computational Intelligence. Hawaii: IEEE Press, May 2002, pp. 1677-1681.
[6] K. Parsopoulos and M. Vrahatis, "Particle swarm optimization method in multiobjective problems," in Proceedings of the 2002 ACM Symposium on Applied Computing (SAC 2002), 2002, pp. 603-607.
[7] C. Coello and M. Lechuga, "MOPSO: A Proposal for Multiple Objective Particle Swarm Optimization," in Proceedings of the 2002 Congress on Evolutionary Computation, part of the 2002 IEEE World Congress on Computational Intelligence. Hawaii: IEEE Press, May 2002, pp. 1051-1056.
TABLE I
MEAN (M) AND VARIANCE (VAR) OF THE Υ MEASURE FOR THE TEST PROBLEMS

Algorithm      | SCH   | SCH2  | FON    | ZDT1  | ZDT2  | ZDT3  | ZDT4    | DTLZ2  | DTLZ7
NSGA-II (M)    | .339  | .644  | .93    | .3348 | .7239 | .45   | .535    | .72775 | .4976
NSGA-II (Var)  | .     | .     | .      | .475  | .368  | .794  | .846    | .24    | .
PESA-II (M)    | .687  | .962  | .22    | .5    | .74   | .789  | 9.98254 | .329   | .539
PESA-II (Var)  | .     | .     | .      | .     | .     | .     | 2.34    | .      | .
σ-MOPSO (M)    | .364  | .56   | .25    | .638  | .584  | .25   | 3.83344 | .367   | .24494
σ-MOPSO (Var)  | .     | .     | .      | .48   | .     | .238  | .8729   | .      | .26
NSPSO (M)      | .2    | .854  | .255   | .642  | .95   | .49   | 4.95775 | .4938  | .568
NSPSO (Var)    | .     | .     | .      | .     | .     | .     | 7.436   | .      | .9
MOPSO (M)      | .48   | .45   | .22    | .33   | .89   | .48   | 7.37429 | .82799 | .4986
MOPSO (Var)    | .     | .     | .      | .     | .     | .     | 5.48286 | .33    | .
AMOPSO (M)     | .8    | .554  | .2     | .99   | .74   | .39   | .43     | .224   | .236
AMOPSO (Var)   | .     | .     | .      | .     | .     | .     | .259    | .      | .

TABLE II
MEAN (M) AND VARIANCE (VAR) OF THE Δ MEASURE FOR THE TEST PROBLEMS

Algorithm      | SCH    | SCH2   | FON    | ZDT1   | ZDT2   | ZDT3  | ZDT4   | DTLZ2  | DTLZ7
NSGA-II (M)    | .47789 | .22823 | .3786  | .393   | .4377  | .73854| .726   | .97599 | .8949
NSGA-II (Var)  | .347   | .626   | .64    | .87    | .472   | .97   | .6465  | .738   | .52
PESA-II (M)    | .6525  | .2448  | .8527  | .8486  | .89292 | .2273 | .36    | .7488  | .7479
PESA-II (Var)  | .298   | .692   | .32    | .287   | .574   | .2925 | .72    | .93    | .6
σ-MOPSO (M)    | .6937  | .4754  | .8583  | .39856 | .38927 | .766  | .82842 | .645   | .943
σ-MOPSO (Var)  | .2237  | .325   | .72    | .73    | .458   | .349  | .54    | .94    | .64
NSPSO (M)      | .3965  | .767   | .7824  | .9695  | .9256  | .6272 | .96462 | .72988 | .7382
NSPSO (Var)    | .494   | .4239  | .3     | .      | .2     | .69   | .56    | .9     | .2
MOPSO (M)      | .7697  | .43353 | .84943 | .6832  | .63922 | .8395 | .9694  | .7488  | .87375
MOPSO (Var)    | .6427  | .354   | .6     | .335   | .4     | .892  | .4     | .93    | .886
AMOPSO (M)     | .3274  | .965   | .72422 | .3826  | .3996  | .5354 | .656   | .67273 | .732
AMOPSO (Var)   | .23    | .3     | .3     | .6     | .68    | .36   | .376   | .87    | .34

[8] J. Fieldsend and S. Singh, "A Multi-Objective Algorithm based upon Particle Swarm Optimisation, an Efficient Data Structure and Turbulence," in Proceedings of the UK Workshop on Computational Intelligence (UKCI 2002), Birmingham, UK, September 2002, pp. 37-44.
[9] S. Mostaghim and J. Teich, "Strategies for Finding Good Local Guides in Multi-objective Particle Swarm Optimization (MOPSO)," in Swarm Intelligence Symposium 2003 (SIS'03). Indianapolis, Indiana, USA: IEEE Service Center, April 2003, pp. 26-33.
[10] X. Li, "A Non-dominated Sorting Particle Swarm Optimizer for Multi-objective Optimization," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2003), ser. Lecture Notes in Computer Science, vol. 2723. Springer, 2003, pp.
37-48.
[11] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling Multiple Objectives With Particle Swarm Optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256-279, June 2004.
[12] M. Reyes-Sierra and C. A. C. Coello, "Multi-Objective Particle Swarm Optimizers: A Survey of the State-of-the-Art," International Journal of Computational Intelligence Research, vol. 2, no. 3, pp. 287-308, 2006.
[13] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, 1992.
[14] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A Fast and Elitist Multi-objective Genetic Algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, April 2002.
[15] D. W. Corne, N. R. Jerram, J. D. Knowles, and M. J. Oates, "PESA-II: Region-based Selection in Evolutionary Multiobjective Optimization," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001). Morgan Kaufmann, 2001, pp. 283-290.
[16] J. E. Alvarez-Benitez, R. M. Everson, and J. E. Fieldsend, "A MOPSO Algorithm Based Exclusively on Pareto Dominance Concepts," in EMO 2005, pp. 459-473.
[17] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-Organizing Hierarchical Particle Swarm Optimizer with Time-Varying Acceleration Coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240-255, June 2004.
[18] E. Zitzler, K. Deb, and L. Thiele, "Comparison of Multiobjective Evolutionary Algorithms: Empirical Results," Evolutionary Computation, vol. 8, no. 2, pp. 173-195, 2000.
[19] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable Test Problems for Evolutionary Multi-Objective Optimization," Institut für Technische Informatik und Kommunikationsnetze, ETH Zürich, Gloriastrasse 35, ETH-Zentrum, CH-8092 Zürich, Switzerland, Tech. Rep. 112, July 2001.
Fig. 2. Final fronts of (a) NSGA-II, (b) PESA-II, (c) AMOPSO, (d) MOPSO, (e) NSPSO and (f) σ-MOPSO on FON
Fig. 3. Final fronts of (a) NSGA-II, (b) PESA-II, (c) AMOPSO, (d) MOPSO, (e) NSPSO, (f) σ-MOPSO and (g) MOPSO (local convergence) on ZDT3
Fig. 4. Final fronts of (a) NSGA-II, (b) PESA-II, (c) AMOPSO, (d) MOPSO, (e) NSPSO and (f) σ-MOPSO on DTLZ2