Scalarization versus Indicator-based Selection in Multi-Objective CMA Evolution Strategies

Thomas Voß, Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany
Günter Rudolph, Lehrstuhl für Algorithm Engineering, Technische Universität Dortmund, Dortmund, Germany
Nicola Beume, Lehrstuhl für Algorithm Engineering, Technische Universität Dortmund, Dortmund, Germany
Christian Igel, Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany

Abstract: While scalarization approaches to multi-criteria optimization become infeasible in the case of many objectives, for few objectives the benefits of population-based methods compared to a set of independent single-objective optimization trials on scalarized functions are not obvious. The multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) is a powerful algorithm for real-valued multi-criteria optimization. This population-based approach combines mutation and strategy adaptation from the elitist CMA-ES with multi-objective selection. We empirically compare the steady-state MO-CMA-ES with different scalarization algorithms, in which the elitist CMA-ES is used as single-objective optimizer. Although only bicriteria benchmark problems are considered, the MO-CMA-ES performs best in the overall comparison. However, if the scalarized problems have a structure that can easily be exploited by the CMA-ES and that is less apparent in the vector-valued fitness function, the CMA-ES with scalarization outperforms the population-based approach.

I. INTRODUCTION

The goal of multi-objective optimization (MOO) can be defined as efficiently finding a set of solutions that diversely and accurately approximates the set of Pareto-optimal solutions. Evolutionary algorithms (EAs) are particularly well-suited for MOO. This may have several reasons, but one of the most intriguing arguments is that the search in MOO for a set of solutions perfectly matches the concept of populations in EAs. In this study, we empirically investigate the role of the population dynamics in a recently proposed evolutionary MOO algorithm.

Many established deterministic approaches to MOO consider the scalarization of the vector-valued problem. To find a diverse approximation of the set of Pareto-optimal solutions, the vector-valued problem is reduced to a number of different scalar-valued optimization problems, which favor different trade-offs between objectives. Prominent examples of scalarization methods are the basic weighted-sum approach and the Tchebycheff method [1]. In contrast, most state-of-the-art evolutionary methods for solving MOO problems rely more directly on the notion of Pareto-dominance for finding good approximations of the Pareto-optimal set. This does not explicitly require information regarding the importance of single criteria. However, selection in general requires a ranking of the individuals in the current population, including the comparison of solutions that are not related in terms of Pareto-dominance. This ranking may be viewed as a kind of scalarization, which is, however, dynamic and usually, for each solution, dependent on the other solutions represented in the current population. In the population-based algorithm considered here, this ranking is induced by the hypervolume in the objective space covered by the solutions in the population: selection aims at preserving those solutions that maximize the covered hypervolume.
Scalarization approaches as outlined above are feasible only if there are very few objectives: the number of combinations of weighting coefficients expressing relative preferences for the objectives grows exponentially with the number of objectives, and the number of single-objective optimization trials in simple scalarization methods scales accordingly. However, for two or three objectives the benefits of directly evolving a set of solutions compared to independent (parallel) single-objective optimization trials on scalarized functions are not obvious. Note that there exist scalarization methods (e.g., based on the Tchebycheff metric) that do not suffer from the often-mentioned problems of weighted-sum approaches with non-convex MOO problems.

In this study, we consider the covariance matrix adaptation evolution strategy (CMA-ES [2], [3]), which is an elaborate algorithm for real-valued optimization performing well in real-world applications as well as in benchmark scenarios [4] (for applications, see the collection of references to CMA-ES applications: tu-berlin.de/user/niko/cmaapplications.pdf), and its multi-objective variant, the MO-CMA-ES [5], [6]. The latter features a population-based approach combining the methods for mutation and strategy adaptation from the elitist CMA-ES with multi-objective selection. This makes the algorithm ideal for studying the influence of the population dynamics in the case of few objectives, because we can directly compare the MO-CMA-ES with single-objective optimizations using the elitist CMA-ES. To this end, we formulate hybrid algorithms combining established scalarization methods with the CMA-ES.

The remainder of this paper is organized as follows. First, two basic methods for scalarization are presented (Sec. II). In Sec. III, the single- and multi-objective elitist CMA-ESs are described and hybrid algorithms using scalarization and the CMA-ES are introduced. The experimental setup is given in Sec. IV and results of the performance assessment are presented. The paper closes with conclusions and ideas for future work.

II. SCALARIZATION OF MULTI-OBJECTIVE OPTIMIZATION PROBLEMS

This section introduces the weighted-sum method and the method of weighted metrics for the scalarization of vector-valued optimization problems. Moreover, the ability of these methods to find Pareto-optimal solutions is briefly discussed. Real-valued, $m$-objective optimization problems $f : \mathbb{R}^n \to \mathbb{R}^m$, $x \mapsto (f_1(x), \dots, f_m(x))$, to be minimized in each objective, are considered.

A. The Weighted-Sum Method

The weighted-sum method (e.g., [7], [8]) assigns a weighting coefficient to each objective function and minimizes the weighted sum of the objectives. Thus, the vector-valued optimization problem is transformed into a scalar optimization problem of the following form:

$$\operatorname*{minimize}_{x \in \mathbb{R}^n} \; f_{(w)}(x) = \sum_{i=1}^{m} w_i f_i(x),$$

where $w$ is a weighting vector with $w_i \ge 0$ for all $i \in \{1, \dots, m\}$ and $\sum_{i=1}^{m} w_i = 1$.

B. The Method of Weighted Metrics

In the method of weighted metrics, the distance between some reference point and the feasible objective region is minimized. Most often, the utopian point $u$ is used as a reference and $L_p$-metrics are used to measure distance. Moreover, each single objective function is associated with a weighting coefficient. Thus, the multi-objective optimization problem is replaced with the following single-objective optimization problem, called the weighted $L_p$-problem:

$$\operatorname*{minimize}_{x \in \mathbb{R}^n} \; L_p^{(w,u)}(x) = \left( \sum_{i=1}^{m} w_i \left| f_i(x) - u_i \right|^p \right)^{1/p},$$

with $p \ge 1$, $w_i \ge 0$ for all $i \in \{1, \dots, m\}$, and $\sum_{i=1}^{m} w_i = 1$. In the case of $p = \infty$, the metric is also called the Tchebycheff metric, and the weighted $L_\infty$- or Tchebycheff problem is of the form:

$$\operatorname*{minimize}_{x \in \mathbb{R}^n} \; f_{(w,u)}(x) = \max_{i = 1, \dots, m} \; w_i \left| f_i(x) - u_i \right|.$$

The utopian point $u = z^* - \epsilon \mathbf{1}$ can be set via the ideal point $z^*$ with $z^*_i = \min \{ f_i(x) \mid x \in \mathbb{R}^n \}$ and an offset $\epsilon > 0$, where $\mathbf{1}$ is the $m$-dimensional vector of all ones.

Fig. 1. Illustration of the Tchebycheff method for the scalarization of a multi-objective optimization problem in the objective space. All valid points are reachable by a rectangle centered around a utopian point $u$; thus, even concave regions of the Pareto front (bold line) can be obtained.
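To make the two scalarizations concrete, here is a minimal Python sketch (our illustration, not code from the paper; the function names and the use of NumPy are our own choices; f denotes any vector-valued objective returning an array of m values):

import numpy as np

def weighted_sum(f, w):
    # Weighted-sum scalarization: x -> sum_i w_i * f_i(x)
    return lambda x: float(np.dot(w, f(x)))

def tchebycheff(f, w, u):
    # Weighted Tchebycheff scalarization with utopian point u:
    # x -> max_i w_i * |f_i(x) - u_i|
    return lambda x: float(np.max(np.asarray(w) * np.abs(np.asarray(f(x)) - np.asarray(u))))

Both helpers return ordinary scalar objective functions that can be handed to any single-objective optimizer.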
C. Properties of Scalarization Methods

Now we review some properties of the methods introduced previously regarding their ability to find every Pareto-optimal solution of a vector-valued optimization problem.

Recall that a function $g : \mathbb{R}^n \to \mathbb{R}$ is convex if for all $x_1, x_2 \in \mathbb{R}^n$ and $0 \le \lambda \le 1$ the following inequality holds:

$$g(\lambda x_1 + (1 - \lambda) x_2) \le \lambda g(x_1) + (1 - \lambda) g(x_2).$$

A set $S \subseteq \mathbb{R}^n$ is convex if $x_1, x_2 \in S$ implies that $\lambda x_1 + (1 - \lambda) x_2 \in S$ for $0 \le \lambda \le 1$. A multi-objective optimization problem is convex if all the objective functions and the feasible region are convex. Let $f : \mathbb{R}^n \to \mathbb{R}^m$ be a multi-objective optimization problem with the feasible region $S \subseteq \mathbb{R}^n$.

As stated in [1], if the problem is convex and $x^* \in S$ is Pareto-optimal, then there exists a weighting vector $w \in \mathbb{R}^m$ such that $x^*$ is a solution of the weighted-sum problem $f_{(w)}$. For the Tchebycheff method a stronger result holds: it is able to find every Pareto-optimal solution regardless of whether the vector-valued optimization problem is convex. That is, let $x^* \in S$ be Pareto-optimal. Then there exists a weighting vector $w \in \mathbb{R}^m$ such that $x^*$ is a solution of the Tchebycheff problem $f_{(w,u)}$.

III. COVARIANCE MATRIX ADAPTATION EVOLUTION STRATEGIES IN MULTI-OBJECTIVE OPTIMIZATION

In this section, we describe the covariance matrix adaptation evolution strategy (CMA-ES). First, we review the elitist single-objective CMA-ES, and then we outline the steady-state multi-objective CMA-ES. The key idea of CMA-ES algorithms is to alter the mutation distribution such that the probability of reproducing steps in the search space that led to the actual population (i.e., produced offspring that were selected) is increased. This enables the algorithm to detect correlations between object variables and to become invariant under affine transformations of the search space. The algorithms implement several important concepts for adapting search distributions efficiently. The first one is known as derandomization, meaning that the mutation distribution is adapted in a deterministic way. The second principle is cumulation, which refers to taking the search path of the population over the past generations into account, where the influence of previous steps decays exponentially.

A. Nomenclature

In the elitist CMA-ES, an individual $a^{(g)}$ in generation $g$ is a 5-tuple $a^{(g)} = [x^{(g)}, \bar{p}^{(g)}_{\mathrm{succ}}, \sigma^{(g)}, p^{(g)}_c, C^{(g)}]$ comprising its candidate solution vector $x^{(g)} \in \mathbb{R}^n$, an averaged success rate $\bar{p}^{(g)}_{\mathrm{succ}} \in [0, 1]$, the global step size $\sigma^{(g)} \in \mathbb{R}_+$, an evolution path $p^{(g)}_c \in \mathbb{R}^n$, and the covariance matrix $C^{(g)} \in \mathbb{R}^{n \times n}$. For the multi-objective CMA-ES, an individual $a^{(g)}_i$ denotes the $i$-th individual in generation $g$. Additionally, the following nomenclature is used: $f : \mathbb{R}^n \to \mathbb{R}$, $x \mapsto f(x)$, is the scalar fitness function to be minimized; for the MO-CMA-ES, $f : \mathbb{R}^n \to \mathbb{R}^m$, $x \mapsto f(x)$, is the vector-valued fitness function. $\mathcal{N}(m, C)$ is a multi-variate normal distribution with mean vector $m$ and covariance matrix $C$, and $x \sim \mathcal{N}(m, C)$ denotes that the random variable $x$ is distributed according to $\mathcal{N}(m, C)$. The best of the points sampled in a generation is the one with the lowest fitness value; ties are broken at random.

B. Elitist Single-Objective CMA Evolution Strategy

The (1+1)-CMA-ES combines success-based step size adaptation and elitist selection [9], [10], [11] with the covariance matrix adaptation presented in [2]. The algorithm is described within three routines. In the main routine, (1+1)-CMA-ES, the new candidate solution is sampled and the parent solution is updated depending on whether the new solution is better than the parent:

Algorithm 1: (1+1)-CMA-ES
  g <- 0, initialize a^(0)
  repeat
    a^(g+1) <- a^(g)
    x'^(g+1) ~ x^(g) + sigma^(g) N(0, C^(g))
    updateStepSize(a^(g+1), 1[f(x'^(g+1)) <= f(x^(g))])
    if f(x'^(g+1)) <= f(x^(g)) then
      x^(g+1) <- x'^(g+1)
      updateCovariance(a^(g+1), (x'^(g+1) - x^(g)) / sigma^(g))
  until stopping criterion is met

Here the indicator function $\mathbb{1}[\cdot]$ is one if its argument is true and zero otherwise. Thus, $\mathbb{1}[f(x'^{(g+1)}) \le f(x^{(g)})]$ is one if the last mutation has been successful and zero otherwise. After sampling the new candidate solution, the step size is updated based on the success with a learning rate $c_p$ ($0 < c_p \le 1$) using a target success rate $p^{\mathrm{target}}_{\mathrm{succ}}$:

Procedure updateStepSize(a = [x, p_succ, sigma, p_c, C], lambda_succ)
  p_succ <- (1 - c_p) p_succ + c_p lambda_succ
  sigma <- sigma * exp( (1/d) * (p_succ - p_target_succ) / (1 - p_target_succ) )

This update rule is rooted in the 1/5-success-rule proposed by [9] and is an extension of the rule proposed by [12]. If the best new candidate solution was better than the parent individual (see main routine), the covariance matrix is updated as in the original CMA-ES (see [2]).
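As a concrete illustration, the step size update above fits in a few lines of Python (our sketch, not code from the paper; an individual is stored as a plain dict and the parameter names follow the defaults given below):

import math

def update_step_size(ind, success, p_target, c_p, d):
    # Smooth the binary success indicator into an averaged success rate,
    # then enlarge sigma if successes are more frequent than the target
    # rate and shrink it otherwise.
    ind["p_succ"] = (1.0 - c_p) * ind["p_succ"] + c_p * success
    ind["sigma"] *= math.exp((ind["p_succ"] - p_target) / (d * (1.0 - p_target)))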
Procedure updateCovariance(a = [x, p_succ, sigma, p_c, C], x_step)
  if p_succ < p_thresh then
    p_c <- (1 - c_c) p_c + sqrt(c_c (2 - c_c)) x_step
    C <- (1 - c_cov) C + c_cov * p_c p_c^T
  else
    p_c <- (1 - c_c) p_c
    C <- (1 - c_cov) C + c_cov * (p_c p_c^T + c_c (2 - c_c) C)

The update of the evolution path $p_c$ depends on the value of $\bar{p}_{\mathrm{succ}}$. If the smoothed success rate $\bar{p}_{\mathrm{succ}}$ is high, that is, above $p_{\mathrm{thresh}} = 0.44$, the update of the evolution path $p_c$ is stalled. This prevents a too fast increase of the axes of $C$ when the step size is far too small, for example, in a linear surrounding. The constants $c_c$ and $c_{\mathrm{cov}}$ ($0 \le c_{\mathrm{cov}} < 1$) are learning rates for the evolution path and the covariance matrix, respectively. The factor $\sqrt{c_c (2 - c_c)}$ normalizes the variance of $p_c$ viewed as a random variable (see [2]). The evolution path $p_c$ is used to update the covariance matrix: the new covariance matrix is a weighted mean of the old covariance matrix and the outer product of $p_c$ with itself. In the second case (the else branch), the second summand in the update of $p_c$ is missing and the length of $p_c$ shrinks. Although of minor relevance, the term $c_c (2 - c_c) C$ compensates for this shrinking in $C$.
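The covariance update can likewise be sketched in Python (again our illustration, using the same dict convention as above):

import numpy as np

def update_covariance(ind, x_step, p_thresh, c_c, c_cov):
    # x_step is the realized mutation step divided by the old step size.
    if ind["p_succ"] < p_thresh:
        # Normal case: cumulate the step in the evolution path and add
        # its outer product (a rank-one update) to the covariance matrix.
        ind["p_c"] = (1.0 - c_c) * ind["p_c"] + np.sqrt(c_c * (2.0 - c_c)) * x_step
        ind["C"] = (1.0 - c_cov) * ind["C"] + c_cov * np.outer(ind["p_c"], ind["p_c"])
    else:
        # Stalled case: shrink the path and compensate for the lost variance.
        ind["p_c"] = (1.0 - c_c) * ind["p_c"]
        ind["C"] = (1.0 - c_cov) * ind["C"] + c_cov * (
            np.outer(ind["p_c"], ind["p_c"]) + c_c * (2.0 - c_c) * ind["C"])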

The (external) strategy parameters are the population size $\lambda$, the target success probability $p^{\mathrm{target}}_{\mathrm{succ}}$, the step size damping $d$, the success rate averaging parameter $c_p$, the cumulation time horizon parameter $c_c$, and the covariance matrix learning rate $c_{\mathrm{cov}}$. Default values, as given in [5] and used in this paper, are:

$$d = 1 + \frac{n}{2}, \quad p^{\mathrm{target}}_{\mathrm{succ}} = \frac{2}{11}, \quad c_p = \frac{p^{\mathrm{target}}_{\mathrm{succ}}}{2 + p^{\mathrm{target}}_{\mathrm{succ}}}, \quad c_c = \frac{2}{n + 2}, \quad c_{\mathrm{cov}} = \frac{2}{n^2 + 6}, \quad p_{\mathrm{thresh}} = 0.44.$$

The elements of the initial individual $a^{(0)}$ are set to $\bar{p}_{\mathrm{succ}} = p^{\mathrm{target}}_{\mathrm{succ}}$, $p_c = 0$, and $C = I$, where $I$ is the identity matrix. The initial candidate solution $x^{(0)} \in \mathbb{R}^n$ and the initial $\sigma^{(0)} \in \mathbb{R}_+$ must be chosen problem dependent. The optimum should presumably be within the cube $[x^{(0)} - 2\sigma^{(0)}, x^{(0)} + 2\sigma^{(0)}]^n$.

C. Multi-Objective CMA Evolution Strategy

In the following, we briefly outline the multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) according to [6]; for a detailed description and a performance evaluation on benchmark functions (in particular a comparison with real-valued NSGA-II [13]) we refer to [5]. The MO-CMA-ES relies on the non-dominated sorting selection scheme suggested by [13]. As in the SMS-EMOA [14], the hypervolume indicator serves as second-level sorting criterion to rank individuals on the same level of non-dominance.

Let $Q$ be a population, and let $a$, $a'$ be two individuals in $Q$. Let the non-dominated solutions in $Q$ be denoted by

$$\mathrm{ndom}(Q) = \{ a \in Q \mid \nexists\, a' \in Q : a' \prec a \},$$

where $\prec$ denotes the Pareto-dominance relation. The elements in $\mathrm{ndom}(Q)$ are assigned a level of non-dominance of 1. The other levels of non-dominance are defined recursively by considering the set without the solutions with lower levels [13]. Formally, let $\mathrm{dom}_0(Q) = Q$, $\mathrm{dom}_l(Q) = \mathrm{dom}_{l-1}(Q) \setminus \mathrm{ndom}_l(Q)$, and $\mathrm{ndom}_l(Q) = \mathrm{ndom}(\mathrm{dom}_{l-1}(Q))$ for $l \ge 1$. For $a \in Q$ we define the level of non-dominance $r(a, Q)$ to be $l$ iff $a \in \mathrm{ndom}_l(Q)$. Let $S(a, Q)$ be the contributing hypervolume of $a$ with respect to the set $\{ a' \in Q \mid r(a', Q) = r(a, Q) \}$ [14]. Moreover, let $\mathrm{cont}(a, Q)$ be the contribution rank of $a$ with respect to its contributing hypervolume. Then the following relation is defined:

$$a \prec_Q a' \;\Leftrightarrow\; r(a, Q) < r(a', Q) \;\vee\; \big( r(a, Q) = r(a', Q) \,\wedge\, \mathrm{cont}(a, Q) < \mathrm{cont}(a', Q) \big).$$

The standard version of the steady-state MO-CMA-ES is given in Algorithm 2.

Algorithm 2: Steady-State MO-CMA-ES
  g <- 0, initialize Q^(0)
  repeat
    a^(g) <- individual drawn uniformly at random from ndom(Q^(g))
    a'^(g+1) <- copy of a^(g)
    x'^(g+1) ~ x^(g) + sigma^(g) N(0, C^(g))
    Q' <- Q^(g) united with {a'^(g+1)}
    updateStepSize(a^(g), 1[a'^(g+1) precedes a^(g) w.r.t. Q'])
    updateStepSize(a'^(g+1), 1[a'^(g+1) precedes a^(g) w.r.t. Q'])
    updateCovariance(a'^(g+1), (x'^(g+1) - x^(g)) / sigma^(g))
    Q^(g+1) <- the mu best individuals in Q' with respect to the relation defined above
  until stopping criterion is met

The parent of the single offspring is selected uniformly at random among the non-dominated solutions in the current population, and an offspring is generated. The step sizes of the parent and its offspring are updated depending on whether the mutation was successful, that is, whether the offspring is better than the parent according to the relation $\prec_{Q'}$. Both step size and covariance matrix update are the same as in the single-objective (1+1)-CMA-ES described above. The best individuals in $Q'$ sorted by $\prec_{Q'}$ form the next parent generation.
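For two objectives, the contributing hypervolume used as the second-level criterion has a simple closed form. The following Python sketch (our illustration, assuming a mutually non-dominated set of minimization points and a reference point dominated by all of them) computes it:

def contributing_hypervolume_2d(points, reference):
    # points: list of (f1, f2) pairs forming a mutually non-dominated
    # front (minimization); reference: point dominated by all of them.
    order = sorted(range(len(points)), key=lambda i: points[i])  # ascending f1
    contrib = [0.0] * len(points)
    for k, i in enumerate(order):
        f1, f2 = points[i]
        right_f1 = points[order[k + 1]][0] if k + 1 < len(order) else reference[0]
        upper_f2 = points[order[k - 1]][1] if k > 0 else reference[1]
        # Exclusive rectangle covered only by point i.
        contrib[i] = (right_f1 - f1) * (upper_f2 - f2)
    return contrib

Ranking the individuals of one non-dominance level by descending contribution yields the contribution rank used in the relation above; steady-state selection then discards the individual whose removal loses the least hypervolume.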
D. Hybrid MO-CMA-ES

In this section we introduce the canonical version of an algorithm for solving vector-valued optimization problems by using the scalarization methods discussed in Sec. II and the single-objective (1+1)-CMA-ES. Depending on the scalarization method, the algorithm is denoted by (1+1)-CMA-ES^T or (1+1)-CMA-ES^ws, where the superscript indicates whether the Tchebycheff (T) or the weighted-sum (ws) approach is used.

a) Choosing the weight vectors: We restrict ourselves to bicriteria optimization problems $f : \mathbb{R}^n \to \mathbb{R}^2$.

Such problems allow for a simple and efficient scheme to systematically select the weighting vectors. Given a number of weight steps $l$, a set of weighting vectors

$$W = \left\{ (\alpha, 1 - \alpha) \;\middle|\; \alpha = \tfrac{i}{l - 1},\; i = 0, \dots, l - 1 \right\}$$

is generated.

b) Calculating the utopian point: The Tchebycheff method requires a utopian point $u$. Unfortunately, this point is not known in advance given a multi-objective optimization problem. Thus, we rely on the (1+1)-CMA-ES to calculate an estimate of the ideal point $z^*$ and set $u = z^* - \epsilon \mathbf{1}$ for some $\epsilon > 0$, where $\mathbf{1}$ is the $m$-dimensional vector of ones. Let $f$ be a bicriteria optimization problem, and let $f_1(x_1)$, $f_2(x_2)$ be the objective values of the final points found by the (1+1)-CMA-ES running on $f_1$ and $f_2$, respectively. Then we choose $z^*_1 = f_1(x_1)$ and $z^*_2 = f_2(x_2)$.

c) Hybrid algorithms: If Tchebycheff scalarization is used, the algorithm reads:

Algorithm 3: (1+1)-CMA-ES^T
  initialize x^(0)
  (x_1, f_1(x_1)) <- (1+1)-CMA-ES on f_1
  u_1 <- f_1(x_1) - epsilon
  initialize x^(0)
  (x_2, f_2(x_2)) <- (1+1)-CMA-ES on f_2
  u_2 <- f_2(x_2) - epsilon
  W <- {(alpha, 1 - alpha) | alpha = i/(l - 1), i = 0, ..., l - 1}
  foreach w in W do
    initialize x^(0)
    (x, f(x)) <- (1+1)-CMA-ES on f_(w,u)

In the first lines a utopian point is estimated based on an estimate of the optimum of each objective. Then, for each $w \in W$ with $w_1, w_2 \ge 0$ and $w_1 + w_2 = 1$, the (1+1)-CMA-ES is run on $f_{(w,u)}$ for a given number of generations, producing a solution $x$ and the corresponding objective vector $f(x)$. If the weighted-sum approach is used, $f_{(w,u)}$ is simply replaced by $f_{(w)}$ in the algorithm above; in this case an estimate of a utopian point is not required, and the estimation steps can be dropped, keeping only the construction of $W$.

For each single-objective optimization of a scalarized function, all parameters of the (1+1)-CMA-ES are reinitialized, in particular the global step size. However, given a weighting vector $w$ and the corresponding solution $x_w$, Rudolph et al. suggested in [15] to use $x_w$ as the initial search point when solving the single-objective problem corresponding to the next weighting vector $w' = (w_1 + (l-1)^{-1}, w_2 - (l-1)^{-1})$. Apart from the initial search point $x^{(0)}$, all parameters of the (1+1)-CMA-ES are reinitialized. The modified algorithms are denoted by (1+1)-CMA-ES^T_r and (1+1)-CMA-ES^ws_r, respectively (subscript r for the reuse of the previous solution). For example, the (1+1)-CMA-ES^ws_r algorithm reads:

Algorithm 4: (1+1)-CMA-ES^ws_r
  initialize x^(0)
  W <- {(alpha, 1 - alpha) | alpha = i/(l - 1), i = 0, ..., l - 1}
  foreach w in W do
    (x, f(x)) <- (1+1)-CMA-ES on f_(w)
    x^(0) <- x
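The warm-start scheme is easy to express in Python. The sketch below is our illustration, not code from the paper; run_cma is an assumed wrapper around a single-objective (1+1)-CMA-ES that returns the best point found, and the zero initial point is a placeholder for a problem-dependent choice:

import numpy as np

def hybrid_warm_start(f1, f2, run_cma, n, l, generations):
    # Sweep l weighted-sum subproblems (assumes l >= 2); reuse the optimum
    # for one weight vector as the starting point for the next, while all
    # other strategy parameters (including the step size) are reset per run.
    x0 = np.zeros(n)                  # problem-dependent initial point
    front = []
    for i in range(l):
        alpha = i / (l - 1)
        fw = lambda x, a=alpha: a * f1(x) + (1.0 - a) * f2(x)  # weighted sum
        x = run_cma(fw, x0, generations)
        front.append((f1(x), f2(x)))
        x0 = x                        # warm start for the next weight vector
    return front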
IV. EXPERIMENTS

The experimental setup for the performance evaluation has been chosen similar to the one in [5]. The results of both variants of the (1+1)-CMA-ES^T and the (1+1)-CMA-ES^ws were compared to results of the steady-state MO-CMA-ES using the S-metric (hypervolume) as second-level sorting criterion. Our evaluation is based on two performance indicators, the epsilon-indicator and the hypervolume indicator. We use the evaluation module of PISA presented in [16] for parts of the performance assessment. (PISA stands for Platform and Programming Language Independent Interface for Search Algorithms.) We briefly outline the evaluation process below and refer to [17], [18] for a detailed description of the methods.

Before performance indicators are computed, the data are normalized. We want to compare algorithms on a particular optimization problem after a given number of fitness evaluations, and we assume that we have conducted $t$ trials. We consider the non-dominated individuals of the union of all $t$ populations after these evaluations. These individuals make up the reference set $R$. Their objective vectors are normalized by an affine linear transformation such that for every objective the smallest and largest objective function values are mapped to 1 and 2, respectively.

The higher the value of the unary hypervolume indicator, the more volume is covered by the final set of solutions and the better the performance. (Note that in [5], [6] the hypervolume is compared to a reference set, and therefore a lower value indicates better performance there.) In contrast, a lower value of the binary epsilon-indicator means better performance: the epsilon-indicator is computed with respect to the reference set $R$, and the smaller the value, the closer the result is to $R$.
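A minimal Python sketch of this normalization step (our illustration; the target interval [1, 2] follows the convention of the performance-assessment tools, and degenerate objectives with zero range are not handled):

import numpy as np

def normalize_objectives(vectors, lo=1.0, hi=2.0):
    # Affine per-objective rescaling: for every objective, map the
    # smallest and largest value across all vectors to lo and hi.
    a = np.asarray(vectors, dtype=float)
    mn, mx = a.min(axis=0), a.max(axis=0)
    return lo + (hi - lo) * (a - mn) / (mx - mn)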

A. Benchmark Functions

Several benchmark functions have been chosen from the literature. Four of the constrained ZDT functions [19] and three of their rotated variants, termed IHR [5], were selected. Additionally, the unconstrained and rotated functions ELLI1, ELLI2, CIGTAB1, and CIGTAB2 proposed in [5] were used. The benchmark function WFG introduced in [20] has been chosen to illustrate the ability of a scalarization method to find solutions in concave regions of the Pareto-optimal front. The definitions of the benchmark functions are summarized in Table I.

B. Parameter Setup

For each of the algorithms, independent trials with 100,000 fitness function evaluations each were carried out. For the hybrid algorithms, a number of weight steps $l = 100$ with the single-objective CMA-ES running for 1,000 generations per weight vector was chosen. For the MO-CMA-ES, a population size of 100 and an offspring population size of 1 were chosen; accordingly, the number of generations was set to 100,000. All other parameters were set to standard values.

C. Results

Table II shows the results of the performance evaluation after 100,000 fitness evaluations. At an earlier checkpoint the results were similar and are therefore not shown here. Some differences are so small that they are not apparent in the result table, although they are statistically significant. For the function ELLI2, where the results are very close, we additionally provide boxplots of the indicator values in Fig. 2.

D. Discussion

The MO-CMA-ES performs significantly better than the Tchebycheff and weighted-sum variants of the (1+1)-CMA-ES on all benchmark functions except for ELLI1 and ELLI2. On nine test problems, the warm-start variants perform significantly better than their counterparts without warm starts, while the reverse holds only on one of the IHR problems. Thus, the warm starts improve the hybrid algorithms.

The function WFG has a non-convex Pareto front, and thus it does not come as a surprise that the weighted-sum approach performs badly on WFG. The good results of the weighted-sum (1+1)-CMA-ES algorithms on the functions CIGTAB1, CIGTAB2, ELLI1, and ELLI2 are striking, but these are ideal test functions for this method. First, as all objective functions are convex, the weighted-sum approach is in principle able to find all Pareto-optimal points. Second, for fixed weighting coefficients, these functions reduce to quadratic optimization problems (the scalarized functions are elliptic paraboloids), and exactly these problems are ideal for the CMA-ES. That is, on CIGTAB1, CIGTAB2, ELLI1, and ELLI2, the weighted-sum hybrids decompose the multi-objective optimization problem into subproblems perfectly suited for the CMA-ES, and this results in superior performance on these benchmark functions.
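To make this last point concrete, here is a short derivation (our illustration, not an equation from the paper): if $f_1(x) = x^\top A_1 x$ and $f_2(x) = x^\top A_2 x$ with symmetric positive definite $A_1$, $A_2$, then for any fixed weight $w \in [0, 1]$,

$$f_{(w)}(x) = w\, x^\top A_1 x + (1 - w)\, x^\top A_2 x = x^\top \big( w A_1 + (1 - w) A_2 \big) x,$$

and the matrix $w A_1 + (1 - w) A_2$ is again positive definite. Every scalarized subproblem is therefore a convex quadratic function with a unique optimum, exactly the model class on which the covariance matrix adaptation of the CMA-ES is most effective.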
V. CONCLUSIONS

Scalarization approaches to multi-objective optimization (MOO) as considered in this paper suffer from the curse of dimensionality and are not suited for many objectives. However, for bicriteria optimization, single-objective evolutionary algorithms applied to a set of scalarized fitness functions may be competitive with population-based approaches, in which the population of a single trial is considered as the solution to the MOO problem. Here we compared the multi-objective CMA evolution strategy (MO-CMA-ES), which is based on the elitist CMA-ES, with the application of the elitist CMA-ES to the optimization of scalarized fitness functions. To this end, we introduced new hybrid algorithms combining established scalarization methods with the CMA-ES. These algorithms have similar invariance properties to the MO-CMA-ES.

On most standard benchmark functions, the MO-CMA-ES outperformed the elitist CMA-ES combined with scalarization. But when the scalarization produced perfectly quadratic fitness functions, the powerful strategy adaptation of the CMA-ES could exploit this ideal structure, and the CMA-ES combined with weighted-sum scalarization performed on par with the truly population-based MO-CMA-ES. Because of the known problems of the weighted-sum approach with non-convexity, the hybrid algorithm using the Tchebycheff metric performed better than weighted aggregation on non-convex MOO problems.

Future work will include the evaluation of the proposed algorithms on more than two objectives, with the hypothesis that the MO-CMA-ES will outperform the hybrid methods even more clearly. Further, we will compare the performance in scenarios where the number of fitness evaluations is strictly limited.

REFERENCES

[1] K. Miettinen, Nonlinear Multiobjective Optimization, ser. Kluwer's International Series in Operations Research & Management Science. Kluwer Academic Publishers, 1999, vol. 12.
[2] N. Hansen and A. Ostermeier, "Completely derandomized self-adaptation in evolution strategies," Evolutionary Computation, vol. 9, no. 2, pp. 159-195, 2001.
[3] N. Hansen, "The CMA evolution strategy: A comparing review," in Towards a New Evolutionary Computation. Advances on Estimation of Distribution Algorithms, J. A. Lozano, P. Larrañaga, I. Inza, and E. Bengoetxea, Eds. Springer-Verlag, 2006.
[4] A. Auger and N. Hansen, "A restart CMA evolution strategy with increasing population size," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2005). IEEE Press, 2005, pp. 1769-1776.
[5] C. Igel, N. Hansen, and S. Roth, "Covariance matrix adaptation for multi-objective optimization," Evolutionary Computation, vol. 15, no. 1, pp. 1-28, 2007.
[6] C. Igel, T. Suttorp, and N. Hansen, "Steady-state selection and efficient covariance matrix update in the multi-objective CMA-ES," in Fourth International Conference on Evolutionary Multi-Criterion Optimization (EMO 2007), ser. LNCS, vol. 4403. Springer-Verlag, 2007.

TABLE I. Benchmark problems to be minimized. Here $y = Ox$, where $O \in \mathbb{R}^{n \times n}$ is an orthogonal matrix; for the definitions of the functions $f$, $g$, and $h$ we refer to [19]. For the rotated benchmark functions, $y = O_1 x$ and $z = O_2 x$, where $O_1$ and $O_2$ are orthogonal matrices. For the benchmark function WFG, the input vector $x$ is transformed several times, resulting in the vector $t$; we refer to [20] for the definitions of the transformations. The table lists the variable bounds, objective functions, and optimal solutions for the IHR problems, WFG, and the unconstrained functions ELLI1, ELLI2, CIGTAB1, and CIGTAB2. [The formulas themselves are not legibly recoverable from this transcription.]

Fig. 2. Boxplots of the results of the performance assessment for the fitness function ELLI2. The left plot shows the values of the hypervolume indicator, the right plot shows the epsilon-indicator values on a logarithmic scale. The boxes refer to the 25th and 75th percentiles. In each plot, from left to right, the results of the MO-CMA-ES and the four hybrid variants are shown.

TABLE II. Results on the benchmark problems. The upper and lower parts of the table show the median over all trials, after 100,000 fitness evaluations, of the hypervolume indicator (the higher the better) and of the epsilon-indicator (the lower the better), respectively. The best value in each row is underlined, the worst is printed in bold. The superscripts I, II, III, IV and V indicate whether an algorithm is statistically significantly better than the MO-CMA-ES and the four hybrid algorithms (in the column order of the table), respectively (two-sided Wilcoxon rank sum test, p < 0.001; slanted superscripts refer to a significance level of p < 0.01). [The numeric entries of the table are not legibly recoverable from this transcription.]

[7] S. Gass and T. Saaty, "The computational algorithm for the parametric objective function," Naval Research Logistics Quarterly, vol. 2, pp. 39-45, 1955.
[8] L. Zadeh, "Optimality and non-scalar-valued performance criteria," IEEE Transactions on Automatic Control, vol. 8, pp. 59-60, 1963.
[9] I. Rechenberg, Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog, 1973.
[10] H.-P. Schwefel, Evolution and Optimum Seeking, ser. Sixth-Generation Computer Technology Series. John Wiley & Sons, 1995.
[11] H.-G. Beyer and H.-P. Schwefel, "Evolution strategies: A comprehensive introduction," Natural Computing, vol. 1, no. 1, pp. 3-52, 2002.
[12] S. Kern, S. D. Müller, N. Hansen, D. Büche, J. Ocenasek, and P. Koumoutsakos, "Learning probability distributions in continuous evolutionary algorithms: A comparative review," Natural Computing, vol. 3, pp. 77-112, 2004.
[13] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, 2002.
[14] N. Beume, B. Naujoks, and M. Emmerich, "SMS-EMOA: Multiobjective selection based on dominated hypervolume," European Journal of Operational Research, vol. 181, no. 3, pp. 1653-1669, 2007.
[15] G. Rudolph, B. Naujoks, and M. Preuss, "Capabilities of EMOA to detect and preserve equivalent Pareto subsets," in Proceedings of the 4th International Conference on Evolutionary Multi-Criterion Optimization (EMO 2007), S. Obayashi, Ed. Springer-Verlag, 2007, pp. 36-50.
[16] J. Knowles, L. Thiele, and E. Zitzler, "A tutorial on the performance assessment of stochastic multiobjective optimizers," TIK Report 214, Computer Engineering and Networks Laboratory (TIK), ETH Zurich, Switzerland, 2006, revised version.
[17] E. Zitzler and L. Thiele, "Multiobjective optimization using evolutionary algorithms: A comparative case study," in Fifth International Conference on Parallel Problem Solving from Nature (PPSN-V), A. E. Eiben, T. Bäck, M. Schoenauer, and H.-P. Schwefel, Eds. Berlin, Germany: Springer-Verlag, 1998, pp. 292-301.
[18] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. Grunert da Fonseca, "Performance assessment of multiobjective optimizers: An analysis and review," IEEE Transactions on Evolutionary Computation, vol. 7, no. 2, pp. 117-132, 2003.
[19] E. Zitzler, K. Deb, and L. Thiele, "Comparison of multiobjective evolutionary algorithms: Empirical results," Evolutionary Computation, vol. 8, no. 2, pp. 173-195, 2000.
[20] S. Huband, P. Hingston, L. Barone, and L. While, "A review of multiobjective test problems and a scalable test problem toolkit," IEEE Transactions on Evolutionary Computation, vol. 10, no. 5, pp. 477-506, 2006.

Efficient Non-domination Level Update Approach for Steady-State Evolutionary Multiobjective Optimization Efficient Non-domination Level Update Approach for Steady-State Evolutionary Multiobjective Optimization Ke Li 1, Kalyanmoy Deb 1, Qingfu Zhang 2, and Sam Kwong 2 1 Department of Electrical and Computer

More information

Adaptive Multi-objective Particle Swarm Optimization Algorithm

Adaptive Multi-objective Particle Swarm Optimization Algorithm Adaptive Multi-objective Particle Swarm Optimization Algorithm P. K. Tripathi, Sanghamitra Bandyopadhyay, Senior Member, IEEE and S. K. Pal, Fellow, IEEE Abstract In this article we describe a novel Particle

More information

Two-Dimensional Fitting of Brightness Profiles in Galaxy Images with a Hybrid Algorithm

Two-Dimensional Fitting of Brightness Profiles in Galaxy Images with a Hybrid Algorithm Two-Dimensional Fitting of Brightness Profiles in Galaxy Images with a Hybrid Algorithm Juan Carlos Gomez, Olac Fuentes, and Ivanio Puerari Instituto Nacional de Astrofísica Óptica y Electrónica Luis Enrique

More information

Multiobjective Prototype Optimization with Evolved Improvement Steps

Multiobjective Prototype Optimization with Evolved Improvement Steps Multiobjective Prototype Optimization with Evolved Improvement Steps Jiri Kubalik 1, Richard Mordinyi 2, and Stefan Biffl 3 1 Department of Cybernetics Czech Technical University in Prague Technicka 2,

More information

Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems

Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems Saku Kukkonen and Kalyanmoy Deb Kanpur Genetic Algorithms Laboratory (KanGAL) Indian Institute

More information

Fuzzy Hamming Distance in a Content-Based Image Retrieval System

Fuzzy Hamming Distance in a Content-Based Image Retrieval System Fuzzy Hamming Distance in a Content-Based Image Retrieval System Mircea Ionescu Department of ECECS, University of Cincinnati, Cincinnati, OH 51-3, USA ionescmm@ececs.uc.edu Anca Ralescu Department of

More information

Communication Strategies in Distributed Evolutionary Algorithms for Multi-objective Optimization

Communication Strategies in Distributed Evolutionary Algorithms for Multi-objective Optimization CONTI 2006 The 7 th INTERNATIONAL CONFERENCE ON TECHNICAL INFORMATICS, 8-9 June 2006, TIMISOARA, ROMANIA Communication Strategies in Distributed Evolutionary Algorithms for Multi-objective Optimization

More information

Evolutionary Algorithms: Lecture 4. Department of Cybernetics, CTU Prague.

Evolutionary Algorithms: Lecture 4. Department of Cybernetics, CTU Prague. Evolutionary Algorithms: Lecture 4 Jiří Kubaĺık Department of Cybernetics, CTU Prague http://labe.felk.cvut.cz/~posik/xe33scp/ pmulti-objective Optimization :: Many real-world problems involve multiple

More information

ROBUST MULTI-OBJECTIVE OPTIMIZATION OF WATER DISTRIBUTION NETWORKS

ROBUST MULTI-OBJECTIVE OPTIMIZATION OF WATER DISTRIBUTION NETWORKS ROBUST MULTI-OBJECTIVE OPTIMIZATION OF WATER DISTRIBUTION NETWORKS Taishi Ohno, Hernán Aguirre, Kiyoshi Tanaka Faculty of Engineering, Shinshu University, Wakasato, Nagano-shi, Japan 15tm209f@shinshu-u.ac.jp,

More information

On The Effects of Archiving, Elitism, And Density Based Selection in Evolutionary Multi-Objective Optimization

On The Effects of Archiving, Elitism, And Density Based Selection in Evolutionary Multi-Objective Optimization On The Effects of Archiving, Elitism, And Density Based Selection in Evolutionary Multi-Objective Optimization Marco Laumanns, Eckart Zitzler, and Lothar Thiele ETH Zürich, Institut TIK, CH 8092 Zürich,

More information

EliteNSGA-III: An Improved Evolutionary Many- Objective Optimization Algorithm

EliteNSGA-III: An Improved Evolutionary Many- Objective Optimization Algorithm EliteNSGA-III: An Improved Evolutionary Many- Objective Optimization Algorithm Amin Ibrahim, IEEE Member Faculty of Electrical, Computer, and Software Engineering University of Ontario Institute of Technology

More information

An Empirical Comparison of Several Recent Multi-Objective Evolutionary Algorithms

An Empirical Comparison of Several Recent Multi-Objective Evolutionary Algorithms An Empirical Comparison of Several Recent Multi-Objective Evolutionary Algorithms Thomas White 1 and Shan He 1,2 1 School of Computer Science 2 Center for Systems Biology, School of Biological Sciences,

More information

An Evolutionary Algorithm Approach to Generate Distinct Sets of Non-Dominated Solutions for Wicked Problems

An Evolutionary Algorithm Approach to Generate Distinct Sets of Non-Dominated Solutions for Wicked Problems An Evolutionary Algorithm Approach to Generate Distinct Sets of Non-Dominated Solutions for Wicked Problems Marcio H. Giacomoni Assistant Professor Civil and Environmental Engineering February 6 th 7 Zechman,

More information

Author s Accepted Manuscript

Author s Accepted Manuscript Author s Accepted Manuscript On The Use of Two Reference Points in Decomposition Based Multiobjective Evolutionary Algorithms Zhenkun Wang, Qingfu Zhang, Hui Li, Hisao Ishibuchi, Licheng Jiao www.elsevier.com/locate/swevo

More information

Approximation Model Guided Selection for Evolutionary Multiobjective Optimization

Approximation Model Guided Selection for Evolutionary Multiobjective Optimization Approximation Model Guided Selection for Evolutionary Multiobjective Optimization Aimin Zhou 1, Qingfu Zhang 2, and Guixu Zhang 1 1 Each China Normal University, Shanghai, China 2 University of Essex,

More information

Late Parallelization and Feedback Approaches for Distributed Computation of Evolutionary Multiobjective Optimization Algorithms

Late Parallelization and Feedback Approaches for Distributed Computation of Evolutionary Multiobjective Optimization Algorithms Late arallelization and Feedback Approaches for Distributed Computation of Evolutionary Multiobjective Optimization Algorithms O. Tolga Altinoz Department of Electrical and Electronics Engineering Ankara

More information

Initial Population Construction for Convergence Improvement of MOEAs

Initial Population Construction for Convergence Improvement of MOEAs In Evolutionary Multi-Criterion Optimization by Carlos A. Coello Coello, Arturo Hernández Aguirre, and Eckart Zitzler (Eds.). In Lecture Notes in Computer Science (LNCS), Volume 3410, c Springer, Berlin,

More information

Multiobjective Optimisation. Why? Panorama. General Formulation. Decision Space and Objective Space. 1 of 7 02/03/15 09:49.

Multiobjective Optimisation. Why? Panorama. General Formulation. Decision Space and Objective Space. 1 of 7 02/03/15 09:49. ITNPD8/CSCU9YO Multiobjective Optimisation An Overview Nadarajen Veerapen (nve@cs.stir.ac.uk) University of Stirling Why? Classic optimisation: 1 objective Example: Minimise cost Reality is often more

More information

Finding Sets of Non-Dominated Solutions with High Spread and Well-Balanced Distribution using Generalized Strength Pareto Evolutionary Algorithm

Finding Sets of Non-Dominated Solutions with High Spread and Well-Balanced Distribution using Generalized Strength Pareto Evolutionary Algorithm 16th World Congress of the International Fuzzy Systems Association (IFSA) 9th Conference of the European Society for Fuzzy Logic and Technology (EUSFLAT) Finding Sets of Non-Dominated Solutions with High

More information

Evolutionary multi-objective algorithm design issues

Evolutionary multi-objective algorithm design issues Evolutionary multi-objective algorithm design issues Karthik Sindhya, PhD Postdoctoral Researcher Industrial Optimization Group Department of Mathematical Information Technology Karthik.sindhya@jyu.fi

More information

Evolving Human Competitive Research Spectra-Based Note Fault Localisation Techniques

Evolving Human Competitive Research Spectra-Based Note Fault Localisation Techniques UCL DEPARTMENT OF COMPUTER SCIENCE Research Note RN/12/03 Evolving Human Competitive Research Spectra-Based Note Fault Localisation Techniques RN/17/07 Deep Parameter Optimisation for Face Detection Using

More information

Bio-inspired Optimization and Design

Bio-inspired Optimization and Design Eckart Zitzler Computer Engineering and Networks Laboratory Introductory Example: The Knapsack Problem weight = 750g profit = 5 weight = 1500g profit = 8 weight = 300g profit = 7 weight = 1000g profit

More information

An Evolutionary Many Objective Optimisation Algorithm with Adaptive Region Decomposition

An Evolutionary Many Objective Optimisation Algorithm with Adaptive Region Decomposition 1 An Evolutionary Many Objective Optimisation Algorithm with Adaptive Region Decomposition Hai-Lin Liu, Lei Chen, Qingfu Zhang, Senior Member, IEEE and Kalyanmoy Deb, Fellow, IEEE COIN Report Number 016014

More information

Very Fast Non-Dominated Sorting

Very Fast Non-Dominated Sorting Decision Making in Manufacturing and Services Vol. 8 2014 No. 1 2 pp. 13 23 Very Fast Non-Dominated Sorting Czesław Smutnicki, Jarosław Rudy, Dominik Żelazny Abstract. A new and very efficient parallel

More information