Development of Evolutionary Multi-Objective Optimization


Andreas Mießen
RWTH Aachen University, AVT - Aachener Verfahrenstechnik, Process Systems Engineering
Turmstrasse 46, D- Aachen

Abstract: This paper provides a historical view of the development of evolutionary multi-objective optimization, an area of multi-criteria decision making which aims to solve optimization problems with two or more objectives. The key challenge is to optimize those objective functions simultaneously despite their conflicting behavior. In the past 60 years a great deal of research has been devoted to methodologies for solving this kind of problem, which are systematically reviewed in this work.

1 Introduction

Multi-objective optimization (MOO) seeks those settings of variable input parameters which yield optimal trade-offs between multiple conflicting objectives. To obtain one solution (a set of input parameters leading to optimal objectives), the objective functions are evaluated for one given set of input parameters. By comparing the objective values of solutions according to a defined optimality criterion (e.g., Pareto dominance), only the best solutions are stored. In contrast to single-objective optimization, a set of optimal solutions (the so-called Pareto front) is obtained, since several objectives are optimized simultaneously. Nowadays, MOO is used in a broad range of applications, from the finance sector through most fields of engineering to the applied sciences [3]. There is therefore great interest in developing efficient and effective algorithms to solve multi-objective optimization problems (MOPs). In the 1950s research started paying more attention to MOPs and the first traditional approaches were developed. Back then, the main idea was to convert a MOP into a single-objective optimization problem (SOP).
With the introduction of a principle by Goldberg [7] to evaluate the optimality of solutions in a multi-dimensional manner, a new era of algorithms began. It was then possible to find a set of optimal solutions without simplifying the problem into a SOP. This new field implied the use of evolutionary algorithms to solve MOPs and is elaborated in the following

chapters. The paper is structured as follows. The second chapter introduces basic concepts and terminology of multi-objective optimization together with a formal problem description. Chapter 3 describes the development of algorithms for MOPs over the last decades, ending with the newest trends in research. Finally, the work is concluded in Chapter 4.

2 Basic Concepts and Terminology

Multi-objective optimization (MOO) is part of multiple criteria decision making. It is an optimization strategy for several competing objectives. Compared to the well-known single-objective case, where the target value is a scalar, in MOO both the input and the output are non-scalar. The dimensions of the input and output vectors are independent of each other and depend on the particular optimization problem. The goal is to obtain those input parameters which provide optimal outputs; in other words, the objectives are to be optimized. Optimal is defined in the sense of Pareto optimality, which is explained in Section 2.2. Unless the considered problem is trivial, the result is always a set of valid optimal solution vectors, where each solution vector contained in the set optimizes all objectives at the same time. A single optimal solution is called a Pareto design, and the set of solutions is called the Pareto-optimal front (PoF) or simply the Pareto front. Usually nonlinear problems are considered, where the relation between design parameters and objectives is complex and not predictable by the user.

2.1 Multi-Objective Optimization Problem

A multi-objective optimization problem (MOP) consists of n parameters (decision variables), k objective functions, and a set of m constraints. Objective functions and constraints are functions of the decision variables. For the sake of convenience, we assume that the objective functions are to be minimized.
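To make this formal setup concrete, the following Python sketch defines a small hypothetical problem; the two quadratic objectives and the single constraint are invented purely for illustration and do not come from the paper:

```python
# Hypothetical bi-objective problem (n = 2, k = 2, m = 1), for illustration only:
# minimize f1(x) = x1^2 + x2^2 and f2(x) = (x1 - 1)^2 + x2^2
# subject to c1(x) = x1 + x2 - 2 <= 0.

def objectives(x):
    """Evaluate the k = 2 objective functions for a decision vector x."""
    x1, x2 = x
    return (x1 ** 2 + x2 ** 2, (x1 - 1.0) ** 2 + x2 ** 2)

def constraints(x):
    """Evaluate the m = 1 constraint functions."""
    x1, x2 = x
    return (x1 + x2 - 2.0,)

def is_feasible(x):
    """A decision vector is feasible iff every constraint value is <= 0."""
    return all(c <= 0 for c in constraints(x))

print(objectives((0.5, 0.0)))   # -> (0.25, 0.25)
print(is_feasible((0.5, 0.0)))  # -> True
```

The two objectives pull the solution towards different points, (0, 0) and (1, 0), which is exactly the conflict that produces a set of optimal trade-offs rather than a single optimum.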
If the objective functions are to be maximized, we can convert the maximization problem into a minimization problem by simply multiplying the objectives by −1. The general MOP can be stated as: [13]

    min_x f(x) = (f_1(x), f_2(x), ..., f_k(x))
    s.t.  c(x) = (c_1(x), c_2(x), ..., c_m(x))^T ≤ 0
          x = (x_1, x_2, ..., x_n) ∈ X                              (1)

where x is the decision vector and f(x) the vector of objective functions. X is denoted as the decision space, and the set Y of all possible objective vectors y = (y_1, y_2, ..., y_k) ∈ Y is called the objective space. The set of decision vectors x that satisfy the constraints c(x) is called the feasible set X_f and is defined as: [13]

    X_f = {x ∈ X | c(x) ≤ 0}.                                       (2)

The image of X_f under f(x) is denoted as Y_f and is referred to as the feasible region:

    Y_f = f(X_f) = ∪_{x ∈ X_f} {f(x)}.                              (3)

2.2 Pareto Dominance

The comparison of two solutions in a single-objective optimization problem (SOP) is rather intuitive: in a minimization problem a solution y_1 is better than y_2 if y_1 < y_2. In a multi-dimensional problem, however, the comparison of two solutions represented as vectors of objectives is not as straightforward as in the one-dimensional case. In order to compare two different solutions of a MOP, Goldberg (1989) came up with the concept of Pareto dominance. A decision vector a is Pareto dominant to a decision vector b if all objectives of a are equal to or better than (smaller, in case of a minimization problem) the corresponding objective values of b, and if there exists at least one objective value of a which is strictly better than its corresponding value in b:

    a ≺ b  ⇔  (f_i^a ≤ f_i^b  ∀ i ∈ {1, 2, ..., k}) ∧ (∃ i ∈ {1, 2, ..., k} : f_i^a < f_i^b).   (4)

A decision vector a weakly dominates a decision vector b if

    a ⪯ b  ⇔  f_i^a ≤ f_i^b  ∀ i ∈ {1, 2, ..., k}.                  (5)

Note that the difference between a dominating and a weakly dominating solution is simply the absence of the strict inequality in Definition 5. With Definitions 4 and 5 it is now possible to compare two different decision vectors and thereby two different solutions. Thus, we can define Pareto optimality as follows: [13] A decision vector x ∈ X_f is said to be non-dominated or Pareto-optimal

regarding a set A ⊆ X_f iff:

    ∄ a ∈ A : a ≺ x.                                                (6)

Fig. 1 illustrates Pareto dominance for a minimization problem with two objectives; f^a and f^b are the solution vectors corresponding to their decision vectors a and b, respectively.

Fig. 1: Pareto dominance principle for a minimization problem with two objective functions: (a) dominance relation between two solutions; (b) Pareto-optimal front (dotted line).

Pareto-Optimal Front. The set containing all decision vectors x which are non-dominated within the entire feasible decision space X_f is denoted as the Pareto-optimal set. The corresponding set of solution vectors is called the Pareto-optimal front (PoF). [13]

3 History of Evolutionary Multi-Objective Optimization

3.1 Traditional Approaches

The first treatment of multi-objective optimization problems can be dated back to the 1950s, as Coello Coello has shown in [3]. Back then, researchers mainly tried to solve MOPs by converting them into SOPs, because the knowledge about solving single-objective optimization problems was already well advanced by that time. Examples of such scalarization techniques are the normalized weighted sum approach (NWS) and the ɛ-constraint method (trade-off method). In the NWS approach the objectives are multiplied by different weight factors and added up in order to obtain a scalar objective function. In contrast, the

trade-off method achieves a scalar objective function by considering only the objective f_i of highest interest to the user and treating all other objectives f_j (j ≠ i) as constraints limited by a certain ɛ_j. [1] All scalarization techniques share the property of producing one solution per optimization run. Hence, it takes several runs with different adjustments to obtain different compromises. Furthermore, those approaches require previous knowledge about the problem in order to make a reasonable choice of the required parameters.

3.2 Relevance of Evolutionary Algorithms in Multi-Objective Optimization

Evolutionary algorithms (EAs) represent a convenient strategy to solve MOPs. EAs are global search algorithms, which means they search for solutions in the entire decision space, independent of its complexity. When considering a SOP, EAs are likely to find the global rather than a local extremum, even though no previous knowledge about the problem is required. EAs are based on the principles of biological evolution, such as recombination, mutation, and selection. The basic idea is that environmental pressure causes natural selection, which results in an increase of the population's fitness. As an abstract measure of the population's fitness a quality function is used, which is intended to be maximized. Based on the fitness, an individual is chosen to seed the next generation as a so-called parent. By recombination and/or mutation of two parent individuals, new individuals (children) are created. Based on their fitness, the new individuals compete with the old ones for a place in the new generation. Hence, the fitness of the new population is always at least as good as the fitness of the previous one. This process can be iterated until a desired fitness is attained or a computational limit is reached. In the mathematical understanding, one individual represents one possible solution.
A single individual is defined by its features, the so-called design parameters. The quality or cost function determines the fitness of each individual, describing the quality of the solution. Single attributes of the solution's quality are referred to as objectives. These objectives are to be optimized in order to find the fittest individual and thus the best solution.
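The evolutionary principle described here can be written down generically. The following Python sketch maximizes a scalar fitness with tournament selection, blend recombination, and Gaussian mutation; the operators, parameter values, and toy fitness function are illustrative choices, not taken from any of the cited algorithms:

```python
import random

def evolve(fitness, pop, p_c=0.9, p_m=0.2, generations=200):
    """Generic evolutionary loop: selection, recombination, mutation,
    and survival of the fittest. `fitness` maps an individual (a list
    of floats) to a scalar quality to be maximized."""

    def select():
        # binary tournament: the fitter of two random individuals wins
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        children = []
        while len(children) < len(pop):
            p1, p2 = select(), select()
            if random.random() < p_c:            # recombination of two parents
                w = random.random()
                child = [w * u + (1 - w) * v for u, v in zip(p1, p2)]
            else:
                child = list(p1)
            if random.random() < p_m:            # mutation: small Gaussian step
                k = random.randrange(len(child))
                child[k] += random.gauss(0.0, 0.1)
            children.append(child)
        # children compete with the old population for a place
        pop = sorted(pop + children, key=fitness, reverse=True)[:len(pop)]
    return pop[0]

# toy usage: maximize -(x - 3)^2, whose optimum lies at x = 3
random.seed(0)
start = [[random.uniform(-10.0, 10.0)] for _ in range(20)]
best = evolve(lambda ind: -(ind[0] - 3.0) ** 2, start)
```

Because the new population is formed from the best of parents and children together, the best fitness never decreases from one generation to the next, mirroring the elitist behavior described above.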

General Steps in an Evolutionary Algorithm

The following steps are presented by Zitzler (1999) as an approximation of a general evolutionary algorithm.

1. Initialization: Generate a start population P_0 and set the generation index t = 0.
2. Fitness assignment: For each individual i ∈ P_t determine a scalar fitness value F(i).
3. Selection: Select individuals i ∈ P_t according to a given scheme and create a temporary population P' (mating pool).
4. Recombination: (a) Choose two individuals i, j ∈ P' with i ≠ j and remove them from P'. (b) Recombine i and j, thus creating children k, l. (c) Add k, l to P'' with probability p_c (crossover probability); otherwise add i, j to P''.
5. Mutation: Mutate each individual i ∈ P'' with mutation rate p_m. Add the resulting individuals to the mutation set P'''.
6. Termination: Set P_{t+1} = P''' and t = t + 1. If a chosen stopping criterion is satisfied, the evaluated objectives of the individuals in P_t represent the desired Pareto-optimal front; if not, continue with Step 2.

In the following, several methods for solving multi-objective optimization problems are presented in their historical development, starting with the first approaches in the 1950s and ending with the newest trends in this field.

3.3 Pareto Based Algorithms

With the incorporation of the Pareto dominance principle by David E. Goldberg (1989), a new era of multi-objective optimization began. In his seminal book on genetic algorithms [7], Goldberg suggested the use of non-dominated ranking as well as a niching technique to find a well-spread set of Pareto-optimal solutions. Pareto based algorithms were the first algorithms which made use of the Pareto dominance principle, using it to compare solutions in a multi-dimensional manner. Rather than comparing absolute values of each objective, it was now possible to define a fitness for each individual according to its dominance relation to other solutions. However, there are different ways to make use of the dominance relation in order to assign the fitness to each

individual. Possible strategies are, e.g., the number of solutions which are dominated by a particular solution x̂, the number of solutions that dominate x̂, or a combination of the two. In any case, the goal is to guide the optimization process towards the PoF. By guiding the search towards the PoF (Fig. 2), Pareto based algorithms maintain convergence as well as diversity of the solutions.

Fig. 2: Pareto based algorithms maintain convergence and diversity: (a) solutions converge towards the PoF; (b) solutions are spread along the PoF.

Examples of Pareto based algorithms are the well-known Niched Pareto Genetic Algorithm (NPGA) [8] and the Non-dominated Sorting Genetic Algorithm (NSGA) [11]. The key idea of NPGA is to combine Pareto dominance with a binary tournament selection. Two randomly chosen individuals are checked against a comparison group of size t_dom. According to the dominance of the chosen individuals with respect to the comparison group, one of them is chosen to seed the next generation. If both of them are either dominated or non-dominated, the individual with fewer neighboring solutions in its niche, defined by the niche radius σ_niche, is chosen. The downside of this algorithm is that the additional parameters t_dom and σ_niche, chosen by the user, have a noticeable impact on the performance. Thus, it requires additional knowledge and experience on the part of the user. NSGA's basic idea is to sort the solutions according to the number of designs they are dominated by. Every individual of one group is dominated by the same number of individuals and is thus assigned the same raw fitness value. The group of solutions assigned the highest fitness contains only non-dominated solutions; the group with the second highest fitness value contains solutions dominated by exactly one solution (which does not have to be the same for all group members), etc.
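The grouping just described can be sketched as follows; this is a simplified dominance-count ranking for illustration, not Deb's full non-dominated sorting procedure, and the sample points are invented:

```python
def dominates(fa, fb):
    """Pareto dominance for minimization: no worse in every objective
    and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def dominance_count_groups(solutions):
    """Group objective vectors by the number of solutions dominating them;
    group 0 is the non-dominated set and receives the best raw fitness."""
    groups = {}
    for s in solutions:
        count = sum(dominates(t, s) for t in solutions if t is not s)
        groups.setdefault(count, []).append(s)
    return groups

pts = [(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)]
print(dominance_count_groups(pts))
# group 0 holds (1, 4), (2, 2), (4, 1); (3, 3) is dominated once, (4, 4) four times
```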
Thereby, solutions dominated by fewer individuals are more likely to be selected for the mating process. The final fitness value is

denoted as the quotient of the raw fitness value and the local density in the neighborhood of the selected solution. As a consequence, solutions in a very dense area obtain a smaller final fitness value because of the large local density. However, this can lead to a loss of genuinely optimal solutions in the PoF due to the reduced fitness of solutions in crowded regions.

3.4 Elitist Pareto Multi-Objective Optimization Algorithms

The introduction of elitism constitutes the next big step in the history of multi-objective evolutionary algorithms (MOEAs). Zitzler (1999) investigated the impact of elitism in his PhD thesis and proposed a new elitist MOEA called the Strength Pareto Evolutionary Algorithm (SPEA), which outperformed most of the existing algorithms of the time in terms of computation time and convergence. The idea behind elitism is to keep the best solutions (elite solutions) stored separately in an elite set, usually called the archive. From that elite set at least one individual is chosen as a parent in each mating process, as illustrated in Fig. 3; each member of the elite set has the same probability of being chosen as a parent. Since elitism has improved the performance and convergence of MOEAs significantly [13], nowadays most algorithms use this approach. However, elitism is not used in exactly the same way in every algorithm; e.g., in some approaches the size of the elite set is limited, in others it can grow dynamically, etc. As an improvement of his own SPEA [15], in 2001 Zitzler introduced SPEA2 [14]. Since SPEA2 outperforms its predecessor in several ways, only the more recent version is briefly elaborated here. The three major improvements are an enhanced fitness assignment, a density estimation technique, and a new archive truncation method.
The new fitness assignment considers not only the dominance relation between a solution and the existing elite solutions in the archive, but also the density of solutions around it in the objective space. As density measure the k-nearest-neighbor approach is used, in which the Euclidean distance between a solution and its k-th nearest neighboring solution is evaluated. The constant k is calculated from the archive and population sizes, which are defined by the user. As a new archive truncation method, Zitzler proposed the so-called environmental selection. In the environmental selection process, solutions that have a smaller distance to their k-th nearest neighbor are removed until the desired archive size is reached. Thus, a good diversity is ensured while the number of elite solutions is kept bounded. The downside of SPEA2, however, is a high computation time caused by the additional k-nearest-neighbor evaluations.
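A minimal sketch of this distance-based truncation is given below. It is simplified: SPEA2 additionally breaks ties via the second-nearest neighbor, which is omitted here, and the sample front is invented:

```python
import math

def kth_nn_distance(p, points, k=1):
    """Euclidean distance from objective vector p to its k-th nearest
    neighbor among `points` (p itself excluded)."""
    dists = sorted(math.dist(p, q) for q in points if q is not p)
    return dists[k - 1]

def truncate_archive(archive, max_size, k=1):
    """Environmental-selection-style truncation: repeatedly remove the
    solution lying closest to its k-th nearest neighbor, so crowded
    regions are thinned out first."""
    archive = list(archive)
    while len(archive) > max_size:
        worst = min(archive, key=lambda p: kth_nn_distance(p, archive, k))
        archive.remove(worst)
    return archive

front = [(0.0, 1.0), (0.1, 0.9), (0.5, 0.5), (1.0, 0.0)]
print(truncate_archive(front, 3))  # one of the two crowded points is dropped
```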

Fig. 3: Comparison of the elitist MOEA principle (a) and the non-elitist MOEA principle (b): the population is updated with new designs based on dominance and diversity.

3.5 ɛ-Pareto Set Based Algorithms

With the ɛ-Pareto set approach, Laumanns (2002) proposed a new way of improving diversity along with convergence towards the PoF. His main idea was a generalisation of the known dominance principle by introducing ɛ-dominance. The ɛ-dominance principle does not allow two solutions with a difference of less than ɛ_i in the i-th objective to be non-dominated with respect to each other [4]. As a result, ɛ-dominance limits the local density of solutions within the objective space, and therewith a good diversity of solutions is ensured. By choosing the ɛ-vector the user can control the resolution of the obtained solutions. Formally, ɛ-dominance is defined as follows. Let f^a, f^b be the objective vectors of solutions a, b, respectively. Then f^a is said to ɛ-dominate f^b for some ɛ > 0, denoted as f^a ≺_ɛ f^b, iff

    (1 − ɛ) · f_i^a ≤ f_i^b  ∀ i ∈ {1, ..., k}.                     (7)
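Definition (4) and the ɛ-dominance of Equation (7) can be compared directly in code. In this sketch ɛ is given per objective, matching the ɛ-vector mentioned above; the numeric values are invented:

```python
def dominates(fa, fb):
    """Pareto dominance, Definition (4), for minimization."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def eps_dominates(fa, fb, eps):
    """Epsilon-dominance in the sense of Equation (7):
    (1 - eps_i) * f_i^a <= f_i^b for every objective i."""
    return all((1.0 - e) * a <= b for a, e, b in zip(fa, eps, fb))

fa, fb = (1.0, 2.0), (1.05, 2.05)
print(dominates(fa, fb))                    # True: fa is better in every objective
print(eps_dominates(fb, fa, (0.1, 0.1)))    # True: fb eps-dominates the nearby fa
```

Although fa Pareto-dominates fb, for ɛ_i = 0.1 each vector also ɛ-dominates the other, so an ɛ-based archive would keep only one of the two near-identical solutions.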

To get an insight into this method in practice, consider an example from the field of power electronics. The goal might be to minimize the volume of a converter while maximizing its efficiency. The optimizer might find two designs, one with a converter volume of 1 dm³ and another with a marginally larger volume. In this case the user would consider the two designs as practically equal. The algorithm, by contrast, would save both designs, since a difference is obtained. When defining a minimum difference ɛ of 0.1 dm³ between two designs, the algorithm would consider the two designs as equal and would store only one of them, depending on the box arrangement (in most cases the smaller one). A minimum difference between two designs can also be set for each of the other objectives. The defined minimum step ɛ_i for each objective i is saved in the so-called ɛ-vector. Since the user predefines a minimum difference for each objective, the objective space is divided by a (multi-dimensional) grid. Hence, each solution lies in a so-called hyper box within the grid, with an edge length of ɛ_i in the i-th objective. Each box has a unique identifier, which is the minimum optimal corner of the hyper box referenced by its grid coordinates, and one hyper box can only contain one solution. Thus, the number of obtained solutions is bounded by the number of hyper boxes, which is illustrated in Fig. 4. Obtained solutions are represented by the red points, the minimum optimal box corners by the green crosses. The red area shows the area dominated by applying the ɛ-dominance principle; the grey area is dominated according to the normal dominance principle. Furthermore, the obtained ɛ-dominant solutions are expected to be more widely spaced with respect to each other due to the minimum step size between two solutions.

Fig. 4: ɛ-dominance principle for different sizes of ɛ.
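The hyper-box bookkeeping can be sketched as follows: mapping each objective vector to the grid coordinates of its box turns the "one solution per box" rule into a simple key comparison. The resolution and the sample values echo the converter example above but are invented:

```python
import math

def box_index(f, eps):
    """Grid coordinates of the hyper box containing objective vector f,
    for a per-objective resolution vector eps (minimization). The index
    corresponds to the box corner used as its unique identifier."""
    return tuple(math.floor(fi / ei) for fi, ei in zip(f, eps))

eps = (0.1, 0.1)                       # minimum relevant difference per objective
print(box_index((1.0, 0.53), eps))     # -> (10, 5)

# two designs whose first objective differs by less than 0.1 fall into
# the same hyper box and are therefore treated as equal:
print(box_index((1.0, 0.53), eps) == box_index((1.04, 0.53), eps))  # -> True
```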

When the ɛ-dominance principle is used instead of the conventional Pareto dominance for updating the archive, not the solutions themselves but the addresses of their hyper boxes are checked for dominance. Three different cases can occur when a solution is considered for the archive. First, if the hyper box of the new solution is not dominated by the hyper box of any existing archive solution, the new solution enters the archive, and any archive solutions whose hyper boxes it dominates are discarded. The second case deals with two solutions lying in the same hyper box: if one solution dominates the other according to the normal dominance principle (Equation 4), the dominant solution enters the archive. The third case occurs if the new solution is located in a hyper box of a completely new part of the PoF, so that there exists neither an archive solution by which the new solution is dominated, nor does the new solution dominate existing ones. In this case, the new solution is added to the archive.

3.6 New Trends in Research

Most recent publications on evolutionary multi-objective optimization deal with the question of how to integrate user preferences into the multi-objective optimization algorithm, i.e., how to make it interactive. One common idea, first introduced in [5], is the reference point. The user, also called the decision maker (DM), sets one or more reference points with the goal of finding feasible Pareto-optimal solutions as close as possible to the set reference point(s). Based on this idea, several other publications proposed different ways to guide the Pareto front towards the reference point, such as [6], [9], [12], and others. A new algorithm proposed by Bringmann (2011) [2] introduces a formal notion of approximation and outperforms other state-of-the-art algorithms in terms of the quality of the approximated Pareto front as well as performance.
This algorithm, called Approximation-Guided Evolutionary Multi-Objective Optimization (AGE), was further extended by the integration of user preferences in [10].

4 Conclusion

The area of multi-objective optimization addresses the problem of finding optimal trade-offs between multiple objectives. Since the 1990s evolutionary algorithms have been widely used to tackle such problems. In the last two decades researchers have further developed evolutionary multi-objective optimization algorithms by introducing concepts such as Pareto dominance, ɛ-dominance, elitism, and more. Thus, the performance and accuracy of the obtained solutions could be improved dramatically. The latest research in this field deals with the inclusion of user preferences into existing multi-objective evolutionary algorithms, such as the reference point method, where one or more reference points are set by the decision maker to guide the optimization towards them. Additionally, new approaches such as the Approximation-Guided Evolutionary Algorithm were recently introduced, which shows the ongoing importance of this area.

References

[1] N. Albunni. Multiobjective optimization of the design of electrical machines using evolutionary algorithms. Master thesis, Institute of Theoretical Electrical Engineering and Microelectronics, University of Bremen.

[2] K. Bringmann, T. Friedrich, F. Neumann, and M. Wagner. Approximation-guided evolutionary multi-objective optimization. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI '11). AAAI Press.

[3] C. A. Coello Coello. Evolutionary multi-objective optimization: a historical view of the field. IEEE Computational Intelligence Magazine, 1(1):28-36.

[4] K. Deb, M. Mohan, and S. Mishra. A fast multi-objective evolutionary algorithm for finding well-spread Pareto-optimal solutions. KanGAL technical report, Indian Institute of Technology Kanpur.

[5] K. Deb and J. Sundar. Reference point based multi-objective optimization using evolutionary algorithms. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation (GECCO '06). ACM.

[6] E. Filatovas, O. Kurasova, and K. Sindhya. Synchronous R-NSGA-II: An extended preference-based evolutionary algorithm for multi-objective optimization. Informatica, Lith. Acad. Sci., 26:33-50.

[7] D. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, Mass.

[8] J. Horn, N. Nafpliotis, and D. Goldberg. A niched Pareto genetic algorithm for multiobjective optimization. In Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, volume 1, pages 82-87.

[9] A. Mohammadi, M. Omidvar, and X. Li. Reference point based multi-objective optimization through decomposition. In IEEE Congress on Evolutionary Computation, pages 1-8.

[10] A. Q. Nguyen, M. Wagner, and F. Neumann. User preferences for approximation-guided multi-objective evolution. Springer International Publishing, Cham.

[11] N. Srinivas and K. Deb. Multiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary Computation, 2(3).

[12] L. Thiele, K. Miettinen, P. Korhonen, and J. Molina. A preference-based evolutionary algorithm for multi-objective optimization. Evolutionary Computation, 17(3).

[13] E. Zitzler. Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications. PhD thesis, Swiss Federal Institute of Technology Zurich.

[14] E. Zitzler, M. Laumanns, and L. Thiele. SPEA2: Improving the strength Pareto evolutionary algorithm. Technical report, Department of Electrical Engineering, Swiss Federal Institute of Technology Zurich.

[15] E. Zitzler and L. Thiele. Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation, 3(4).


More information

An Evolutionary Algorithm for the Multi-objective Shortest Path Problem

An Evolutionary Algorithm for the Multi-objective Shortest Path Problem An Evolutionary Algorithm for the Multi-objective Shortest Path Problem Fangguo He Huan Qi Qiong Fan Institute of Systems Engineering, Huazhong University of Science & Technology, Wuhan 430074, P. R. China

More information

Multi-Objective Evolutionary Algorithms

Multi-Objective Evolutionary Algorithms Multi-Objective Evolutionary Algorithms Kalyanmoy Deb a Kanpur Genetic Algorithm Laboratory (KanGAL) Indian Institute o Technology Kanpur Kanpur, Pin 0806 INDIA deb@iitk.ac.in http://www.iitk.ac.in/kangal/deb.html

More information

Lamarckian Repair and Darwinian Repair in EMO Algorithms for Multiobjective 0/1 Knapsack Problems

Lamarckian Repair and Darwinian Repair in EMO Algorithms for Multiobjective 0/1 Knapsack Problems Repair and Repair in EMO Algorithms for Multiobjective 0/ Knapsack Problems Shiori Kaige, Kaname Narukawa, and Hisao Ishibuchi Department of Industrial Engineering, Osaka Prefecture University, - Gakuen-cho,

More information

Investigating the Effect of Parallelism in Decomposition Based Evolutionary Many-Objective Optimization Algorithms

Investigating the Effect of Parallelism in Decomposition Based Evolutionary Many-Objective Optimization Algorithms Investigating the Effect of Parallelism in Decomposition Based Evolutionary Many-Objective Optimization Algorithms Lei Chen 1,2, Kalyanmoy Deb 2, and Hai-Lin Liu 1 1 Guangdong University of Technology,

More information

Combining Convergence and Diversity in Evolutionary Multi-Objective Optimization

Combining Convergence and Diversity in Evolutionary Multi-Objective Optimization Combining Convergence and Diversity in Evolutionary Multi-Objective Optimization Marco Laumanns laumanns@tik.ee.ethz.ch Department of Information Technology and Electrical Engineering, Swiss Federal Institute

More information

Approximation-Guided Evolutionary Multi-Objective Optimization

Approximation-Guided Evolutionary Multi-Objective Optimization Approximation-Guided Evolutionary Multi-Objective Optimization Karl Bringmann 1, Tobias Friedrich 1, Frank Neumann 2, Markus Wagner 2 1 Max-Planck-Institut für Informatik, Campus E1.4, 66123 Saarbrücken,

More information

Comparison of Evolutionary Multiobjective Optimization with Reference Solution-Based Single-Objective Approach

Comparison of Evolutionary Multiobjective Optimization with Reference Solution-Based Single-Objective Approach Comparison of Evolutionary Multiobjective Optimization with Reference Solution-Based Single-Objective Approach Hisao Ishibuchi Graduate School of Engineering Osaka Prefecture University Sakai, Osaka 599-853,

More information

Parallel Multi-objective Optimization using Master-Slave Model on Heterogeneous Resources

Parallel Multi-objective Optimization using Master-Slave Model on Heterogeneous Resources Parallel Multi-objective Optimization using Master-Slave Model on Heterogeneous Resources Sanaz Mostaghim, Jürgen Branke, Andrew Lewis, Hartmut Schmeck Abstract In this paper, we study parallelization

More information

Multi-Objective Optimization using Evolutionary Algorithms

Multi-Objective Optimization using Evolutionary Algorithms Multi-Objective Optimization using Evolutionary Algorithms Kalyanmoy Deb Department of Mechanical Engineering, Indian Institute of Technology, Kanpur, India JOHN WILEY & SONS, LTD Chichester New York Weinheim

More information

GECCO 2007 Tutorial / Evolutionary Multiobjective Optimization. Eckart Zitzler ETH Zürich. weight = 750g profit = 5.

GECCO 2007 Tutorial / Evolutionary Multiobjective Optimization. Eckart Zitzler ETH Zürich. weight = 750g profit = 5. Tutorial / Evolutionary Multiobjective Optimization Tutorial on Evolutionary Multiobjective Optimization Introductory Example: The Knapsack Problem weight = 75g profit = 5 weight = 5g profit = 8 weight

More information

Multi-Objective Optimization using Evolutionary Algorithms

Multi-Objective Optimization using Evolutionary Algorithms Multi-Objective Optimization using Evolutionary Algorithms Kalyanmoy Deb Department ofmechanical Engineering, Indian Institute of Technology, Kanpur, India JOHN WILEY & SONS, LTD Chichester New York Weinheim

More information

CHAPTER 2 MULTI-OBJECTIVE REACTIVE POWER OPTIMIZATION

CHAPTER 2 MULTI-OBJECTIVE REACTIVE POWER OPTIMIZATION 19 CHAPTER 2 MULTI-OBJECTIE REACTIE POWER OPTIMIZATION 2.1 INTRODUCTION In this chapter, a fundamental knowledge of the Multi-Objective Optimization (MOO) problem and the methods to solve are presented.

More information

Parallel Multi-objective Optimization using Master-Slave Model on Heterogeneous Resources

Parallel Multi-objective Optimization using Master-Slave Model on Heterogeneous Resources Parallel Multi-objective Optimization using Master-Slave Model on Heterogeneous Resources Author Mostaghim, Sanaz, Branke, Jurgen, Lewis, Andrew, Schmeck, Hartmut Published 008 Conference Title IEEE Congress

More information

SPEA2: Improving the strength pareto evolutionary algorithm

SPEA2: Improving the strength pareto evolutionary algorithm Research Collection Working Paper SPEA2: Improving the strength pareto evolutionary algorithm Author(s): Zitzler, Eckart; Laumanns, Marco; Thiele, Lothar Publication Date: 2001 Permanent Link: https://doi.org/10.3929/ethz-a-004284029

More information

Towards Understanding Evolutionary Bilevel Multi-Objective Optimization Algorithm

Towards Understanding Evolutionary Bilevel Multi-Objective Optimization Algorithm Towards Understanding Evolutionary Bilevel Multi-Objective Optimization Algorithm Ankur Sinha and Kalyanmoy Deb Helsinki School of Economics, PO Box, FIN-, Helsinki, Finland (e-mail: ankur.sinha@hse.fi,

More information

The Multi-Objective Genetic Algorithm Based Techniques for Intrusion Detection

The Multi-Objective Genetic Algorithm Based Techniques for Intrusion Detection ISSN (Online): 2409-4285 www.ijcsse.org Page: 23-29 The Multi-Objective Genetic Algorithm Based Techniques for Intrusion Detection Gulshan Kumar Department of Computer Application, SBS State Technical

More information

International Conference on Computer Applications in Shipbuilding (ICCAS-2009) Shanghai, China Vol.2, pp

International Conference on Computer Applications in Shipbuilding (ICCAS-2009) Shanghai, China Vol.2, pp AUTOMATIC DESIGN FOR PIPE ARRANGEMENT CONSIDERING VALVE OPERATIONALITY H Kimura, Kyushu University, Japan S Iehira, Kyushu University, Japan SUMMARY We propose a novel evaluation method of valve operationality

More information

Preferences in Evolutionary Multi-Objective Optimisation with Noisy Fitness Functions: Hardware in the Loop Study

Preferences in Evolutionary Multi-Objective Optimisation with Noisy Fitness Functions: Hardware in the Loop Study Proceedings of the International Multiconference on ISSN 1896-7094 Computer Science and Information Technology, pp. 337 346 2007 PIPS Preferences in Evolutionary Multi-Objective Optimisation with Noisy

More information

Multi-objective Optimization Algorithm based on Magnetotactic Bacterium

Multi-objective Optimization Algorithm based on Magnetotactic Bacterium Vol.78 (MulGrab 24), pp.6-64 http://dx.doi.org/.4257/astl.24.78. Multi-obective Optimization Algorithm based on Magnetotactic Bacterium Zhidan Xu Institute of Basic Science, Harbin University of Commerce,

More information

Indicator-Based Selection in Multiobjective Search

Indicator-Based Selection in Multiobjective Search Indicator-Based Selection in Multiobjective Search Eckart Zitzler and Simon Künzli Swiss Federal Institute of Technology Zurich Computer Engineering and Networks Laboratory (TIK) Gloriastrasse 35, CH 8092

More information

Multicriterial Optimization Using Genetic Algorithm

Multicriterial Optimization Using Genetic Algorithm Multicriterial Optimization Using Genetic Algorithm 180 175 170 165 Fitness 160 155 150 145 140 Best Fitness Mean Fitness 135 130 0 Page 1 100 200 300 Generations 400 500 600 Contents Optimization, Local

More information

Double Archive Pareto Local Search

Double Archive Pareto Local Search Double Archive Pareto Local Search Oded Maler CNRS-VERIMAG University of Grenoble Alpes, France Email: oded.maler@imag.fr Abhinav Srivastav VERIMAG University of Grenoble Alpes, France Email: abhinav.srivastav@imag.fr

More information

Improved Crowding Distance for NSGA-II

Improved Crowding Distance for NSGA-II Improved Crowding Distance for NSGA-II Xiangxiang Chu and Xinjie Yu Department of Electrical Engineering, Tsinghua University, Beijing84, China Abstract:Non-dominated sorting genetic algorithm II (NSGA-II)

More information

PERFORMANCE SCALING OF MULTI-OBJECTIVE EVOLUTIONARY ALGORITHMS. Vineet Khare

PERFORMANCE SCALING OF MULTI-OBJECTIVE EVOLUTIONARY ALGORITHMS. Vineet Khare PERFORMANCE SCALING OF MULTI-OBJECTIVE EVOLUTIONARY ALGORITHMS Vineet Khare School of Computer Science The University of Birmingham Edgbaston, Birmingham B15 2TT, U.K. msc39vxk@cs.bham.ac.uk Project Supervisors

More information

A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II

A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II Kalyanmoy Deb, Samir Agrawal, Amrit Pratap, and T Meyarivan Kanpur Genetic Algorithms Laboratory (KanGAL)

More information

Evolutionary Multi-Objective Optimization Without Additional Parameters

Evolutionary Multi-Objective Optimization Without Additional Parameters Evolutionary Multi-Objective Optimization Without Additional Parameters Kalyanmoy Deb Department of Mechanical Engineering Indian Institute of Technology Kanpur Kanpur, PIN 8, India Email: deb@iitk.ac.in

More information

Using an outward selective pressure for improving the search quality of the MOEA/D algorithm

Using an outward selective pressure for improving the search quality of the MOEA/D algorithm Comput Optim Appl (25) 6:57 67 DOI.7/s589-5-9733-9 Using an outward selective pressure for improving the search quality of the MOEA/D algorithm Krzysztof Michalak Received: 2 January 24 / Published online:

More information

Bio-inspired Optimization and Design

Bio-inspired Optimization and Design Eckart Zitzler Computer Engineering and Networks Laboratory Introductory Example: The Knapsack Problem weight = 750g profit = 5 weight = 1500g profit = 8 weight = 300g profit = 7 weight = 1000g profit

More information

A Distance Metric for Evolutionary Many-Objective Optimization Algorithms Using User-Preferences

A Distance Metric for Evolutionary Many-Objective Optimization Algorithms Using User-Preferences A Distance Metric for Evolutionary Many-Objective Optimization Algorithms Using User-Preferences Upali K. Wickramasinghe and Xiaodong Li School of Computer Science and Information Technology, RMIT University,

More information

STUDY OF MULTI-OBJECTIVE OPTIMIZATION AND ITS IMPLEMENTATION USING NSGA-II

STUDY OF MULTI-OBJECTIVE OPTIMIZATION AND ITS IMPLEMENTATION USING NSGA-II STUDY OF MULTI-OBJECTIVE OPTIMIZATION AND ITS IMPLEMENTATION USING NSGA-II A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Bachelor of Technology in Electrical Engineering.

More information

Improved S-CDAS using Crossover Controlling the Number of Crossed Genes for Many-objective Optimization

Improved S-CDAS using Crossover Controlling the Number of Crossed Genes for Many-objective Optimization Improved S-CDAS using Crossover Controlling the Number of Crossed Genes for Many-objective Optimization Hiroyuki Sato Faculty of Informatics and Engineering, The University of Electro-Communications -5-

More information

Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems

Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems Tony Maciejewski, Kyle Tarplee, Ryan Friese, and Howard Jay Siegel Department of Electrical and Computer Engineering Colorado

More information

X/$ IEEE

X/$ IEEE IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 12, NO. 1, FEBRUARY 2008 41 RM-MEDA: A Regularity Model-Based Multiobjective Estimation of Distribution Algorithm Qingfu Zhang, Senior Member, IEEE,

More information

Incorporation of Scalarizing Fitness Functions into Evolutionary Multiobjective Optimization Algorithms

Incorporation of Scalarizing Fitness Functions into Evolutionary Multiobjective Optimization Algorithms H. Ishibuchi, T. Doi, and Y. Nojima, Incorporation of scalarizing fitness functions into evolutionary multiobjective optimization algorithms, Lecture Notes in Computer Science 4193: Parallel Problem Solving

More information

A Search Method with User s Preference Direction using Reference Lines

A Search Method with User s Preference Direction using Reference Lines A Search Method with User s Preference Direction using Reference Lines Tomohiro Yoshikawa Graduate School of Engineering, Nagoya University, Nagoya, Japan, {yoshikawa}@cse.nagoya-u.ac.jp Abstract Recently,

More information

EVOLUTIONARY algorithms (EAs) are a class of

EVOLUTIONARY algorithms (EAs) are a class of An Investigation on Evolutionary Gradient Search for Multi-objective Optimization C. K. Goh, Y. S. Ong and K. C. Tan Abstract Evolutionary gradient search is a hybrid algorithm that exploits the complementary

More information

International Journal of Computer Techniques - Volume 3 Issue 2, Mar-Apr 2016

International Journal of Computer Techniques - Volume 3 Issue 2, Mar-Apr 2016 RESEARCH ARTICLE International Journal of Computer Techniques - Volume 3 Issue 2, Mar-Apr 2016 OPEN ACCESS A Comprehensive Review on Multi-Objective Optimization Using Genetic Algorithms Amarbir Singh*

More information

Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems

Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems Saku Kukkonen and Kalyanmoy Deb Kanpur Genetic Algorithms Laboratory (KanGAL) Indian Institute

More information

Evolutionary Multiobjective Bayesian Optimization Algorithm: Experimental Study

Evolutionary Multiobjective Bayesian Optimization Algorithm: Experimental Study Evolutionary Multiobective Bayesian Optimization Algorithm: Experimental Study Josef Schwarz * schwarz@dcse.fee.vutbr.cz Jiří Očenášek * ocenasek@dcse.fee.vutbr.cz Abstract: This paper deals with the utilizing

More information

Multiobjective Optimisation. Why? Panorama. General Formulation. Decision Space and Objective Space. 1 of 7 02/03/15 09:49.

Multiobjective Optimisation. Why? Panorama. General Formulation. Decision Space and Objective Space. 1 of 7 02/03/15 09:49. ITNPD8/CSCU9YO Multiobjective Optimisation An Overview Nadarajen Veerapen (nve@cs.stir.ac.uk) University of Stirling Why? Classic optimisation: 1 objective Example: Minimise cost Reality is often more

More information

Efficient Non-domination Level Update Approach for Steady-State Evolutionary Multiobjective Optimization

Efficient Non-domination Level Update Approach for Steady-State Evolutionary Multiobjective Optimization Efficient Non-domination Level Update Approach for Steady-State Evolutionary Multiobjective Optimization Ke Li 1, Kalyanmoy Deb 1, Qingfu Zhang 2, and Sam Kwong 2 1 Department of Electrical and Computer

More information

Dynamic Uniform Scaling for Multiobjective Genetic Algorithms

Dynamic Uniform Scaling for Multiobjective Genetic Algorithms Dynamic Uniform Scaling for Multiobjective Genetic Algorithms Gerulf K. M. Pedersen 1 and David E. Goldberg 2 1 Aalborg University, Department of Control Engineering, Fredrik Bajers Vej 7, DK-922 Aalborg

More information

Part II. Computational Intelligence Algorithms

Part II. Computational Intelligence Algorithms Part II Computational Intelligence Algorithms 126 Chapter 5 Population-based Single-objective Algorithms One bee makes no swarm. French proverb This chapter provides an overview of two CI algorithms that

More information

Multiobjective Prototype Optimization with Evolved Improvement Steps

Multiobjective Prototype Optimization with Evolved Improvement Steps Multiobjective Prototype Optimization with Evolved Improvement Steps Jiri Kubalik 1, Richard Mordinyi 2, and Stefan Biffl 3 1 Department of Cybernetics Czech Technical University in Prague Technicka 2,

More information

The Genetic Algorithm for finding the maxima of single-variable functions

The Genetic Algorithm for finding the maxima of single-variable functions Research Inventy: International Journal Of Engineering And Science Vol.4, Issue 3(March 2014), PP 46-54 Issn (e): 2278-4721, Issn (p):2319-6483, www.researchinventy.com The Genetic Algorithm for finding

More information

minimizing minimizing

minimizing minimizing The Pareto Envelope-based Selection Algorithm for Multiobjective Optimization David W. Corne, Joshua D. Knowles, Martin J. Oates School of Computer Science, Cybernetics and Electronic Engineering University

More information

Optimizing Delivery Time in Multi-Objective Vehicle Routing Problems with Time Windows

Optimizing Delivery Time in Multi-Objective Vehicle Routing Problems with Time Windows Optimizing Delivery Time in Multi-Objective Vehicle Routing Problems with Time Windows Abel Garcia-Najera and John A. Bullinaria School of Computer Science, University of Birmingham Edgbaston, Birmingham

More information

IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL., NO., MONTH YEAR 1

IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL., NO., MONTH YEAR 1 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL., NO., MONTH YEAR 1 An Efficient Approach to Non-dominated Sorting for Evolutionary Multi-objective Optimization Xingyi Zhang, Ye Tian, Ran Cheng, and

More information

Comparing Algorithms, Representations and Operators for the Multi-Objective Knapsack Problem

Comparing Algorithms, Representations and Operators for the Multi-Objective Knapsack Problem Comparing s, Representations and Operators for the Multi-Objective Knapsack Problem Gualtiero Colombo School of Computer Science Cardiff University United Kingdom G.Colombo@cs.cardiff.ac.uk Christine L.

More information

Performance Assessment of DMOEA-DD with CEC 2009 MOEA Competition Test Instances

Performance Assessment of DMOEA-DD with CEC 2009 MOEA Competition Test Instances Performance Assessment of DMOEA-DD with CEC 2009 MOEA Competition Test Instances Minzhong Liu, Xiufen Zou, Yu Chen, Zhijian Wu Abstract In this paper, the DMOEA-DD, which is an improvement of DMOEA[1,

More information

An Improved Progressively Interactive Evolutionary Multi-objective Optimization Algorithm with a Fixed Budget of Decision Maker Calls

An Improved Progressively Interactive Evolutionary Multi-objective Optimization Algorithm with a Fixed Budget of Decision Maker Calls An Improved Progressively Interactive Evolutionary Multi-objective Optimization Algorithm with a Fixed Budget of Decision Maker Calls Ankur Sinha, Pekka Korhonen, Jyrki Wallenius Firstname.Secondname@aalto.fi,

More information

Computational Intelligence

Computational Intelligence Computational Intelligence Winter Term 2016/17 Prof. Dr. Günter Rudolph Lehrstuhl für Algorithm Engineering (LS 11) Fakultät für Informatik TU Dortmund Slides prepared by Dr. Nicola Beume (2012) Multiobjective

More information

NEW DECISION MAKER MODEL FOR MULTIOBJECTIVE OPTIMIZATION INTERACTIVE METHODS

NEW DECISION MAKER MODEL FOR MULTIOBJECTIVE OPTIMIZATION INTERACTIVE METHODS NEW DECISION MAKER MODEL FOR MULTIOBJECTIVE OPTIMIZATION INTERACTIVE METHODS Andrejs Zujevs 1, Janis Eiduks 2 1 Latvia University of Agriculture, Department of Computer Systems, Liela street 2, Jelgava,

More information

for Locating Apoptotic Cellular Automata

for Locating Apoptotic Cellular Automata A Multi-objective Optimization Nested Evolutionary Algorithm for Locating Apoptotic Cellular Automata by Carolyn Pugh A Thesis Presented to The University of Guelph In partial fulfilment of requirements

More information

R2-IBEA: R2 Indicator Based Evolutionary Algorithm for Multiobjective Optimization

R2-IBEA: R2 Indicator Based Evolutionary Algorithm for Multiobjective Optimization R2-IBEA: R2 Indicator Based Evolutionary Algorithm for Multiobjective Optimization Dũng H. Phan Department of Computer Science University of Massachusetts, Boston Boston, MA 02125, USA Email: phdung@cs.umb.edu

More information

A Fuzzy Logic Controller Based Dynamic Routing Algorithm with SPDE based Differential Evolution Approach

A Fuzzy Logic Controller Based Dynamic Routing Algorithm with SPDE based Differential Evolution Approach A Fuzzy Logic Controller Based Dynamic Routing Algorithm with SPDE based Differential Evolution Approach Debraj De Sonai Ray Amit Konar Amita Chatterjee Department of Electronics & Telecommunication Engineering,

More information

Decomposition of Multi-Objective Evolutionary Algorithm based on Estimation of Distribution

Decomposition of Multi-Objective Evolutionary Algorithm based on Estimation of Distribution Appl. Math. Inf. Sci. 8, No. 1, 249-254 (2014) 249 Applied Mathematics & Information Sciences An International Journal http://dx.doi.org/10.12785/amis/080130 Decomposition of Multi-Objective Evolutionary

More information

Effectiveness and efficiency of non-dominated sorting for evolutionary multi- and many-objective optimization

Effectiveness and efficiency of non-dominated sorting for evolutionary multi- and many-objective optimization Complex Intell. Syst. (217) 3:247 263 DOI 1.17/s4747-17-57-5 ORIGINAL ARTICLE Effectiveness and efficiency of non-dominated sorting for evolutionary multi- and many-objective optimization Ye Tian 1 Handing

More information

Effects of Discrete Design-variable Precision on Real-Coded Genetic Algorithm

Effects of Discrete Design-variable Precision on Real-Coded Genetic Algorithm Effects of Discrete Design-variable Precision on Real-Coded Genetic Algorithm Toshiki Kondoh, Tomoaki Tatsukawa, Akira Oyama, Takeshi Watanabe and Kozo Fujii Graduate School of Engineering, Tokyo University

More information

Bayesian Optimization Algorithms for Multi-Objective Optimization

Bayesian Optimization Algorithms for Multi-Objective Optimization Bayesian Optimization Algorithms for Multi-Objective Optimization Marco Laumanns 1 and Jiri Ocenasek 2 1 ETH Zürich, Computer Engineering and Networks Laboratory, CH 8092 Zürich laumanns@tik.ee.ethz.ch

More information

Late Parallelization and Feedback Approaches for Distributed Computation of Evolutionary Multiobjective Optimization Algorithms

Late Parallelization and Feedback Approaches for Distributed Computation of Evolutionary Multiobjective Optimization Algorithms Late arallelization and Feedback Approaches for Distributed Computation of Evolutionary Multiobjective Optimization Algorithms O. Tolga Altinoz Department of Electrical and Electronics Engineering Ankara

More information

Multiobjective Optimization

Multiobjective Optimization Multiobjective Optimization Concepts, Algorithms and Performance Measures Joshua Knowles School of Computer Science The University of Manchester COMP60342 - Week 5 2.15, 2 May 2014 Introducing Multiobjective

More information

Multi-objective Optimization for Paroxysmal Atrial Fibrillation Diagnosis

Multi-objective Optimization for Paroxysmal Atrial Fibrillation Diagnosis Multi-objective Optimization for Paroxysmal Atrial Fibrillation Diagnosis Francisco de Toro, Eduardo Ros 2, Sonia Mota 2, Julio Ortega 2 Departamento de Ingeniería Electrónica, Sistemas Informáticos y

More information

Ajay Sharma Gaurav Kapur VK Kaushik LC Mangal RC Agarwal. Defence Electronics Applications Laboratory, Dehradun DRDO, India

Ajay Sharma Gaurav Kapur VK Kaushik LC Mangal RC Agarwal. Defence Electronics Applications Laboratory, Dehradun DRDO, India Ajay Sharma Gaurav Kapur VK Kaushik LC Mangal RC Agarwal Defence Electronics Applications Laboratory, Dehradun DRDO, India Problem considered Given a SDR with a set of configurable parameters, user specified

More information

division 1 division 2 division 3 Pareto Optimum Solution f 2 (x) Min Max (x) f 1

division 1 division 2 division 3 Pareto Optimum Solution f 2 (x) Min Max (x) f 1 The New Model of Parallel Genetic Algorithm in Multi-Objective Optimization Problems Divided Range Multi-Objective Genetic Algorithm Tomoyuki HIROYASU Mitsunori MIKI Sinya WATANABE Doshisha University,

More information

MULTIOBJECTIVE INTRINSIC HARDWARE EVOLUTION. Paul Kaufmann, Marco Platzner

MULTIOBJECTIVE INTRINSIC HARDWARE EVOLUTION. Paul Kaufmann, Marco Platzner MULTIOBJECTIVE INTRINSIC HARDWARE EVOLUTION Paul Kaufmann, Marco Platzner University of Paderborn Warburger Str. 00 33098 Paderborn, Germany email: {paul.kaufmann,platzner}@upb.de ABSTRACT Evolutionary

More information

ROBUST MULTI-OBJECTIVE OPTIMIZATION OF WATER DISTRIBUTION NETWORKS

ROBUST MULTI-OBJECTIVE OPTIMIZATION OF WATER DISTRIBUTION NETWORKS ROBUST MULTI-OBJECTIVE OPTIMIZATION OF WATER DISTRIBUTION NETWORKS Taishi Ohno, Hernán Aguirre, Kiyoshi Tanaka Faculty of Engineering, Shinshu University, Wakasato, Nagano-shi, Japan 15tm209f@shinshu-u.ac.jp,

More information

Experimental Study on Bound Handling Techniques for Multi-Objective Particle Swarm Optimization

Experimental Study on Bound Handling Techniques for Multi-Objective Particle Swarm Optimization Experimental Study on Bound Handling Techniques for Multi-Objective Particle Swarm Optimization adfa, p. 1, 2011. Springer-Verlag Berlin Heidelberg 2011 Devang Agarwal and Deepak Sharma Department of Mechanical

More information

An Evolutionary Algorithm with Advanced Goal and Priority Specification for Multi-objective Optimization

An Evolutionary Algorithm with Advanced Goal and Priority Specification for Multi-objective Optimization Journal of Artificial Intelligence Research 8 (2003) 83-25 Submitted 9/0; published 2/03 An Evolutionary Algorithm with Advanced Goal and Priority Specification for Multi-objective Optimization Kay Chen

More information

CHAPTER 6 REAL-VALUED GENETIC ALGORITHMS

CHAPTER 6 REAL-VALUED GENETIC ALGORITHMS CHAPTER 6 REAL-VALUED GENETIC ALGORITHMS 6.1 Introduction Gradient-based algorithms have some weaknesses relative to engineering optimization. Specifically, it is difficult to use gradient-based algorithms

More information

Developing Multiple Topologies of Path Generating Compliant Mechanism (PGCM) using Evolutionary Optimization

Developing Multiple Topologies of Path Generating Compliant Mechanism (PGCM) using Evolutionary Optimization Developing Multiple Topologies of Path Generating Compliant Mechanism (PGCM) using Evolutionary Optimization Deepak Sharma, Kalyanmoy Deb, N. N. Kishore KanGAL Report No. 292 Kanpur Genetic Algorithms

More information

An Evolutionary Algorithm Approach to Generate Distinct Sets of Non-Dominated Solutions for Wicked Problems

An Evolutionary Algorithm Approach to Generate Distinct Sets of Non-Dominated Solutions for Wicked Problems An Evolutionary Algorithm Approach to Generate Distinct Sets of Non-Dominated Solutions for Wicked Problems Marcio H. Giacomoni Assistant Professor Civil and Environmental Engineering February 6 th 7 Zechman,

More information

Finding Knees in Multi-objective Optimization

Finding Knees in Multi-objective Optimization Finding Knees in Multi-objective Optimization Jürgen Branke 1, Kalyanmoy Deb 2, Henning Dierolf 1, and Matthias Osswald 1 1 Institute AIFB, University of Karlsruhe, Germany branke@aifb.uni-karlsruhe.de

More information

Improving interpretability in approximative fuzzy models via multi-objective evolutionary algorithms.

Improving interpretability in approximative fuzzy models via multi-objective evolutionary algorithms. Improving interpretability in approximative fuzzy models via multi-objective evolutionary algorithms. Gómez-Skarmeta, A.F. University of Murcia skarmeta@dif.um.es Jiménez, F. University of Murcia fernan@dif.um.es

More information

The Role of -dominance in Multi Objective Particle Swarm Optimization Methods

The Role of -dominance in Multi Objective Particle Swarm Optimization Methods The Role of -dominance in Multi Objective Particle Swarm Optimization Methods Sanaz Mostaghim Electrical Engineering Department, University of Paderborn, Paderborn, Germany mostaghim@dateupbde Jürgen Teich

More information

Using Different Many-Objective Techniques in Particle Swarm Optimization for Many Objective Problems: An Empirical Study

Using Different Many-Objective Techniques in Particle Swarm Optimization for Many Objective Problems: An Empirical Study International Journal of Computer Information Systems and Industrial Management Applications ISSN 2150-7988 Volume 3 (2011) pp.096-107 MIR Labs, www.mirlabs.net/ijcisim/index.html Using Different Many-Objective

More information

Knot Estimation of the B-Spline Curve with Strength Pareto Evolutionary Algorithm 2 (SPEA2)

Knot Estimation of the B-Spline Curve with Strength Pareto Evolutionary Algorithm 2 (SPEA2) Knot Estimation of the B-Spline Curve with Strength Pareto Evolutionary Algorithm 2 (SPEA2) SABAN GÜLCÜ a and ERKAN ÜLKER b Computer Engineering Department Konya University a and Selçuk University b Alaeddin

More information

Module 1 Lecture Notes 2. Optimization Problem and Model Formulation

Module 1 Lecture Notes 2. Optimization Problem and Model Formulation Optimization Methods: Introduction and Basic concepts 1 Module 1 Lecture Notes 2 Optimization Problem and Model Formulation Introduction In the previous lecture we studied the evolution of optimization

More information

Design of Curves and Surfaces Using Multi-Objective Optimization

Design of Curves and Surfaces Using Multi-Objective Optimization Design of Curves and Surfaces Using Multi-Objective Optimization Rony Goldenthal and Michel Bercovier Abstract. Design by optimization of curves and surfaces is a powerful design technique. The mathematical

More information