Towards an Estimation of Nadir Objective Vector Using Hybrid Evolutionary and Local Search Approaches


Kalyanmoy Deb, Kaisa Miettinen, and Shamik Chaudhuri
KanGAL Report Number 279

Abstract: The nadir objective vector is constructed from the worst Pareto-optimal objective values of a multi-objective optimization problem. It is an important entity to compute, both because it estimates the range of objective values over the Pareto-optimal front and because many interactive multi-objective optimization techniques require it, for example, for normalization purposes. Estimating the nadir objective vector necessitates information about the complete Pareto-optimal front and is reported to be a difficult task for existing approaches. In this paper, we propose modifications to an existing evolutionary multi-objective optimization procedure that focus its search towards the extreme objective values, and combine it with a reference-point based local search approach to form two hybrid procedures for a reliable estimation of the nadir objective vector. On optimization test problems with up to 2 objectives and on a three-objective engineering design optimization problem, the proposed procedures are found to be capable of finding a near-nadir objective vector reliably. The study clearly shows the significance of an evolutionary computing based search procedure in solving the long-standing task of nadir objective vector estimation.

Keywords: Nadir point, multi-objective optimization, non-dominated sorting GA, evolutionary multi-objective optimization (EMO), hybrid procedure, ideal point, Pareto optimality.

I. INTRODUCTION

In a multi-objective optimization procedure, the estimation of a nadir objective vector (or simply a nadir point) is often an important task. The nadir objective vector consists of the worst value of each objective function over the entire Pareto-optimal front.
Sometimes, this point is confused with the point representing the worst objective values of the entire search space, which is often an over-estimation of the true nadir objective vector. Along with the ideal objective vector (a point constructed from the best value of each objective), the nadir objective vector is used to normalize objective functions [], a matter often desired for an adequate functioning of multi-objective optimization algorithms in the presence of objective functions with different magnitudes. With these two extreme values, the objective functions can be scaled so that each scaled objective takes values in more or less the same range. These scaled values can be used for optimization with different algorithms, such as the reference point method, the weighting method, compromise programming or the Tchebycheff method (see [] and references therein). Such a scaling procedure may help in reducing the computational cost by solving the problem faster [2].

(Author affiliations: Kalyanmoy Deb holds the Finland Distinguished Professorship at the Department of Business Technology, Helsinki School of Economics, PO Box 2, FI- Helsinki, Finland (Kalyanmoy.Deb@hse.fi) and the Deva Raj Chair Professorship at the Department of Mechanical Engineering, Indian Institute of Technology Kanpur, PIN 286, India (deb@iitk.ac.in). Kaisa Miettinen is a Professor at the Department of Business Technology, Helsinki School of Economics, P.O. Box 2, FI- Helsinki, Finland (Kaisa.Miettinen@hse.fi) and the Department of Mathematical Information Technology, P.O. Box 35 (Agora), FI-44, University of Jyväskylä, Finland (kaisa.miettinen@jyu.fi). Shamik Chaudhuri works at the John F. Welch Technology Centre, Plot 22, Export Promotion Industrial Park Phase 2, Hoodi Village, Whitefield Road, Bangalore, PIN 5666, India (shamikc@gmail.com).)
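To make the normalization step concrete, here is a minimal sketch (function and variable names are ours, not from the paper) of scaling objective vectors with the ideal and nadir points:

```python
import numpy as np

def normalize_objectives(F, z_ideal, z_nadir):
    """Scale each objective to roughly [0, 1] using the ideal and nadir
    points, as is commonly done before applying scalarization-based
    methods. F holds one objective vector per row."""
    F = np.asarray(F, dtype=float)
    z_ideal = np.asarray(z_ideal, dtype=float)
    z_nadir = np.asarray(z_nadir, dtype=float)
    denom = z_nadir - z_ideal
    # Guard against a degenerate objective whose range collapses to zero.
    denom[denom == 0.0] = 1.0
    return (F - z_ideal) / denom
```

After this transformation, objectives originally measured on very different scales contribute comparably to any scalarized (e.g., Tchebycheff-type) function.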
Apart from normalizing the objective function values, the nadir objective vector is also used for finding Pareto-optimal solutions in different interactive algorithms, such as the GUESS method [3] (where the idea is to maximize the minimum weighted deviation from the nadir objective vector), or it is otherwise an integral part of an interactive method, as in the NIMBUS method [], [4]. Moreover, the knowledge of nadir and ideal objective values helps the decision-maker adjust her/his expectations to a realistic level by providing the range of each objective, and can then be used to aid in specifying preference information in interactive methods in order to focus on a desired region. Furthermore, in visualizing the Pareto-optimal front, the knowledge of the nadir objective vector is essential. Along with the ideal point, the nadir point provides the range of each objective in order to facilitate the comparison of different Pareto-optimal solutions, that is, visualizing trade-off information through value paths, bar charts, petal diagrams, etc. [], [5]. Researchers dealing with multiple criteria decision-making (MCDM) methodologies have suggested approximating the nadir point using a so-called payoff table [6]. This involves computing the individual optimum solutions, constructing a payoff table by evaluating the other objective values at these optimal solutions, and estimating the nadir point from the worst objective values in the table. This procedure may not guarantee a true estimation of the nadir point for more than two objectives. Moreover, the estimated nadir point can be either an over-estimation or an under-estimation of the true nadir point. For example, Isermann and Steuer [7] have demonstrated these difficulties of finding a nadir point using the payoff table method even for linear problems and emphasized the need for a better method. Among others, Dessouky et al. [8] suggested three heuristic methods, and Korhonen et al. [9] another heuristic method, for this purpose. Let us point out that all these methods have been developed for linear multi-objective problems, where all objectives and constraints

are linear functions of the variables. In [], an algorithm for deriving the nadir point is proposed based on subproblems: in order to find the nadir point for an M-objective problem, Pareto-optimal solutions of all lower-dimensional problems must first be found. Such a requirement may make the algorithm computationally impractical beyond three objectives, although Szczepanski and Wierzbicki [] implemented the above idea using evolutionary algorithms and showed successful applications to three- and four-objective linear optimization problems. Moreover, the authors [] did not suggest how to realize the idea in nonlinear problems. It must be emphasized that although the determination of the nadir point depends on finding the worst objective values in the set of Pareto-optimal solutions, this is a difficult task even for linear problems [2]. Since an estimation of the nadir objective vector necessitates information about the whole Pareto-optimal front, any procedure for estimating this point should involve finding Pareto-optimal solutions. This makes the task more difficult than finding the ideal point [9]. Since evolutionary multi-objective optimization (EMO) algorithms can be used to find the entire Pareto-optimal front or a part of it, EMO methodologies stand as viable candidates for this task. However, some thought reveals that estimating the nadir objective vector may not require finding the complete Pareto-optimal front; an adequate number of extreme Pareto-optimal solutions may be enough. Based on this concept, in this paper we suggest two modifications to an existing EMO methodology, the elitist non-dominated sorting GA or NSGA-II [3], to emphasize convergence near the extreme Pareto-optimal solutions instead of the entire Pareto-optimal front.
Thereafter, a local search methodology based on a reference point approach [4], borrowed from the multiple criteria decision-making literature, is employed to enhance the convergence of the extreme solutions. Simulation results of this hybrid nadir point estimation procedure on problems with up to 2 objectives and on an engineering design problem amply demonstrate that one of the two approaches, the extremized crowded NSGA-II, is capable of finding a near-nadir point more quickly and reliably than the other proposed method and than the original NSGA-II approach of first finding the complete Pareto-optimal front and then estimating the nadir point. The results are encouraging and suggest further application of the procedure to a variety of multi-objective optimization problems.

The rest of this paper is organized as follows. In Section II, we introduce basic concepts of multi-objective optimization and discuss the importance and difficulties of estimating the nadir point. In Section III, we describe two modified NSGA-II approaches for finding near-extreme Pareto-optimal solutions. The nadir point estimation procedures proposed based on a hybrid evolutionary-cum-local-search concept are described in Section IV. Academic test problems and results are described in Section V. The use of the hybrid nadir point estimation procedure is demonstrated in Section VI by solving a test problem and an engineering design problem. Finally, the paper is concluded in Section VII.

II. NADIR OBJECTIVE VECTOR AND DIFFICULTIES OF ITS ESTIMATION

We consider multi-objective optimization problems involving M conflicting objectives (f_i : S → R) as functions of the decision variables x:

    minimize {f_1(x), f_2(x), ..., f_M(x)}    (1)
    subject to x ∈ S,

where S ⊂ R^n denotes the set of feasible solutions. A vector consisting of the objective function values calculated at some point x ∈ S is called an objective vector f(x) = (f_1(x), ..., f_M(x))^T.
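Given this notation, Pareto dominance, the nadir point and the worst objective vector (defined below) can be made concrete for a finite sample of objective vectors. The following is our own illustrative sketch, assuming minimization throughout:

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(F):
    """Return the non-dominated subset of a finite set of objective vectors."""
    F = np.asarray(F, dtype=float)
    keep = [i for i, f in enumerate(F)
            if not any(dominates(g, f) for j, g in enumerate(F) if j != i)]
    return F[keep]

def nadir_and_worst(F):
    """Nadir point (column-wise maxima over the non-dominated subset)
    versus the worst objective vector (maxima over the whole set)."""
    F = np.asarray(F, dtype=float)
    return pareto_front(F).max(axis=0), F.max(axis=0)
```

For a sample containing dominated points, the two vectors differ: the dominated points inflate the worst objective vector but are excluded from the nadir computation.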
Problem (1) gives rise to a set of Pareto-optimal solutions P, providing a trade-off among the objectives. In the sense of minimization of objectives, Pareto-optimal solutions can be defined as follows []:

Definition 1: A decision vector x* ∈ S and the corresponding objective vector f(x*) are Pareto-optimal if there does not exist another decision vector x ∈ S such that f_i(x) ≤ f_i(x*) for all i = 1, 2, ..., M and f_j(x) < f_j(x*) for at least one index j.

Let us mention that if an objective f_j is to be maximized, it is equivalent to minimize −f_j. In what follows, we assume that the Pareto-optimal front is bounded. We now define the nadir objective vector as follows.

Definition 2: An objective vector z^nad = (z_1^nad, ..., z_M^nad)^T constructed using the worst values of the objective functions over the complete Pareto-optimal front P is called the nadir objective vector. Hence, for minimization problems we have z_j^nad = max_{x ∈ P} f_j(x).

Estimation of the nadir objective vector is, in general, a difficult task. Unlike the ideal objective vector z* = (z_1*, z_2*, ..., z_M*)^T, which can be found by minimizing each objective individually over the feasible set S (that is, z_j* = min_{x ∈ S} f_j(x)), the nadir point cannot be formed by maximizing the objectives individually over S. To find the nadir point, the Pareto-optimality of the solutions used for constructing it must first be established. This makes the task of finding the nadir point a difficult one. To illustrate this aspect, let us consider the bi-objective minimization problem shown in Figure 1. If we maximize f_1 and f_2 individually, we obtain points A and B, respectively. These two points can be used to construct the so-called worst objective vector. In many problems (even in bi-objective optimization problems), the nadir objective vector and the worst objective vector are not the same point, as can also be seen in Figure 1.

A. Payoff Table Method

Benayoun et al.
[6] introduced the first interactive multi-objective optimization method and used a nadir point (although the authors did not use the term "nadir"), which was to be found using a payoff table. In this method, each objective function is first minimized individually, and then a table is constructed in which the i-th row contains the values of all objective functions calculated at the point where the i-th objective attained its minimum. Thereafter, the maximum value of the j-th column can be considered as an

estimate of the upper bound of the j-th objective in the Pareto-optimal front, and these maximum values together may be used to construct an approximation of the nadir objective vector.

Fig. 1. The nadir and worst objective vectors.

Fig. 2. Payoff table may not produce the true nadir point.

The main difficulty of such an approach is that the individual minimum solutions are not necessarily unique; in problems having more than two objectives, there may exist more than one solution minimizing a given objective, each having different values of the other objectives. In these problems, the payoff table method may not result in an accurate estimation of the nadir objective vector. Let us consider the Pareto-optimal front of a hypothetical problem involving three objective functions, shown in Figure 2. The minimum value of the first objective function is zero. As can be seen from the figure, there exist a number of solutions having the value zero for function f_1 and different values of f_2 and f_3 (all solutions on the line BC). In the payoff table, when the three objectives are minimized one at a time, we may get the objective vectors f^(1) = (0, 0, 1)^T (point C), f^(2) = (1, 0, 0)^T (point A), and f^(3) = (0, 1, 0)^T (point B) corresponding to the minimizations of f_1, f_2, and f_3, respectively, and then the true nadir point z^nad = (1, 1, 1)^T can be found. However, if the vectors f^(1) = (0, 0, 0.8)^T, f^(2) = (0.5, 0, 0.5)^T and f^(3) = (0.7, 0.3, 0)^T (marked with open circles) are found corresponding to the minimizations of f_1, f_2, and f_3, respectively, a wrong estimate z = (0.7, 0.3, 0.8)^T of the nadir point will be made. The figure shows how such a wrong nadir point represents only a portion (shown dark-shaded) of the Pareto-optimal front. Here we obtained an underestimation, but the result may also be an overestimation of the true nadir point in some other problems.
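A minimal payoff-table sketch over a finite candidate set (our own illustrative code, assuming minimization) reproduces the failure mode described above: which minimizer the single-objective optimization happens to return determines whether the estimate is correct.

```python
import numpy as np

def payoff_table_nadir(F):
    """Estimate the nadir point from a payoff table over a finite
    candidate set F: pick, for each objective, one minimizing row
    (np.argmin returns the first minimizer, mimicking an arbitrary
    tie-break among alternative optima), then take column-wise
    maxima of the selected rows."""
    F = np.asarray(F, dtype=float)
    rows = [F[np.argmin(F[:, j])] for j in range(F.shape[1])]
    return np.asarray(rows).max(axis=0)
```

Fed the "unlucky" alternative optima of the example, the routine returns the wrong estimate (0.7, 0.3, 0.8); fed minimizers with the true extreme values, it returns (1, 1, 1).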
III. EVOLUTIONARY MULTI-OBJECTIVE APPROACHES FOR NADIR POINT ESTIMATION

As discussed so far, the nadir point is associated with Pareto-optimal solutions and, thus, determining a set of Pareto-optimal solutions will facilitate the estimation of the nadir point. For the past decade or so, evolutionary multi-objective optimization (EMO) algorithms have been gaining popularity because of their ability to find multiple, widespread Pareto-optimal solutions simultaneously in a single simulation run [5], [6]. Since they aim at finding a set of Pareto-optimal solutions, an EMO approach may seem an ideal way to find the nadir objective vector: an EMO can simply be used to find a well-distributed set of Pareto-optimal solutions, and an estimate of the nadir objective vector can be constructed by picking the worst value of each objective. This idea was implemented recently [] and applied to a couple of three- and four-objective optimization problems. However, this naive procedure of first finding a representative set of Pareto-optimal solutions and then determining the nadir objective vector has some difficulties. Recall that the main purpose of the nadir objective vector, along with the ideal point, is to enable the normalization of different objective functions, so that an interactive multi-objective optimization algorithm can be used to find the most preferred Pareto-optimal solution. Some may argue that if an EMO has already been used to find a representative Pareto-optimal set for the nadir point estimation, there is no apparent reason to construct the nadir point for any further analysis. They may further suggest a simple approach in which the decision-maker evaluates the suitability of each obtained Pareto-optimal solution using some higher-level information and finally chooses a particular solution.
However, representing and analyzing the set of Pareto-optimal solutions is not trivial when more than two objectives are in question. Furthermore, we can list several other difficulties related to the above-described simple approach. Recent studies have shown that EMO approaches using the domination principle possess a number of difficulties in solving problems having a large number of objectives [7], [8]:

1) Representing a high-dimensional Pareto-optimal front requires an exponentially large number of points [5], which, among other things, increases the computational cost.

2) With a large number of conflicting objectives, a large proportion of the points in a random initial population are

non-dominated with respect to each other. Since EMO algorithms emphasize all non-dominated solutions in a generation, a large portion of an EA population gets copied to the next generation, thereby allowing only a small number of new solutions to be included in a generation. This severely slows down the convergence of an EMO towards the true Pareto-optimal front.

3) EMO methodologies maintain a good diversity of non-dominated solutions by explicitly using a niche-preserving scheme, which uses a diversity metric specifying how diverse the non-dominated solutions are. In a problem with many objectives, defining a computationally fast yet good indicator of higher-dimensional distances among solutions becomes a difficult task. This aspect also makes EMO approaches computationally expensive.

4) With a large number of objectives, visualization of the large-dimensional Pareto-optimal front gets difficult.

The above-mentioned shortcomings make EMO approaches inadequate for finding the complete Pareto-optimal front in the first place [7]. Thus, for handling a large number of objectives, it may not be advantageous to first use an EMO approach to find representative points of the entire Pareto-optimal front and then estimate the nadir point. Szczepanski and Wierzbicki [] have simulated the idea, suggested elsewhere [], of employing multiple bi-objective optimizations using an EMO approach to solve a three-objective test problem and construct the nadir point by accumulating all bi-objective Pareto-optimal fronts together. Although the idea is interesting and theoretically sound, it requires C(M, 2) = M(M − 1)/2 bi-objective optimizations to be performed. This may be a daunting task, particularly for problems having more than three or four objectives. However, the above idea can be pushed further: instead of finding bi-objective Pareto-optimal fronts, an EMO approach can be made to emphasize finding only the extreme points of the Pareto-optimal front.
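The quadratic growth in the number of pairwise subproblems is easy to quantify (a small illustrative computation, not from the paper):

```python
from math import comb

def num_biobjective_subproblems(M):
    """Number of bi-objective subproblems the pairwise idea must solve
    for an M-objective problem: C(M, 2) = M * (M - 1) / 2."""
    return comb(M, 2)
```

For M = 3 this is only 3 optimizations, but for M = 10 it is already 45, and for M = 20 it is 190, which illustrates why the pairwise approach becomes daunting.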
These extreme non-dominated points are what is required to estimate the nadir point correctly. With this change in focus, the EMO approach can also handle large-dimensional problems, since the aim is only to converge to the extreme points of the Pareto-optimal front. For the three-objective minimization problem of Figure 2, the proposed EMO approach would then distribute its population members near the extreme points A, B, and C, instead of over the entire Pareto-optimal front (the triangle ABC), so that the nadir point can be estimated quickly. In the following subsections, we describe two EMO approaches for this purpose.

A. Worst Crowded NSGA-II Approach

In this study, we implement two different nadir point estimation approaches on a particular EMO algorithm (NSGA-II [3]), but they can be implemented in other state-of-the-art EMO approaches as well. Since the nadir point must be constructed from the worst objective values of Pareto-optimal solutions, it is intuitive to think of an idea in which population members having the worst objective values within a non-dominated front are emphasized. In our first approach, we employ a modified crowding distance scheme in NSGA-II that emphasizes the worst objective values in every non-dominated front. In every generation, the population members of every non-dominated front (having N_f members) are first sorted from minimum to maximum with respect to each objective (for minimization problems), and a rank equal to the position of the solution in the sorted list is assigned. In this way, a member i of a front gets a rank R_i^(m) from the sorting in the m-th objective. The solution with the minimum function value of the m-th objective gets the rank R_i^(m) = 1 and the solution with the maximum function value of the m-th objective gets the rank R_i^(m) = N_f. Such a rank assignment continues for all M objectives.
Thus, at the end of this assignment process, each solution in the front gets M ranks, one corresponding to each objective function. Thereafter, the crowding distance d_i of a solution i in the front is assigned as the maximum of all M ranks:

    d_i = max{ R_i^(1), R_i^(2), ..., R_i^(M) }.    (2)

The diversity-preserving operator of NSGA-II emphasizes solutions having a higher crowding distance value. In this way, the solution with the maximum value of any objective gets the best crowding distance. As before, the NSGA-II approach emphasizes a solution if it lies on a better non-dominated front, and among solutions of the same non-dominated front it emphasizes a solution with a higher crowding distance value. Thus, solutions of the final non-dominated front which cannot be accepted entirely by NSGA-II's selection operator are selected based on their crowding distance values, and solutions having the worst objective values get emphasized. This dual task of selecting non-dominated solutions and solutions with the worst objective values should, in principle, lead to a proper estimation of the nadir point in most problems. However, we realize that an emphasis on the worst non-dominated solutions alone may face at least two difficulties in certain problems. First, since the focus is on finding only a few solutions (instead of a complete front), the population may lose its diversity early in the search process, thereby slowing down the progress towards the true extreme points. Moreover, if for some reason the population converges prematurely to wrong solutions, the lack of diversity among population members will make it even harder for the EMO to find the extreme solutions necessary to construct the true nadir point. The second difficulty of the worst-crowded NSGA-II approach appears in certain problems and is a more serious issue.
In some problems, the solutions having the worst objective values may give rise to a correct estimate of the nadir point, but one or more spurious points piggy-backing with them (points which are non-optimal but non-dominated with respect to the obtained worst points) may lead to a wrong estimate of the nadir point. We discuss this important issue with an example problem. Consider the three-objective minimization problem shown in Figure 3, where the surface ABCD represents the Pareto-optimal front. Points A, B, C and D can be used to construct

the true nadir point z^nad.

Fig. 3. A problem which may cause difficulty to the worst-crowded approach.

Now, by using the worst-crowded NSGA-II, we expect to find the three individually worst objective values, which occur at points B (for f_1), D (for f_2) and C (for f_3). Note that there is no motivation for the worst-crowded NSGA-II to find and maintain point A (a Pareto-optimal point which has the best value of f_3 on the front) in the population, as this point does not correspond to the worst value of any objective in the set of Pareto-optimal solutions. With the three points (B, C and D) in a population, a point E (with the objective vector (0.3, 0.3, 0.3)^T), if found by the EA operators, will be non-dominated with respect to points B, C, and D, and will continue to exist in the population. Thereafter, the worst-crowded NSGA-II will emphasize points C and E as extreme points, and the reconstructed nadir point will become the point F = (0.3, 0.3, z_3^nad)^T, which is a wrong estimate. This difficulty could have been avoided if point A were included in the population. A little thought reveals that point A is a Pareto-optimal solution, but corresponds to the best value of f_3. If point A is present in the population, it will dominate point E and will not allow point E to be present in the non-dominated front. Interestingly, this situation does not occur in bi-objective optimization problems. To avoid a wrong estimation of the nadir point due to the above difficulty, ideally, an emphasis on maintaining all Pareto-optimal solutions in the population must be made. But since this is not practically viable, we suggest another approximate approach which is somewhat better than this worst-crowded approach.

B. Extremized Crowded NSGA-II Approach

In the extremized crowded approach proposed here, in addition to emphasizing the worst solution corresponding to each objective, we also emphasize the best solution corresponding to every objective.
In this approach, the solutions of a particular non-dominated front are first sorted from minimum (rank R_i^(m) = 1) to maximum (rank R_i^(m) = N_f) with respect to each objective. A solution closer to either extreme objective vector (minimum or maximum objective value) gets a higher rank than an intermediate solution. Thus, the rank of solution i for the m-th objective is reassigned as

    R_i^(m) ← max{ R_i^(m), N_f − R_i^(m) + 1 }.

The two extreme solutions for every objective get a rank equal to N_f (the number of solutions in the non-dominated front), the solutions next to these extreme solutions get a rank (N_f − 1), and so on. Figure 4 shows this rank-assignment procedure. After a rank is assigned to a solution for each objective, the maximum of the assigned ranks is declared as the crowding distance, as in (2). The final crowding distance values are shown within brackets in Figure 4.

Fig. 4. Extremized crowding distance calculation.

For a problem having a one-dimensional Pareto-optimal front (such as in a bi-objective problem), the above crowding distance assignment is similar to the worst-crowded assignment scheme (as the minimum-rank solution of one objective is the maximum-rank solution of at least one other objective). However, for problems having a higher-dimensional Pareto-optimal hyper-surface, the effect of extremized crowding is different from that of the worst-crowded approach. In the three-objective problem shown in Figure 3, the extremized crowded approach will not only emphasize the extreme points A, B, C and D, but also solutions on the edges CD and BC (having the smallest f_1 and f_2 values, respectively) and solutions near them.
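Both ranking schemes can be sketched for a single non-dominated front as follows (our own illustrative code; ties within an objective are broken arbitrarily by the sort, as any concrete implementation must do):

```python
import numpy as np

def worst_crowded_distance(F):
    """Worst-crowded distance: within a front, rank solutions 1..N_f per
    objective (1 = minimum, N_f = maximum) and take the maximum rank, so
    the worst value in any objective gets the largest distance (Eq. (2))."""
    F = np.asarray(F, dtype=float)
    # argsort of argsort yields 0-based positions in the sorted order
    ranks = F.argsort(axis=0).argsort(axis=0) + 1   # ranks 1..N_f
    return ranks.max(axis=1)

def extremized_crowded_distance(F):
    """Extremized-crowded distance: both extremes of each objective get
    rank N_f via R <- max(R, N_f - R + 1) before taking the maximum."""
    F = np.asarray(F, dtype=float)
    n = F.shape[0]
    ranks = F.argsort(axis=0).argsort(axis=0) + 1
    ranks = np.maximum(ranks, n - ranks + 1)
    return ranks.max(axis=1)
```

On a bi-objective front the two schemes coincide, as noted above; on higher-dimensional fronts the extremized variant also rewards per-objective best solutions, so its distances are never smaller than the worst-crowded ones.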
This approach has two advantages: (i) the diversity of solutions in the population may now allow the genetic operators (recombination and mutation) to find better solutions and not cause premature convergence, and (ii) the presence of these extreme solutions will reduce the chance of spurious non-Pareto-optimal solutions (like point E in Figure 3) remaining in the non-dominated front, thereby allowing a more accurate computation of the nadir point. Moreover, since the intermediate portion of the Pareto-optimal front is not targeted in this approach, finding the extreme solutions should be quicker than with the original NSGA-II, especially for problems having a large number of objectives and computationally expensive evaluation schemes.

IV. NADIR POINT ESTIMATION PROCEDURE

The NSGA-II approach (and, for that matter, any other EMO method) is usually found to come close to the Pareto-optimal

front quickly but then observed to take many iterations to reach the exact front. To enhance the performance, NSGA-II solutions are often improved by using a local search approach [5], [9]. In the context of nadir point estimation using the proposed modified NSGA-II approaches, one of the following three scenarios can occur for each obtained extreme non-dominated point:

1) It is truly an extreme Pareto-optimal solution, which can contribute to estimating the nadir point accurately;

2) It is dominated by the true extreme Pareto-optimal solution; or

3) It is non-dominated with respect to the true extreme Pareto-optimal solution.

In the second and third scenarios, an additional local search approach may be necessary to reach the true extreme Pareto-optimal solution. Since the estimation of the nadir point requires that the extreme Pareto-optimal solutions be found accurately, we suggest a hybrid, two-step procedure of first obtaining near-extreme solutions by using a modified NSGA-II procedure and then improving them by using a local search approach. The following procedure can be used with either the worst-crowded or the extremized-crowded NSGA-II approach, although our initial hunch and the simulation results presented later suggest the superiority of the latter. This hybrid method is our proposed nadir point estimation procedure.

Step 1: Supply or compute the ideal and worst objective vectors.

Step 2: Apply the worst-crowded or extremized-crowded NSGA-II approach to find a set of non-dominated extreme points. Iterations are continued till a termination criterion (described in the next subsection), which uses the ideal and worst objective vectors computed in Step 1, is met. Say, P non-dominated extreme points are found in this step.
Step 3: Apply a local search approach from each extreme point x to find the corresponding optimal solution y by solving the following augmented achievement scalarizing problem:

    minimize  max_{j=1..M} [ w_j^x (f_j(y) − z_j(x)) / (f_j^max − f_j^min) ]
              + ρ Σ_{k=1}^{M} w_k^x (f_k(y) − z_k(x)) / (f_k^max − f_k^min)    (3)
    subject to  y ∈ S,

where f_j^max and f_j^min are the maximum and minimum values of the j-th objective function in the set P. The j-th component of the reference point z(x) is identical to f_j(x), except if the component corresponds to the worst value (f_j^max) of the j-th objective; then the following value is used: z_j(x) = f_j^max + 0.5 (f_j^max − f_j^min). The local search is started from x. For the extreme point x, a pseudo-weight vector is computed as follows:

    w_j^x = max{ ε, [ (f_j(x) − f_j^min) / (f_j^max − f_j^min) ] / [ Σ_{k=1}^{M} (f_k(x) − f_k^min) / (f_k^max − f_k^min) ] }.    (4)

A small value of ε is used to avoid a weight of zero. Finally, the nadir point is constructed from the worst objective values of the extreme Pareto-optimal solutions obtained by the above procedure. Next, we describe the details of the local search approach.

A. Reference Point Based Local Search Approach

We have suggested the use of an achievement scalarizing function derived from a reference point approach [4] as the local search approach. In this way, the reference point approach can be applied to convex and non-convex problems alike. The motivation for using the above-mentioned weight vector and the augmented achievement scalarizing function is discussed here with the help of a hypothetical bi-objective optimization problem. Let us consider the bi-objective optimization problem shown in Figure 5, in which points A and B (close to the true extreme Pareto-optimal solutions P and Q, respectively) are found by one of the above modified NSGA-II approaches.

Fig. 5. The local search approach is expected to find extreme Pareto-optimal solutions exactly.

Our
goal in employing the local search approach is to reach the corresponding true extreme point (P or Q) from each of the near-extreme points (A or B). The approach suggested above first constructs a reference point by identifying the objective function for which the point has the worst value. For point A, the worst objective is f_2, and for point B, it is f_1. Next, for each point, the corresponding worst function value is degraded further and the other function value is kept the same to form a reference point. For point A, this task constructs point A′ as the reference point; similarly, for B, point B′ is the reference point. The degradation of the worst objective value (suggested in Step 3 of the nadir point estimation procedure) causes the reference point to lie in the attainable region, which, along with the proposed weighting scheme (discussed in the next paragraph), facilitates finding the true extreme point. If an extreme solution obtained by a modified NSGA-II approach corresponds to the worst objective value in more

7 than one objective, all such objective values must be degraded to construct the reference point. We can now discuss the weighting scheme suggested here. Along with the above reference point assignment scheme we suggest to choose a weight vector which will cause the achievement scalarizing problem to have its optimum solution in the corresponding extreme point. In other words, the idea of choosing an appropriate weight vector is to project the chosen reference point to the extreme Pareto-optimal solution. We have suggested equation (4) for this purpose. For point A, this approach assigns a weight vector (ǫ, ) T. The optimization of the achievement scalarizing function can be thought as a process of moving from the reference point (A ) along a direction formed by reciprocal of weights and finding the extreme contour of the achievement scalarizing function corresponding to all feasible solutions. For point A, this direction is marked using a solid arrow and the extreme contour is shown to correspond to the Pareto-optimal solution P. To avoid converging to a weak Pareto-optimal solution, we use the augmented achievement scalarizing function [] with a small value of ρ here. For the near-extreme point B, the corresponding weight vector is (, ǫ) T and as depicted in the figure, the resulting local search solution is the other extreme Pareto-optimal solution Q. Since each of these optimizations are suggested to be started from their respective modified NSGA-II solutions, the computation of the above local search approaches is expected to be fast. We should highlight the fact that any arbitrary weight vector may not result in finding the true extreme Pareto-optimal solution. For example, if for the reference point A, a weight vector shown with a dashed arrow is chosen, the corresponding optimal solution of the achievement scalarizing problem will not be P, but P. 
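The reference-point construction and equations (3) and (4) can be sketched in code as follows. This is an illustrative reimplementation, not the authors' code; the function names and the default values ε = ρ = 10⁻⁴ are our assumptions.

```python
import math

def pseudo_weights(f_x, f_min, f_max, eps=1e-4):
    """Pseudo-weight vector of equation (4) for an extreme point x.

    f_x is the objective vector f(x); f_min/f_max hold the per-objective
    minimum and maximum values over the nondominated set P.
    """
    M = len(f_x)
    norm = [(f_x[j] - f_min[j]) / (f_max[j] - f_min[j]) for j in range(M)]
    total = sum(norm)
    # eps keeps every weight strictly positive, as suggested in the text.
    return [max(eps, n / total) for n in norm]

def reference_point(f_x, f_min, f_max):
    """Step-3 reference point: degrade each worst-valued component of f(x)
    by half the objective range; other components are kept unchanged."""
    z = list(f_x)
    for j in range(len(f_x)):
        if math.isclose(f_x[j], f_max[j]):
            z[j] = f_max[j] + 0.5 * (f_max[j] - f_min[j])
    return z

def augmented_asf(f_y, z, w, f_min, f_max, rho=1e-4):
    """Augmented achievement scalarizing function of equation (3)."""
    terms = [w[j] * (f_y[j] - z[j]) / (f_max[j] - f_min[j])
             for j in range(len(f_y))]
    return max(terms) + rho * sum(terms)
```

On a hypothetical front f2 = 1 − f1, the extreme point (0, 1) yields w = (ε, 1)ᵀ and z = (0, 1.5), and minimizing the scalarizing function over the front recovers (0, 1) itself, mirroring the projection of A′ onto P discussed above.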
Our suggested construction of the reference point and the corresponding weight vector together is thus one viable way of converging to an extreme Pareto-optimal solution. Before we leave this subsection, we discuss one further issue. It was mentioned above that the use of the augmented achievement scalarizing function prevents the local search approach from converging to a weak Pareto-optimal solution. But in certain problems the approach may only be able to find an extreme properly Pareto-optimal solution [], depending on the value of the parameter $\rho$. If for this reason any of the exact extreme points is not found, the estimated nadir point may be inaccurate. In this study, we control the accuracy of our estimated nadir point by choosing an appropriately small $\rho$ value. If this is a problem, it is possible to solve a lexicographic achievement scalarizing function [] instead of the local search approach described in Step 3.

B. Termination Criterion for Modified NSGA-II

Typically, an NSGA-II simulation is terminated when a prespecified number of generations has elapsed. Here, we suggest a performance-based termination criterion which stops an NSGA-II simulation when the performance reaches a desirable level. The performance metric depends on a measure stating how close the estimated nadir point is to the true nadir point. However, for applying the proposed NSGA-II approaches to an arbitrary problem (for which the true Pareto-optimal front, and hence the true nadir point, is not known a priori), we need a different concept. Using the ideal point ($z^*$) and the worst objective vector ($z^w$), we can define a normalized distance (ND) metric as follows and track the convergence of this metric to determine the termination of an NSGA-II run:

$$ND = \sqrt{ \frac{1}{M} \sum_{i=1}^{M} \left( \frac{z_i^{est} - z_i^*}{z_i^w - z_i^*} \right)^2 }. \qquad (5)$$

If in a problem the worst objective vector $z^w$ (refer to Figure 1) is the same as the nadir point, the normalized distance metric value will be one.
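A minimal sketch of the ND computation of equation (5) (the function name is ours):

```python
import math

def normalized_distance(z_est, z_ideal, z_worst):
    """Normalized distance (ND) of equation (5): the estimated nadir point
    z_est measured from the ideal point, scaled by the worst objective
    vector; equals 1.0 exactly when z_est coincides with z_worst."""
    M = len(z_est)
    s = sum(((z_est[i] - z_ideal[i]) / (z_worst[i] - z_ideal[i])) ** 2
            for i in range(M))
    return math.sqrt(s / M)
```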
For other scenarios, the normalized distance metric value will be smaller than one. Since the exact final value of this metric at the true nadir point is not known a priori, we record the change ($\Delta$) in this metric value from one generation to the next. When the change is insignificant over a continual number of $\tau$ generations, the modified NSGA-II approach is terminated and the current non-dominated extreme solutions are sent to the next step for the local search. However, in the case of solving academic test problems, the location of the nadir objective vector is known and a simple error metric (E) between the estimated and the known nadir objective vectors can be used for stopping an NSGA-II simulation:

$$E = \sqrt{ \sum_{i=1}^{M} \left( \frac{z_i^{nad} - z_i^{est}}{z_i^{nad} - z_i^*} \right)^2 }. \qquad (6)$$

To make the approach pragmatic, in this paper we terminate an NSGA-II simulation when the error metric E becomes smaller than a predefined threshold value ($\eta$). Each non-dominated extreme solution of the final population is then sent to the local search approach for a possible improvement.

V. RESULTS OF NUMERICAL TESTS

We are now ready to describe the results of numerical tests obtained using the nadir point estimation procedure. We have chosen problems having two to 20 objectives in this study. In these test problems, the entire description of the objective space and of the Pareto-optimal front is known; we choose these problems to test the working of our proposed nadir point estimation procedure. Thus, in these problems, we do not perform Step 1 explicitly. Moreover, if Step 2 of the procedure (the modified NSGA-II simulation) successfully finds the known nadir point (using the error metric test E ≤ η to determine termination of a simulation), we do not employ Step 3 (the local search).
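The error-metric-based stopping test of equation (6) can be sketched as follows (function names are ours, and the default threshold value is an assumed illustrative choice, not the value used in the experiments):

```python
import math

def error_metric(z_est, z_nadir, z_ideal):
    """Error metric E of equation (6); usable only on test problems where
    the true nadir point z_nadir is known a priori."""
    return math.sqrt(sum(
        ((z_nadir[i] - z_est[i]) / (z_nadir[i] - z_ideal[i])) ** 2
        for i in range(len(z_est))))

def should_terminate(z_est, z_nadir, z_ideal, eta=0.01):
    """Stop an NSGA-II run once E drops below the threshold eta."""
    return error_metric(z_est, z_nadir, z_ideal) < eta
```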
In all simulations here, we compare three different approaches:
1) NSGA-II with the worst crowded approach,
2) NSGA-II with the extremized crowded approach, and
3) a naive NSGA-II approach, in which we first find a set of Pareto-optimal solutions using the original NSGA-II and then estimate the nadir point from the obtained solutions.

To investigate the robustness of these approaches, the parameters associated with them are kept fixed for all problems. We use the SBX recombination operator [2] with a probability of 0.9 and the polynomial mutation operator [5] with a probability of 1/n (n is the number of variables) and a distribution index of $\eta_m = 20$. The population size and the distribution index of the recombination operator ($\eta_c$) are set according to the problem and are mentioned in the respective sections. Each algorithm is run multiple times, each time starting from a different random initial population; however, to be fair, all compared procedures are started with an identical set of initial populations. The number of generations required to satisfy the termination criterion (E ≤ η) is noted for each simulation run, and the corresponding best, median and worst numbers of generations are presented for comparison. The same threshold value η is used to terminate a simulation run for all test problems.

A. Bi-objective Problems

As mentioned earlier, the payoff table can be reliably used to find the nadir point of a bi-objective optimization problem, and there is no real need to use an evolutionary approach. However, here we still apply the nadir point estimation procedure with the two modified NSGA-II approaches to three bi-objective optimization problems and compare the results with the naive NSGA-II approach mentioned above. Three difficult bi-objective problems (ZDT test problems) described in [2] are chosen here. The ZDT3 problem is a 30-variable problem and possesses a discontinuous Pareto-optimal front; its nadir objective vector is $(0.85, 1.0)^T$. The ZDT4 problem is the most difficult among these test problems due to the presence of 99 local nondominated fronts, which an algorithm must overcome before reaching the global Pareto-optimal front. This is a 10-variable problem with the nadir objective vector located at $(1, 1)^T$.
The test problem ZDT6 is also a 10-variable problem, with a non-convex Pareto-optimal front; the problem causes a nonuniform distribution of solutions along the Pareto-optimal front. The nadir objective vector of this problem is $(1, 0.92)^T$. In all these problems, the ideal point ($z^*$) corresponds to a function value of zero for each objective.

Table I shows the number of generations needed by the different approaches to find a near nadir point (within the threshold η). For ZDT3 we use a recombination index of $\eta_c = 2$; a different value of $\eta_c$ is used for ZDT4 and ZDT6. It is clear from the results that the performance indicators of the worst crowded and the extremized crowded NSGA-II are more or less the same, and slightly better than those of the naive NSGA-II approach on the more complex problems (ZDT4 and ZDT6). Figures 6 and 7 present how the error metric value reduces with the generation counter for ZDT4 and ZDT6, respectively. All three approaches are compared in these figures under an identical termination criterion, and their convergence patterns are observed to be almost the same. Based on these results, we can conclude that for the bi-objective test problems used in this study, the extremized crowded, the worst crowded, and the naive NSGA-II approaches perform about equally well. As mentioned before, there is no real need for an evolutionary algorithm, because the payoff table method works well in such bi-objective problems.

B. Problems with More Objectives

To test Step 2 of the nadir point estimation procedure on three and more objectives, we choose three DTLZ test problems [22]. These problems are designed so that they can be extended to any number of objectives. The first problem, DTLZ1, is constructed to have a linear Pareto-optimal front. The true nadir objective vector is $z^{nad} = (0.5, 0.5, \ldots, 0.5)^T$ and the ideal objective vector is $z^* = (0, 0, \ldots, 0)^T$.
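The stated DTLZ1 nadir vector can be checked numerically: on the linear front the objectives satisfy f1 + ... + fM = 0.5 with fi ≥ 0, so the worst value of every objective over the front is 0.5. A sketch for M = 3 (the helper name is ours):

```python
from itertools import product

def nadir_from_front(front):
    """Worst (maximum) value of each objective over a sampled front."""
    return [max(f[j] for f in front) for j in range(len(front[0]))]

# Sample the three-objective DTLZ1 front: the simplex f1 + f2 + f3 = 0.5.
step = 0.01
grid = [i * step for i in range(51)]
dtlz1_front = [(a, b, 0.5 - a - b)
               for a, b in product(grid, repeat=2)
               if a + b <= 0.5 + 1e-12]
```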
The Pareto-optimal front of the second test problem, DTLZ2, is a quadrant of a unit sphere centered at the origin of the objective space. The nadir objective vector is $z^{nad} = (1, 1, \ldots, 1)^T$ and the ideal objective vector is $z^* = (0, 0, \ldots, 0)^T$. The third test problem, DTLZ5, is somewhat modified from the original DTLZ5 and has a one-dimensional Pareto-optimal curve in the M-dimensional objective space [7]. The ideal objective vector is at $z^* = (0, 0, \ldots, 0)^T$ and the nadir objective vector is at

$$z^{nad} = \left( \left(\tfrac{1}{\sqrt{2}}\right)^{M-2},\; \left(\tfrac{1}{\sqrt{2}}\right)^{M-2},\; \left(\tfrac{1}{\sqrt{2}}\right)^{M-3},\; \left(\tfrac{1}{\sqrt{2}}\right)^{M-4},\; \ldots,\; \tfrac{1}{\sqrt{2}},\; 1 \right)^T.$$

1) Three-Objective DTLZ Problems: All three approaches are run with identical population sizes on DTLZ1, DTLZ2 and DTLZ5 involving three objectives. Table II shows the number of generations needed to find a solution close (within an error metric value of η or smaller) to the true nadir point. It can be observed that the worst crowded NSGA-II and the extremized crowded NSGA-II perform in a more or less similar way when compared to each other, and both are somewhat better than the naive NSGA-II. In the DTLZ5 problem, despite there being three objectives, the Pareto-optimal front is one-dimensional; thus, the naive NSGA-II approach performs as well as the proposed approaches. To show the difference between the working principles of the modified NSGA-II approaches and the naive NSGA-II approach, we show the final populations of the extremized crowded NSGA-II and the naive NSGA-II for DTLZ1 and DTLZ2 in Figures 8 and 9, respectively. Similar results are found for the worst crowded NSGA-II approach, but are not shown here for brevity. It is clear that the extremized crowded NSGA-II concentrates its population members near the extreme regions of the Pareto-optimal front, so that a quicker estimation of the nadir point is possible.
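The DTLZ5 nadir vector quoted above can be generated for any M ≥ 3 (an illustrative helper under the stated formula; the function name is ours):

```python
def dtlz5_nadir(M):
    """Nadir vector of the modified DTLZ5: the first two components equal
    (1/sqrt(2))**(M-2), the exponent then decreases down to 1, and the
    last component is 1, the worst value of f_M on the curve."""
    c = 1 / 2 ** 0.5
    middle = [c ** (M - 1 - j) for j in range(2, M - 1)]
    return [c ** (M - 2), c ** (M - 2)] + middle + [1.0]
```

For M = 3 this gives (1/√2, 1/√2, 1), the extreme objective values of the quarter-circle curve.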
However, in the case of the naive NSGA-II approach, a distributed set of Pareto-optimal solutions is first found using the original NSGA-II (as shown in the figures) and the nadir point is then constructed from these points. Since the intermediate points do not help in constructing the nadir objective vector, the naive NSGA-II approach is expected to be slow, particularly for problems having a large number of objectives.

2) Five-Objective DTLZ Problems: Next, we study the performance of all three NSGA-II approaches on DTLZ problems involving five objectives. In Table III, we collect the same information as in the previous subsections. It is now quite evident from Table III that the modifications proposed to the NSGA-II approach perform much better than the naive NSGA-II

TABLE I
COMPARATIVE RESULTS FOR BI-OBJECTIVE PROBLEMS.

Fig. 6. The error metric for ZDT4.

Fig. 7. The error metric for ZDT6.

TABLE II
COMPARATIVE RESULTS FOR DTLZ PROBLEMS WITH THREE OBJECTIVES.

Fig. 8. Populations obtained using extremized crowded and naive NSGA-II for DTLZ1.

Fig. 9. Populations obtained using extremized crowded and naive NSGA-II for DTLZ2.

TABLE III
COMPARATIVE RESULTS FOR FIVE-OBJECTIVE DTLZ PROBLEMS.

approach. For example, on the DTLZ1 problem the best run of the naive NSGA-II takes 2,342 generations to estimate the nadir point, whereas the extremized crowded NSGA-II requires only 353 generations; the worst crowded NSGA-II is also an order of magnitude faster than the naive approach. In the case of the DTLZ2 problem, the trend is similar. The median generation counts of the modified NSGA-II approaches over the independent runs are also much better than those of the naive NSGA-II approach. The difference between the worst crowded and the extremized crowded NSGA-II approaches is also clear from the table. For a problem having a large number of objectives, the extremized crowded NSGA-II emphasizes both the best and the worst extreme solutions of each objective while maintaining an adequate diversity among the population members. The NSGA-II operators are able to exploit such a diverse population and make faster progress towards the extreme Pareto-optimal solutions needed to estimate the nadir point correctly. However, on the DTLZ5 problem the performance of all three approaches is similar, due to the one-dimensional nature of the Pareto-optimal front. Figures 10 and 11 show the convergence of the error metric value for the best runs of the three algorithms on DTLZ1 and DTLZ2, respectively. The superiority of the extremized crowded NSGA-II approach is clear from these figures. These results imply that for a problem having more than three objectives, an emphasis on the extreme Pareto-optimal solutions (instead of all Pareto-optimal solutions) is a faster way of locating the nadir point.
So far, we have demonstrated the ability of the nadir point estimation procedure to converge close to the nadir point by tracking the error metric value, which requires knowledge of the true nadir point. Clearly, this metric cannot be used in an arbitrary problem; we have suggested the normalized distance metric for this purpose. To demonstrate how the normalized distance metric can be used as a termination criterion, we record this metric value at every generation for both the extremized crowded NSGA-II and the naive NSGA-II simulations and plot them in Figures 12 and 13 for DTLZ1 and DTLZ2, respectively. Similar trends were observed for the worst crowded NSGA-II, but for clarity those results are not superimposed in the figures. To show the variation of the metric value over different initial populations, the region between the best and the worst normalized distance metric values is shaded and the median value is shown with a line. Recall that this metric requires the worst objective vector. For the DTLZ1 problem, the worst objective vector $z^w$ is computed to have an identical value in each objective i. Figure 12 shows that the normalized distance (ND) metric value converges to 0.9. When we compute the normalized distance metric value by substituting the estimated nadir objective vector with the true nadir objective vector in equation (5), an identical value of ND is obtained. Similarly, for DTLZ2, the worst objective vector is found to be $z_i^w = 3.25$ for $i = 1, \ldots, 5$. Figure 13 shows that the normalized distance metric value converges to 0.86, which again is identical to that computed by substituting the true nadir objective vector in equation (5). Thus, we can conclude that in both problems the extremized crowded NSGA-II converges to the true nadir point. The rate of convergence of the two approaches is also interesting to note from Figures 12 and 13.
In both problems, the extremized crowded NSGA-II converges to the true nadir point quicker than the naive NSGA-II.

3) Ten-Objective DTLZ Problems: Next, we consider the three DTLZ problems for 10 objectives. Table IV presents the numbers of generations required by the three approaches to find a point close (within the threshold η) to the nadir point. It is clear that the extremized crowded NSGA-II approach performs an order of magnitude better than the naive NSGA-II approach and is also better than the worst crowded NSGA-II approach. Both the DTLZ1 and DTLZ2 problems have (M−1)-dimensional Pareto-optimal fronts, and the extremized crowded NSGA-II strikes a good balance between maintaining diversity and emphasizing extreme Pareto-optimal solutions, so that the nadir point estimation is quick. In the case of the DTLZ2 problem with ten objectives, the naive NSGA-II could not find the nadir objective vector even after many thousands of generations (achieving an error metric value of 5.936). Figure 14 shows a typical convergence pattern of the extremized crowded NSGA-II and the naive NSGA-II approaches on the 10-objective DTLZ1. The figure demonstrates that for a large number of generations the estimated nadir point stays away from the true nadir point, but after some generations the estimated nadir point quickly comes near the true nadir point. To understand the dynamics of the movement of the population in the best-performing approach (the extremized crowded NSGA-II) with the generation counter, we count the number of solutions in the population which dominate the true nadir point and plot this quantity in the same figure. In DTLZ1, it is seen that the first point dominating the true nadir point appears in the population at around 75 generations. Thereafter, once an adequate number of such solutions appear in the population, the population very quickly converges near the extreme Pareto-optimal front, correctly estimating the nadir point.
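The diagnostic described above — the number of population members whose objective vectors dominate the true nadir point — can be computed as follows (a sketch assuming all objectives are minimized; the names are ours):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def count_dominating_nadir(population_objs, z_nadir):
    """Count the population members dominating the true nadir point; a
    rising count signals that the population has reached the extremes."""
    return sum(dominates(f, z_nadir) for f in population_objs)
```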
There is another matter which also helps the extremized crowded NSGA-II to converge quickly to the desired extreme points. Since extreme solutions


More information

Single and Multi-Objective Dynamic Optimization: Two Tales from an Evolutionary Perspective

Single and Multi-Objective Dynamic Optimization: Two Tales from an Evolutionary Perspective Single and Multi-Objective Dynamic Optimization: Two Tales from an Evolutionary Perspective Kalyanmoy Deb Kanpur Genetic Algorithms Laboratory (KanGAL) Department of Mechanical Engineering Indian Institute

More information

Dynamic Resampling for Preference-based Evolutionary Multi-Objective Optimization of Stochastic Systems

Dynamic Resampling for Preference-based Evolutionary Multi-Objective Optimization of Stochastic Systems Dynamic Resampling for Preference-based Evolutionary Multi-Objective Optimization of Stochastic Systems Florian Siegmund a, Amos H. C. Ng a, and Kalyanmoy Deb b a School of Engineering, University of Skövde,

More information

Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems

Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems Saku Kukkonen and Kalyanmoy Deb Kanpur Genetic Algorithms Laboratory (KanGAL) Indian Institute

More information

A GENETIC ALGORITHM APPROACH FOR TECHNOLOGY CHARACTERIZATION. A Thesis EDGAR GALVAN

A GENETIC ALGORITHM APPROACH FOR TECHNOLOGY CHARACTERIZATION. A Thesis EDGAR GALVAN A GENETIC ALGORITHM APPROACH FOR TECHNOLOGY CHARACTERIZATION A Thesis by EDGAR GALVAN Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for

More information

Advances in Multiobjective Optimization Dealing with computational cost

Advances in Multiobjective Optimization Dealing with computational cost Advances in Multiobjective Optimization Dealing with computational cost Jussi Hakanen jussi.hakanen@jyu.fi Contents Motivation: why approximation? An example of approximation: Pareto Navigator Software

More information

IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL., NO., MONTH YEAR 1

IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL., NO., MONTH YEAR 1 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL., NO., MONTH YEAR 1 An Efficient Approach to Non-dominated Sorting for Evolutionary Multi-objective Optimization Xingyi Zhang, Ye Tian, Ran Cheng, and

More information

Unsupervised Feature Selection Using Multi-Objective Genetic Algorithms for Handwritten Word Recognition

Unsupervised Feature Selection Using Multi-Objective Genetic Algorithms for Handwritten Word Recognition Unsupervised Feature Selection Using Multi-Objective Genetic Algorithms for Handwritten Word Recognition M. Morita,2, R. Sabourin 3, F. Bortolozzi 3 and C. Y. Suen 2 École de Technologie Supérieure, Montreal,

More information

Finding Sets of Non-Dominated Solutions with High Spread and Well-Balanced Distribution using Generalized Strength Pareto Evolutionary Algorithm

Finding Sets of Non-Dominated Solutions with High Spread and Well-Balanced Distribution using Generalized Strength Pareto Evolutionary Algorithm 16th World Congress of the International Fuzzy Systems Association (IFSA) 9th Conference of the European Society for Fuzzy Logic and Technology (EUSFLAT) Finding Sets of Non-Dominated Solutions with High

More information

The Genetic Algorithm for finding the maxima of single-variable functions

The Genetic Algorithm for finding the maxima of single-variable functions Research Inventy: International Journal Of Engineering And Science Vol.4, Issue 3(March 2014), PP 46-54 Issn (e): 2278-4721, Issn (p):2319-6483, www.researchinventy.com The Genetic Algorithm for finding

More information

Late Parallelization and Feedback Approaches for Distributed Computation of Evolutionary Multiobjective Optimization Algorithms

Late Parallelization and Feedback Approaches for Distributed Computation of Evolutionary Multiobjective Optimization Algorithms Late arallelization and Feedback Approaches for Distributed Computation of Evolutionary Multiobjective Optimization Algorithms O. Tolga Altinoz Department of Electrical and Electronics Engineering Ankara

More information

CHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM

CHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM 20 CHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM 2.1 CLASSIFICATION OF CONVENTIONAL TECHNIQUES Classical optimization methods can be classified into two distinct groups:

More information

Reliability-Based Optimization Using Evolutionary Algorithms

Reliability-Based Optimization Using Evolutionary Algorithms Reliability-Based Optimization Using Evolutionary Algorithms The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As Published

More information

Multi-objective Optimization

Multi-objective Optimization Some introductory figures from : Deb Kalyanmoy, Multi-Objective Optimization using Evolutionary Algorithms, Wiley 2001 Multi-objective Optimization Implementation of Constrained GA Based on NSGA-II Optimization

More information

Handling Multi Objectives of with Multi Objective Dynamic Particle Swarm Optimization

Handling Multi Objectives of with Multi Objective Dynamic Particle Swarm Optimization Handling Multi Objectives of with Multi Objective Dynamic Particle Swarm Optimization Richa Agnihotri #1, Dr. Shikha Agrawal #1, Dr. Rajeev Pandey #1 # Department of Computer Science Engineering, UIT,

More information

A Domain-Specific Crossover and a Helper Objective for Generating Minimum Weight Compliant Mechanisms

A Domain-Specific Crossover and a Helper Objective for Generating Minimum Weight Compliant Mechanisms A Domain-Specific Crossover and a Helper Objective for Generating Minimum Weight Compliant Mechanisms Deepak Sharma Kalyanmoy Deb N. N. Kishore KanGAL Report Number K28 Indian Institute of Technology Kanpur

More information

Assessing the Convergence Properties of NSGA-II for Direct Crashworthiness Optimization

Assessing the Convergence Properties of NSGA-II for Direct Crashworthiness Optimization 10 th International LS-DYNA Users Conference Opitmization (1) Assessing the Convergence Properties of NSGA-II for Direct Crashworthiness Optimization Guangye Li 1, Tushar Goel 2, Nielen Stander 2 1 IBM

More information

Optimizing Delivery Time in Multi-Objective Vehicle Routing Problems with Time Windows

Optimizing Delivery Time in Multi-Objective Vehicle Routing Problems with Time Windows Optimizing Delivery Time in Multi-Objective Vehicle Routing Problems with Time Windows Abel Garcia-Najera and John A. Bullinaria School of Computer Science, University of Birmingham Edgbaston, Birmingham

More information

An Adaptive Normalization based Constrained Handling Methodology with Hybrid Bi-Objective and Penalty Function Approach

An Adaptive Normalization based Constrained Handling Methodology with Hybrid Bi-Objective and Penalty Function Approach An Adaptive Normalization based Constrained Handling Methodology with Hybrid Bi-Objective and Penalty Function Approach Rituparna Datta and Kalyanmoy Deb Department of Mechanical Engineering, Indian Institute

More information

Multi-objective Genetic Algorithms: Problem Difficulties and Construction of Test Problems

Multi-objective Genetic Algorithms: Problem Difficulties and Construction of Test Problems Multi-objective Genetic Algorithms: Problem Difficulties and Construction of Test Problems Kalyanmoy Deb Kanpur Genetic Algorithms Laboratory (KanGAL) Department of Mechanical Engineering Indian Institute

More information

An Experimental Multi-Objective Study of the SVM Model Selection problem

An Experimental Multi-Objective Study of the SVM Model Selection problem An Experimental Multi-Objective Study of the SVM Model Selection problem Giuseppe Narzisi Courant Institute of Mathematical Sciences New York, NY 10012, USA narzisi@nyu.edu Abstract. Support Vector machines

More information

Dynamic Uniform Scaling for Multiobjective Genetic Algorithms

Dynamic Uniform Scaling for Multiobjective Genetic Algorithms Dynamic Uniform Scaling for Multiobjective Genetic Algorithms Gerulf K. M. Pedersen 1 and David E. Goldberg 2 1 Aalborg University, Department of Control Engineering, Fredrik Bajers Vej 7, DK-922 Aalborg

More information

Standard Error Dynamic Resampling for Preference-based Evolutionary Multi-objective Optimization

Standard Error Dynamic Resampling for Preference-based Evolutionary Multi-objective Optimization Standard Error Dynamic Resampling for Preference-based Evolutionary Multi-objective Optimization Florian Siegmund a, Amos H. C. Ng a, and Kalyanmoy Deb b a School of Engineering, University of Skövde,

More information

Chapter 14 Global Search Algorithms

Chapter 14 Global Search Algorithms Chapter 14 Global Search Algorithms An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Introduction We discuss various search methods that attempts to search throughout the entire feasible set.

More information

REAL-CODED GENETIC ALGORITHMS CONSTRAINED OPTIMIZATION. Nedim TUTKUN

REAL-CODED GENETIC ALGORITHMS CONSTRAINED OPTIMIZATION. Nedim TUTKUN REAL-CODED GENETIC ALGORITHMS CONSTRAINED OPTIMIZATION Nedim TUTKUN nedimtutkun@gmail.com Outlines Unconstrained Optimization Ackley s Function GA Approach for Ackley s Function Nonlinear Programming Penalty

More information

EXTRACTION OF RELEVANT WEB PAGES USING DATA MINING

EXTRACTION OF RELEVANT WEB PAGES USING DATA MINING Chapter 3 EXTRACTION OF RELEVANT WEB PAGES USING DATA MINING 3.1 INTRODUCTION Generally web pages are retrieved with the help of search engines which deploy crawlers for downloading purpose. Given a query,

More information

Investigating the Effect of Parallelism in Decomposition Based Evolutionary Many-Objective Optimization Algorithms

Investigating the Effect of Parallelism in Decomposition Based Evolutionary Many-Objective Optimization Algorithms Investigating the Effect of Parallelism in Decomposition Based Evolutionary Many-Objective Optimization Algorithms Lei Chen 1,2, Kalyanmoy Deb 2, and Hai-Lin Liu 1 1 Guangdong University of Technology,

More information

Search direction improvement for gradient-based optimization problems

Search direction improvement for gradient-based optimization problems Computer Aided Optimum Design in Engineering IX 3 Search direction improvement for gradient-based optimization problems S Ganguly & W L Neu Aerospace and Ocean Engineering, Virginia Tech, USA Abstract

More information

1. Introduction. 2. Motivation and Problem Definition. Volume 8 Issue 2, February Susmita Mohapatra

1. Introduction. 2. Motivation and Problem Definition. Volume 8 Issue 2, February Susmita Mohapatra Pattern Recall Analysis of the Hopfield Neural Network with a Genetic Algorithm Susmita Mohapatra Department of Computer Science, Utkal University, India Abstract: This paper is focused on the implementation

More information

Module 1 Lecture Notes 2. Optimization Problem and Model Formulation

Module 1 Lecture Notes 2. Optimization Problem and Model Formulation Optimization Methods: Introduction and Basic concepts 1 Module 1 Lecture Notes 2 Optimization Problem and Model Formulation Introduction In the previous lecture we studied the evolution of optimization

More information

Proceedings of the 2012 Winter Simulation Conference C. Laroque, J. Himmelspach, R. Pasupathy, O. Rose, and A.M. Uhrmacher, eds

Proceedings of the 2012 Winter Simulation Conference C. Laroque, J. Himmelspach, R. Pasupathy, O. Rose, and A.M. Uhrmacher, eds Proceedings of the 2012 Winter Simulation Conference C. Laroque, J. Himmelspach, R. Pasupathy, O. Rose, and A.M. Uhrmacher, eds REFERENCE POINT-BASED EVOLUTIONARY MULTI-OBJECTIVE OPTIMIZATION FOR INDUSTRIAL

More information

Universiteit Leiden Opleiding Informatica

Universiteit Leiden Opleiding Informatica Universiteit Leiden Opleiding Informatica A New Approach to Target Region Based Multi-objective Evolutionary Algorithms Yali Wang MASTER S THESIS Leiden Institute of Advanced Computer Science (LIACS) Leiden

More information

Recombination of Similar Parents in EMO Algorithms

Recombination of Similar Parents in EMO Algorithms H. Ishibuchi and K. Narukawa, Recombination of parents in EMO algorithms, Lecture Notes in Computer Science 341: Evolutionary Multi-Criterion Optimization, pp. 265-279, Springer, Berlin, March 25. (Proc.

More information

A Framework for Large-scale Multi-objective Optimization based on Problem Transformation

A Framework for Large-scale Multi-objective Optimization based on Problem Transformation IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, MAY 7 A Framework for Large-scale Multi-objective Optimization based on Problem Transformation Heiner Zille, Hisao Ishibuchi, Sanaz Mostaghim and Yusuke Nojima

More information

Multiobjective Optimisation. Why? Panorama. General Formulation. Decision Space and Objective Space. 1 of 7 02/03/15 09:49.

Multiobjective Optimisation. Why? Panorama. General Formulation. Decision Space and Objective Space. 1 of 7 02/03/15 09:49. ITNPD8/CSCU9YO Multiobjective Optimisation An Overview Nadarajen Veerapen (nve@cs.stir.ac.uk) University of Stirling Why? Classic optimisation: 1 objective Example: Minimise cost Reality is often more

More information

Exploration of Pareto Frontier Using a Fuzzy Controlled Hybrid Line Search

Exploration of Pareto Frontier Using a Fuzzy Controlled Hybrid Line Search Seventh International Conference on Hybrid Intelligent Systems Exploration of Pareto Frontier Using a Fuzzy Controlled Hybrid Line Search Crina Grosan and Ajith Abraham Faculty of Information Technology,

More information

Introduction to Multiobjective Optimization

Introduction to Multiobjective Optimization Introduction to Multiobjective Optimization Jussi Hakanen jussi.hakanen@jyu.fi Contents Multiple Criteria Decision Making (MCDM) Formulation of a multiobjective problem On solving multiobjective problems

More information

Computational Intelligence

Computational Intelligence Computational Intelligence Winter Term 2016/17 Prof. Dr. Günter Rudolph Lehrstuhl für Algorithm Engineering (LS 11) Fakultät für Informatik TU Dortmund Slides prepared by Dr. Nicola Beume (2012) Multiobjective

More information

CHAPTER 6 REAL-VALUED GENETIC ALGORITHMS

CHAPTER 6 REAL-VALUED GENETIC ALGORITHMS CHAPTER 6 REAL-VALUED GENETIC ALGORITHMS 6.1 Introduction Gradient-based algorithms have some weaknesses relative to engineering optimization. Specifically, it is difficult to use gradient-based algorithms

More information

Graphical Approach to Solve the Transcendental Equations Salim Akhtar 1 Ms. Manisha Dawra 2

Graphical Approach to Solve the Transcendental Equations Salim Akhtar 1 Ms. Manisha Dawra 2 Graphical Approach to Solve the Transcendental Equations Salim Akhtar 1 Ms. Manisha Dawra 2 1 M.Tech. Scholar 2 Assistant Professor 1,2 Department of Computer Science & Engineering, 1,2 Al-Falah School

More information

CHAPTER 1 INTRODUCTION

CHAPTER 1 INTRODUCTION 1 CHAPTER 1 INTRODUCTION 1.1 OPTIMIZATION OF MACHINING PROCESS AND MACHINING ECONOMICS In a manufacturing industry, machining process is to shape the metal parts by removing unwanted material. During the

More information

Preferences in Evolutionary Multi-Objective Optimisation with Noisy Fitness Functions: Hardware in the Loop Study

Preferences in Evolutionary Multi-Objective Optimisation with Noisy Fitness Functions: Hardware in the Loop Study Proceedings of the International Multiconference on ISSN 1896-7094 Computer Science and Information Technology, pp. 337 346 2007 PIPS Preferences in Evolutionary Multi-Objective Optimisation with Noisy

More information

A Similarity-Based Mating Scheme for Evolutionary Multiobjective Optimization

A Similarity-Based Mating Scheme for Evolutionary Multiobjective Optimization A Similarity-Based Mating Scheme for Evolutionary Multiobjective Optimization Hisao Ishibuchi and Youhei Shibata Department of Industrial Engineering, Osaka Prefecture University, - Gakuen-cho, Sakai,

More information

A Ranking and Selection Strategy for Preference-based Evolutionary Multi-objective Optimization of Variable-Noise Problems

A Ranking and Selection Strategy for Preference-based Evolutionary Multi-objective Optimization of Variable-Noise Problems A Ranking and Selection Strategy for Preference-based Evolutionary Multi-objective Optimization of Variable-Noise Problems COIN Report Number 2016002 Florian Siegmund, Amos H.C. Ng School of Engineering

More information

NCGA : Neighborhood Cultivation Genetic Algorithm for Multi-Objective Optimization Problems

NCGA : Neighborhood Cultivation Genetic Algorithm for Multi-Objective Optimization Problems : Neighborhood Cultivation Genetic Algorithm for Multi-Objective Optimization Problems Shinya Watanabe Graduate School of Engineering, Doshisha University 1-3 Tatara Miyakodani,Kyo-tanabe, Kyoto, 10-031,

More information

A Surrogate-assisted Reference Vector Guided Evolutionary Algorithm for Computationally Expensive Many-objective Optimization

A Surrogate-assisted Reference Vector Guided Evolutionary Algorithm for Computationally Expensive Many-objective Optimization A Surrogate-assisted Reference Vector Guided Evolutionary Algorithm for Computationally Expensive Many-objective Optimization Tinkle Chugh, Yaochu Jin, Fellow, IEEE, Kaisa Miettinen, Jussi Hakanen, Karthik

More information

Optimization of Noisy Fitness Functions by means of Genetic Algorithms using History of Search with Test of Estimation

Optimization of Noisy Fitness Functions by means of Genetic Algorithms using History of Search with Test of Estimation Optimization of Noisy Fitness Functions by means of Genetic Algorithms using History of Search with Test of Estimation Yasuhito Sano and Hajime Kita 2 Interdisciplinary Graduate School of Science and Engineering,

More information

Multicriterial Optimization Using Genetic Algorithm

Multicriterial Optimization Using Genetic Algorithm Multicriterial Optimization Using Genetic Algorithm 180 175 170 165 Fitness 160 155 150 145 140 Best Fitness Mean Fitness 135 130 0 Page 1 100 200 300 Generations 400 500 600 Contents Optimization, Local

More information

Using Excel for Graphical Analysis of Data

Using Excel for Graphical Analysis of Data Using Excel for Graphical Analysis of Data Introduction In several upcoming labs, a primary goal will be to determine the mathematical relationship between two variable physical parameters. Graphs are

More information

Multi-Objective Evolutionary Algorithms

Multi-Objective Evolutionary Algorithms Multi-Objective Evolutionary Algorithms Kalyanmoy Deb a Kanpur Genetic Algorithm Laboratory (KanGAL) Indian Institute o Technology Kanpur Kanpur, Pin 0806 INDIA deb@iitk.ac.in http://www.iitk.ac.in/kangal/deb.html

More information

Multi-Objective Memetic Algorithm using Pattern Search Filter Methods

Multi-Objective Memetic Algorithm using Pattern Search Filter Methods Multi-Objective Memetic Algorithm using Pattern Search Filter Methods F. Mendes V. Sousa M.F.P. Costa A. Gaspar-Cunha IPC/I3N - Institute of Polymers and Composites, University of Minho Guimarães, Portugal

More information

Genetic Algorithm Performance with Different Selection Methods in Solving Multi-Objective Network Design Problem

Genetic Algorithm Performance with Different Selection Methods in Solving Multi-Objective Network Design Problem etic Algorithm Performance with Different Selection Methods in Solving Multi-Objective Network Design Problem R. O. Oladele Department of Computer Science University of Ilorin P.M.B. 1515, Ilorin, NIGERIA

More information

Development of Evolutionary Multi-Objective Optimization

Development of Evolutionary Multi-Objective Optimization A. Mießen Page 1 of 13 Development of Evolutionary Multi-Objective Optimization Andreas Mießen RWTH Aachen University AVT - Aachener Verfahrenstechnik Process Systems Engineering Turmstrasse 46 D - 52056

More information

DEMO: Differential Evolution for Multiobjective Optimization

DEMO: Differential Evolution for Multiobjective Optimization DEMO: Differential Evolution for Multiobjective Optimization Tea Robič and Bogdan Filipič Department of Intelligent Systems, Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana, Slovenia tea.robic@ijs.si

More information

Solving Large Aircraft Landing Problems on Multiple Runways by Applying a Constraint Programming Approach

Solving Large Aircraft Landing Problems on Multiple Runways by Applying a Constraint Programming Approach Solving Large Aircraft Landing Problems on Multiple Runways by Applying a Constraint Programming Approach Amir Salehipour School of Mathematical and Physical Sciences, The University of Newcastle, Australia

More information

Two-Archive Evolutionary Algorithm for Constrained Multi-Objective Optimization

Two-Archive Evolutionary Algorithm for Constrained Multi-Objective Optimization Two-Archive Evolutionary Algorithm for Constrained Multi-Objective Optimization Ke Li #, Renzhi Chen #2, Guangtao Fu 3 and Xin Yao 2 Department of Computer Science, University of Exeter 2 CERCIA, School

More information

Multi-Scenario, Multi-Objective Optimization Using Evolutionary Algorithms: Initial Results

Multi-Scenario, Multi-Objective Optimization Using Evolutionary Algorithms: Initial Results Multi-Scenario, Multi-Objective Optimization Using Evolutionary Algorithms: Initial Results Kalyanmoy Deb, Ling Zhu, and Sandeep Kulkarni Department of Computer Science Michigan State University East Lansing,

More information

Approximation-Guided Evolutionary Multi-Objective Optimization

Approximation-Guided Evolutionary Multi-Objective Optimization Approximation-Guided Evolutionary Multi-Objective Optimization Karl Bringmann 1, Tobias Friedrich 1, Frank Neumann 2, Markus Wagner 2 1 Max-Planck-Institut für Informatik, Campus E1.4, 66123 Saarbrücken,

More information

An Angle based Constrained Many-objective Evolutionary Algorithm

An Angle based Constrained Many-objective Evolutionary Algorithm APPLIED INTELLIGENCE manuscript No. (will be inserted by the editor) An Angle based Constrained Many-objective Evolutionary Algorithm Yi Xiang Jing Peng Yuren Zhou Li Zefeng Chen Miqing Received: date

More information

International Conference on Computer Applications in Shipbuilding (ICCAS-2009) Shanghai, China Vol.2, pp

International Conference on Computer Applications in Shipbuilding (ICCAS-2009) Shanghai, China Vol.2, pp AUTOMATIC DESIGN FOR PIPE ARRANGEMENT CONSIDERING VALVE OPERATIONALITY H Kimura, Kyushu University, Japan S Iehira, Kyushu University, Japan SUMMARY We propose a novel evaluation method of valve operationality

More information

Evolutionary multi-objective algorithm design issues

Evolutionary multi-objective algorithm design issues Evolutionary multi-objective algorithm design issues Karthik Sindhya, PhD Postdoctoral Researcher Industrial Optimization Group Department of Mathematical Information Technology Karthik.sindhya@jyu.fi

More information

Approximation Model Guided Selection for Evolutionary Multiobjective Optimization

Approximation Model Guided Selection for Evolutionary Multiobjective Optimization Approximation Model Guided Selection for Evolutionary Multiobjective Optimization Aimin Zhou 1, Qingfu Zhang 2, and Guixu Zhang 1 1 Each China Normal University, Shanghai, China 2 University of Essex,

More information