Metamodeling for Multimodal Selection Functions in Evolutionary Multi-Objective Optimization

Proteek Roy, Rayan Hussein, and Kalyanmoy Deb
Michigan State University, East Lansing, MI 48864, USA
COIN Report Number 7

Abstract

Most real-world optimization problems involve computationally expensive simulations for evaluating a solution. Despite significant progress in the use of metamodels for single-objective optimization, metamodeling methods have received only lukewarm attention for multi-objective optimization. A recent study classified various metamodeling approaches, of which one particular method is interesting, challenging, and novel. In this paper, we study this so-called M6 method in detail. In this approach, a selection operator's assignment function, as implemented in an evolutionary multi-objective optimization (EMO) algorithm, is directly metamodeled. Thus, this methodology requires only one selection function to be metamodeled, irrespective of the multitude of objective and constraint functions in a problem. The flip side of the methodology, however, is that the resulting function is multimodal, having a different optimum for every desired Pareto-optimal solution. We use two different selection functions based on two recent ideas: (i) the KKT proximity measure function and (ii) the multimodal-based evolutionary multi-objective (MEMO) selection function. The resulting metamodeling methods are applied to a number of standard two- and three-objective constrained and unconstrained test problems. Near Pareto-optimal solutions are found using only a fraction of the high-fidelity solution evaluations required by usual EMO applications.

1 Introduction

Many practical optimization problems are confronted with the difficulty that objective functions and constraints are computationally expensive to evaluate. Objective and constraint values often come from an expensive simulation of a system. Because of that, researchers are usually bound to a limited number of solution evaluations to obtain optimal feasible solutions. Although parallel and distributed computing reduce the total simulation time, effective searching is indispensable for obtaining better solutions within a limited number of function evaluations. Researchers have used metamodels, or surrogate models, to make a low-fidelity approximation of the objective functions and constraints. A good number of surrogate models have been proposed in the literature [4][8][]. Among them, Kriging [4] and Gaussian process models [9] are popular, as they provide an error estimate in addition to the approximate function value. Despite its popularity, Kriging does not perform well when the number of variables is large or when the target function is complicated by multimodality and nonlinearity. Knowles [5] extended the EGO method (Efficient Global Optimization [4]), proposed for single-objective problems, to evolutionary multi-objective optimization (EMO) and proposed the ParEGO algorithm with an aggregated function. The main drawback of the ParEGO algorithm is that it metamodels each objective function separately, and the overall algorithm cannot generate multiple candidate points in one iteration. S-metric-selection-based EGO (SMS-EGO) [7] optimizes the S-metric, a hypervolume-based measure, by using the covariance matrix adaptation evolution strategy (CMA-ES), extending the idea of Emmerich et al. []. Similar to ParEGO, SMS-EGO evaluates a single solution at each iteration. MOEA/D-EGO, proposed by Zhang et al. [], combines a decomposition-based multi-objective EA (MOEA/D) with EGO.
In this approach, a multi-objective optimization problem (MOP) is decomposed into multiple single-objective sub-problems. In each iteration, a Gaussian stochastic process model is built for each sub-problem using the previously evaluated high-fidelity solutions.

The expected improvement of these sub-problems is then optimized simultaneously by using the MOEA/D procedure to generate multiple candidate solutions. The main weakness of this approach is that it needs separate metamodels for each sub-problem.

Researchers have tried to classify metamodel-based EMO studies from a variety of angles. Jin [3] proposed a taxonomy based on evolution control that explains how one can alternate between high-fidelity and low-fidelity evaluation of solutions. A more recent work [7] classifies the existing methodologies into six categories based on the number of models used and the complexity of those low-fidelity models. From that classification, one very interesting methodology, namely the sixth methodology (or M6), turns out to be a promising proposition in terms of its algorithmic novelty, the challenges it offers to a metamodeling approach, and a new direction for research. This method proposes to have a single metamodel for the complete MOP by considering all objective functions and all constraint functions, and by making every desired Pareto-optimal solution a separate optimal point of the resulting metamodel. As it sounds, the approach is demanding but, if possible to achieve, is extremely appealing to EMO researchers and practitioners alike. In this paper, we examine the development and application of M6 using two different EMO concepts.

Given an MOP with M objectives and J constraints, one can model each objective and constraint function separately, thus having (M + J) metamodels in total. In the recent taxonomy [7], this methodology is called M1. One can instead combine all objectives with some scalarization method, e.g., weighted sum, ε-constraint, Tchebychev, or achievement scalarization function (ASF) [9, 6], and create (1 + J) metamodels. Moreover, this method requires as many metamodels as the number of desired Pareto-optimal points (say, H). This methodology is called M3. Not to a great surprise, one can also combine all the constraints and reduce the required number of metamodels to (M + 1) for M1 and to 2 for M3, thereby obtaining two new methodologies, M2 and M4, respectively. With H Pareto-optimal solutions, M3 and M4 require a total of H(J + 1) and 2H metamodels. Objectives and constraints can thus be grouped together, and their combined or independent modeling can be attributed to M1 to M4 (a minimal bookkeeping sketch of these metamodel counts is given at the end of this section). However, one challenging way to reduce the computational effort is to construct only one low-fidelity model of the selection function of an EMO algorithm directly [7]. As in many decomposition-based EMO methods, the selection function can be either unimodal (to find a single Pareto-optimal solution at a time) or multimodal, in which multiple Pareto-optimal solutions are found simultaneously (as in NSGA-II [3]). The former approach is called M5 [] and the latter is called M6, which is the focus of this study. Using a reference-point-based scalarization method, the authors of M5 [] have tried to find a single optimum at a time, thereby requiring a total of H metamodels. Here, we attempt to find multiple Pareto-optimal solutions using a single metamodel from start to finish.

The paper is organized as follows. Section 2 presents the related past work relevant to the development of M6. Section 3 discusses our proposed methods. Experimental settings and results are presented in Section 4. Section 5 concludes our study and proposes future work.
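As a minimal sketch of the metamodel bookkeeping above (not from the paper, only a restatement of the counts listed in this introduction), the number of surrogate models each framing M1 to M6 demands can be tabulated as follows:

import itertools

# Number of metamodels required by each methodology of the taxonomy in [7],
# for M objectives, J constraints, and H desired Pareto-optimal solutions.
def metamodel_count(methodology, M, J, H):
    counts = {
        "M1": M + J,        # every objective and constraint modeled separately
        "M2": M + 1,        # objectives separate, constraints aggregated
        "M3": H * (J + 1),  # one scalarized objective plus J constraints, per reference direction
        "M4": 2 * H,        # scalarized objective plus aggregated constraint, per reference direction
        "M5": H,            # one combined selection function per reference direction
        "M6": 1,            # a single multimodal selection function for the whole MOP
    }
    return counts[methodology]

if __name__ == "__main__":
    M, J, H = 3, 2, 91      # e.g., three objectives, two constraints, 91 reference directions
    for m in ("M1", "M2", "M3", "M4", "M5", "M6"):
        print(m, metamodel_count(m, M, J, H))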
2 Background

In this section, we first discuss the concept of metamodeling the selection function of an EMO algorithm and then propose two different ways of formulating such a selection function for multi-objective optimization problems.

2.1 Metamodeling the Selection Function

EMO algorithms mostly differ from each other in the way they construct the selection operator. Having multiple conflicting objectives and multiple constraints to be satisfied, an EMO's selection operator treats two aspects that are essential for converging near the Pareto-optimal front and for finding a diverse set of solutions: (i) emphasis on non-dominated solutions and (ii) emphasis on diverse solutions. NSGA-II [3] uses a non-dominated sorting procedure of the entire population at any generation to achieve the first aspect and a front-wise crowding distance operator to achieve the second. NSGA-III [5] uses non-dominated sorting for the first aspect, but a more complex niching operator based on a set of given reference directions (W) for the second. No matter which EMO algorithm is considered, it balances these two aspects to finally provide a ranked list of the population at any generation. Figures 1a and 1b show how NSGA-II's selection operator ranks a large set of solutions chosen uniformly from the entire two-dimensional search space of a two-variable ZDT problem, in the variable and objective spaces, respectively. All first-rank solutions are assigned a selection function value within [0,1], with the two extreme solutions having a value of zero. Intermediate first-rank solutions are assigned a selection function value based on their crowding distance. All second-rank solutions have a selection function value within [1,2], and so on (a rough sketch of such a rank-plus-crowding selection value is given below).
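The paper does not spell out the exact mapping from rank and crowding distance to a scalar, so the sketch below is one plausible interpretation, for illustration only: the integer part comes from the non-domination rank, and the fractional part penalizes low crowding distance, with the extreme (boundary) members of each front getting the best value of that front.

import numpy as np

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

def nondominated_sort(F):
    # Return the 1-based non-domination rank of each row of the objective matrix F.
    N = len(F)
    ranks = np.zeros(N, dtype=int)
    remaining = set(range(N))
    r = 1
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        for i in front:
            ranks[i] = r
        remaining -= set(front)
        r += 1
    return ranks

def crowding_distance(F):
    # Standard NSGA-II crowding distance within one front.
    N, M = F.shape
    d = np.zeros(N)
    for m in range(M):
        order = np.argsort(F[:, m])
        d[order[0]] = d[order[-1]] = np.inf
        span = F[order[-1], m] - F[order[0], m]
        if span == 0:
            span = 1.0
        for k in range(1, N - 1):
            d[order[k]] += (F[order[k + 1], m] - F[order[k - 1], m]) / span
    return d

def selection_values(F):
    # Selection(x) in [rank-1, rank): lower is better; extreme members of a front get rank-1.
    ranks = nondominated_sort(F)
    s = np.zeros(len(F))
    for r in np.unique(ranks):
        idx = np.where(ranks == r)[0]
        cd = crowding_distance(F[idx])
        finite = cd[np.isfinite(cd)]
        top = finite.max() if len(finite) else 1.0
        frac = np.where(np.isinf(cd), 0.0, 1.0 - cd / (top + 1e-12))
        s[idx] = (r - 1) + frac
    return s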

It is interesting to note that the selection function value gets worse as solutions move away from the Pareto-optimal front (for which x2 = 0). The same is reflected in the objective-space plot in Figure 1b. We now argue that, instead of metamodeling each of the two objective functions separately and accumulating the approximations from two metamodels, a single metamodel built to approximate the above selection function reduces the metamodeling effort. Theoretically, such a selection function has infinitely many optima, but if we are interested in finding H (|W| = H, a finite size) Pareto-optimal solutions dictated by a set of H pre-specified reference directions, the above selection function reduces to an H-modal selection function. Such an idea is novel for multi- and many-objective optimization, and the procedure decouples the number of metamodels that must be built from the number of objectives and constraints.

2.2 KKT Proximity Measure Based Approach

The idea of metamodeling the underlying selection function of an EMO algorithm, instead of metamodeling each and every objective and constraint function, opens the door to trying the idea on successful EMO algorithms. Although this is an interesting direction for research and development, in this paper we suggest two ideas, developed from two recently proposed performance metrics for EMO. In this subsection, we discuss the KKT proximity measure approach. The Karush-Kuhn-Tucker proximity measure (KKTPM) was recently developed [6] to determine the level of convergence of non-dominated solutions in an EMO algorithm. At any point, the KKTPM value can be computed by using (exact or numerical) gradients of the objective and constraint functions. The KKTPM construction is such that if a value within [0,1] is achieved for a point, the point is guaranteed to be a feasible solution, while a KKTPM value larger than one means it is infeasible. Moreover, any solution having a KKTPM value equal to zero is guaranteed to be a Pareto-optimal solution. Furthermore, in certain problems it has been observed that the KKTPM value of a solution is correlated with its distance from the Pareto-optimal front [6]. This latter property of the KKTPM motivates us to use it as a selection function for our M6 metamodeling study. Thus, for a solution x, we simply compute its KKTPM value and set Selection(x) equal to it.

2.3 MEMO Based Approach

Recently, a multimodal-based EMO (MEMO) approach [] was proposed that constructs from a multi- or many-objective problem a multimodal single-objective function possessing a finite number of global optima. The location of each optimum is determined by a chosen reference direction. The MEMO approach uses the achievement scalarization function (ASF), which is given for the h-th reference direction as follows:

    ASF^(h)(x) = max_{i=1,...,M} [ (f_i(x) - z_i) / w_i^(h) ].    (1)

A set of H reference directions w^(h) is used to determine the locations of H Pareto-optimal solutions. Reference directions are equally-angled directions from an ideal point z and are created using Das and Dennis's method []. Any preference, if necessary, can be incorporated while creating the reference vectors. Objective values are normalized by the population minimum and population maximum before the ASF value is calculated. In this study, we define the MEMO selection function as follows:

    MEMO(x) = min_{h=1,...,H} ASF^(h)(x).    (2)

Figures 1c and 1d show the MEMO selection function on the same ZDT problem in the variable space and the objective space, respectively, for the chosen reference directions; a small sketch of the computation in Eqs. (1) and (2) is given below.
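The following minimal sketch implements Eqs. (1) and (2) for a population of already-evaluated objective vectors, together with a standard Das and Dennis construction of the reference directions; the eps guard against zero direction components is an implementation detail assumed here, not stated in the paper.

import itertools
import numpy as np

def das_dennis(M, p):
    # All compositions of p into M non-negative parts, divided by p (Das and Dennis directions).
    dirs = []
    for c in itertools.combinations(range(p + M - 1), M - 1):
        parts = np.diff(np.array([-1, *c, p + M - 1])) - 1
        dirs.append(parts / p)
    return np.array(dirs)

def asf(f, z, w, eps=1e-12):
    # Achievement scalarization of objective vector f for direction w (Eq. 1).
    return np.max((f - z) / (w + eps))

def memo_selection(F, W, z=None):
    # MEMO(x) = min_h ASF^(h)(x) for each row of the objective matrix F (Eq. 2).
    # Objectives are normalized by population minimum/maximum, as described in the text.
    F = np.asarray(F, dtype=float)
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    Fn = (F - fmin) / np.maximum(fmax - fmin, 1e-12)
    z = np.zeros(F.shape[1]) if z is None else z   # ideal point of the normalized space
    return np.array([min(asf(f, z, w) for w in W) for f in Fn])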
It is clear that for each reference direction there is a single Pareto-optimal point, at which the MEMO selection function has a local minimum. It is interesting to note how the MEMO selection function has become multimodal and exhibits all the minima, each corresponding to a different Pareto-optimal solution. The original MEMO implementation does not combine constraints into the selection function; rather, constraints are handled separately by the constrained tournament selection method [3]. Here, we modify the above MEMO selection function slightly for constrained problems by combining all normalized constraint violations ḡ_j(x) (positive when the j-th constraint is violated) into a single function [4], as follows, and making the adjustment described next:

    CV(x) = Σ_{j=1}^{J} ⟨ḡ_j(x)⟩,    (3)

where ⟨α⟩ = α if α > 0, and zero otherwise. The constrained MEMO selection function is then defined as:

    S(x) = MEMO(x),            if x is feasible,
    S(x) = MEMO_max + CV(x),   otherwise.    (4)

A short sketch of Eqs. (3) and (4) follows.
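This sketch reuses memo_selection() from the previous sketch and assumes the constraints are normalized so that a positive value of ḡ_j(x) means a violation; the bracket operator and the MEMO_max shift follow the text above.

import numpy as np

def constraint_violation(G):
    # CV(x) = sum_j <g_j(x)>, with <a> = a if a > 0 and 0 otherwise (Eq. 3).
    G = np.asarray(G, dtype=float)
    return np.maximum(G, 0.0).sum(axis=1)

def constrained_selection(memo_values, G):
    # S(x): MEMO(x) for feasible x, MEMO_max + CV(x) otherwise (Eq. 4).
    cv = constraint_violation(G)
    feasible = cv <= 0.0
    memo_max = memo_values[feasible].max() if feasible.any() else 0.0
    return np.where(feasible, memo_values, memo_max + cv)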

Figure 1: Different selection functions in the variable and objective spaces: (a) NSGA-II selection function in variable space, (b) NSGA-II selection function in objective space, (c) MEMO selection function in variable space, (d) MEMO selection function in objective space.

3 Proposed Method

The overall M6 algorithm for solving a computationally expensive MOP is described here. First, we create an archive population P of size ρ with initial random solutions based on Latin Hypercube Sampling (LHS) [] over the entire variable space. We then evaluate the objective and constraint functions with high-fidelity solution evaluations. Thereafter, we sort P according to the distances from all reference directions w in W in the objective space. Population P is then classified into different clusters according to the shortest distance from each direction, and we pick the best solution L_w to be the leader of each direction. We then compute a suitable selection function S(.) from the objectives and constraints of each solution. With these selection function values, we build a surrogate model of the selection function. We then employ a niching-based real-parameter genetic algorithm (N-RGA) to perform an optimization run on this model. N-RGA is designed to return at most |W| points X, where each solution is the best in one niche (or cluster). In the beginning, some of the clusters may be empty, and thus |X| is expected to be smaller than |W|. When evaluating the entire predicted set X would exceed the allowed budget SE_max, we pick only the best (SE_max - |P|) solutions, so that the total number of high-fidelity evaluations does not exceed SE_max. We then evaluate the objectives and constraints of those best-niched solutions X with high-fidelity solution evaluations and add them to the archive. The whole algorithm is presented in Algorithm 1.

3.1 Initialization

Due to the multimodal structure of the M6 surrogate model, an adequate number of initial points is required to obtain a reasonable starting metamodel of the problem. In order to conduct an effective search, we require a good representation of solutions over different parts of the objective space. Although simple Latin Hypercube Sampling is enough to create good representative solutions, some variable-density problems, e.g., ZDT6 and DTLZ4, require a more sophisticated initialization procedure to achieve a relatively uniform distribution in the objective space. Here we propose an initialization methodology that is based solely on diversity. First, we create η solutions, a fraction of the initial population of size ρ, using LHS and evaluate them with high-fidelity evaluations. We then calculate the average distance of each solution from its τ nearest neighbors in the objective space. This average distance is then used to locate new sample points in the search space in a non-uniform manner. New points are added in batches of η at a time to fill up the whole initial population; the crossover probability is fixed and the mutation probability is set to 1/n (where n is the number of variables). We create η solutions each time and fill up P with ρ solutions in total (a rough sketch of this incremental procedure is given below). Figure 2 shows the effect of the incremental initialization procedure on the ZDT6 problem, which has a bias of solutions toward larger objective values.

Figure 2: Distribution of samples in the objective space of ZDT6 using (a) LHS and (b) incremental initialization.
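The following is a rough interpretation of the diversity-driven incremental initialization, not the authors' code: start from a small LHS batch, then repeatedly spawn new points near the sparsest already-evaluated solutions (largest average distance to their τ nearest neighbours in objective space) until ρ points have been evaluated. The Gaussian perturbation used to generate the new batch is an assumption; the paper uses crossover and mutation for this step.

import numpy as np
from scipy.stats import qmc
from scipy.spatial.distance import cdist

def incremental_init(evaluate, n, bounds, rho, eta, tau=5, rng=None):
    # evaluate: x -> objective vector (high-fidelity); bounds: (lower, upper) arrays of length n.
    rng = np.random.default_rng(rng)
    lo, hi = map(np.asarray, bounds)
    X = lo + qmc.LatinHypercube(d=n, seed=rng).random(eta) * (hi - lo)
    F = np.array([evaluate(x) for x in X])
    while len(X) < rho:
        # average distance of each point to its tau nearest neighbours in objective space
        D = cdist(F, F)
        np.fill_diagonal(D, np.inf)
        sparsity = np.sort(D, axis=1)[:, :tau].mean(axis=1)
        # create the next batch around the sparsest points (hypothetical perturbation step)
        parents = X[np.argsort(-sparsity)[:eta]]
        batch = np.clip(parents + rng.normal(scale=0.1 * (hi - lo), size=parents.shape), lo, hi)
        X = np.vstack([X, batch])
        F = np.vstack([F, [evaluate(x) for x in batch]])
    return X[:rho], F[:rho]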
3.2 Assigning Leader Solutions

Leader solutions L_w for each cluster w are selected based on the high-fidelity fitness function. First, we sort the population according to the orthogonal distances from each of the reference directions, and each solution is assigned to its nearest reference direction. Thus, there are |W| possible clusters, although some clusters may not contain any member. We assign the best solution of each cluster to be the leader of that cluster. This leader is then used for niching in N-RGA, which assigns solutions to niches by the nearest leader in the variable space; a small sketch of this clustering and leader selection is given below.
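A minimal illustration of the leader assignment, under the assumption that the distances are computed in (normalized) objective space: each solution is clustered by its perpendicular distance to the nearest reference direction, and the cluster member with the best (lowest) selection value becomes the leader.

import numpy as np

def perpendicular_distance(f, w):
    # Distance of objective vector f from the ray along direction w.
    w = w / np.linalg.norm(w)
    return np.linalg.norm(f - np.dot(f, w) * w)

def assign_leaders(F, S, W):
    # F: objective matrix, S: selection values (lower is better), W: reference directions.
    # Returns the cluster index of each solution and the leader index of each non-empty cluster.
    F = np.asarray(F, dtype=float)
    clusters = np.array([
        np.argmin([perpendicular_distance(f, w) for w in W]) for f in F
    ])
    leaders = {}
    for c in np.unique(clusters):
        members = np.where(clusters == c)[0]
        leaders[c] = members[np.argmin(S[members])]
    return clusters, leaders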

Algorithm 1: Multimodal-Selection Based Algorithm
Input: objectives [f_1, ..., f_M], constraints [g_1, ..., g_J], n (variables), ρ (sample size), SE_max (total high-fidelity evaluations), N-RGA (multimodal real-parameter genetic algorithm), Γ (parameters of N-RGA), W (reference direction set), S (multimodal constrained selection function)
Output: P_T
1  P ← LHS(ρ, n)                          // initialization with Latin Hypercube Sampling
2  F ← f_m(P), m in {1, ..., M}           // high-fidelity evaluations (objectives)
3  C ← g_j(P), j in {1, ..., J}           // high-fidelity evaluations (constraints)
4  eval ← ρ                               // number of function evaluations
5  while eval < SE_max do
6      for w in W do                      // for each reference direction w
7          L_w ← sort P according to distance from w and pick the best solution
8      end
9      L ← {L_1, ..., L_|W|}              // vector of |W| leaders
10     Fitness ← S(F, C)                  // compute selection function
11     Ŝ ← CreateSurrogateModel(P, Fitness)   // surrogate model of the selection function
12     X ← N-RGA(Ŝ, L, Γ)                 // multiple optimized solutions, one per reference line; niching in x-space with L
13     if |X| + |P| > SE_max then
14         X ← X(1 : (SE_max - |P|))      // choose the best (SE_max - |P|) metamodeled solutions
15     end
16     F_X ← f_m(X), m in {1, ..., M}     // evaluate objectives of X
17     C_X ← g_j(X), j in {1, ..., J}     // evaluate constraints of X
18     P ← P ∪ X
19     F ← F ∪ F_X; C ← C ∪ C_X
20     eval ← eval + |X|
21 end
22 return P_T ← P(1 : |W|)

3.3 Creating the Surrogate Model

As described before, we use two multimodal selection functions S(.) that constitute multiple optima, each for a specific Pareto-optimal solution. Each reference vector w in W targets one global optimum in general. To make the model more accurate, only solutions which have been evaluated exactly are used for model construction. In this study, we use either Kriging or an artificial neural network (ANN) to perform this step; the specific parameters of the surrogate modeling methods are discussed in the next section. The choice of models is two-fold. First, we want to investigate the performance of a structured modeling procedure (Kriging). Knowing that a Kriging method may find it difficult to model a multimodal landscape, we also include an ANN, motivated particularly by the recent ability of deep learning [] methods to model arbitrarily complex relationships. The comparison between Kriging and ANN in the context of the M6 metamodeling approach is an important part of this study.

3.4 Niched Real-Parameter Genetic Algorithm (N-RGA)

To solve a multimodal problem for finding multiple optimal solutions, we need an optimization algorithm capable of finding multiple solutions. We devise a niching-based genetic algorithm which preserves solutions from each niche with the help of the leader solutions described above. Objective functions and constraints are provided by Kriging or ANN, whichever is being used. A mating selection is used to restrict parents to be chosen from the same cluster. Other parameters are similar to those of a standard real-parameter genetic algorithm [8]. At the end of an N-RGA run, we pick at most one solution from each cluster for further high-fidelity evaluation. After we finish this cycle, we re-evaluate the leader of each cluster for the next iteration. A compact sketch that wires these steps together is given below.
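The sketch below is a hypothetical, compressed rendering of the Algorithm 1 loop, reusing the functions from the earlier sketches (incremental_init, memo_selection, constrained_selection, assign_leaders). The optimize_niched callable stands in for N-RGA and is assumed, not provided by the paper, and the MLPRegressor layer sizes are placeholders.

import numpy as np
from sklearn.neural_network import MLPRegressor  # or a Kriging/GP regressor, see Section 4

def m6_loop(evaluate_f, evaluate_g, n, bounds, W, rho, se_max, optimize_niched):
    X, F = incremental_init(evaluate_f, n, bounds, rho, eta=max(rho // 5, 1))
    G = np.array([evaluate_g(x) for x in X])
    while len(X) < se_max:
        memo = memo_selection(F, W)
        fitness = constrained_selection(memo, G)       # Eq. (4) values, lower is better
        clusters, leaders = assign_leaders(F, fitness, W)
        surrogate = MLPRegressor(hidden_layer_sizes=(10, 7), activation="relu",
                                 max_iter=5000).fit(X, fitness)
        # N-RGA (assumed): returns at most one predicted optimum per reference direction,
        # niched in x-space around the leader solutions
        candidates = optimize_niched(surrogate.predict, X[list(leaders.values())], bounds)
        candidates = candidates[: se_max - len(X)]      # respect the evaluation budget
        F_new = np.array([evaluate_f(x) for x in candidates])
        G_new = np.array([evaluate_g(x) for x in candidates])
        X = np.vstack([X, candidates])
        F = np.vstack([F, F_new])
        G = np.vstack([G, G_new])
    return X, F, G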
4 Experimental Results

In this section, we compare the two proposed selection function approaches with two different surrogate models, Kriging and ANN. We denote these four combinations as MEMO-KR, MEMO-NN, KKT-KR, and KKT-NN, where KR stands for Kriging and NN stands for ANN. The following parameter settings are used for N-RGA: binary tournament selection, simulated binary crossover (SBX), and polynomial mutation, with a population size proportional to the number of variables n, a fixed number of generations, crossover probability 0.95, mutation probability 1/n, and standard distribution indices for the SBX and polynomial mutation operators. For the ANN, we use two hidden layers, with sizes found by limited trial-and-error runs; we train the model until the validation error falls below a preset tolerance or a maximum number of epochs has elapsed, use ReLU activations, and initialize the weights randomly each time the model is trained. For each methodology, we perform multiple independent runs on all problems and show the obtained solutions of the median-IGD run in each case to allow a graphical comparison. An illustrative configuration of a Kriging-style surrogate, the alternative to the ANN model, is sketched below.
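For the Kriging (KR) variants, a standard Gaussian-process regressor can play the role of the surrogate. This sketch assumes scikit-learn and an anisotropic RBF kernel, which are illustrative choices rather than the paper's exact Kriging setup.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

def fit_kriging_surrogate(X, fitness):
    # X: evaluated decision vectors; fitness: constrained selection values S(x).
    n = X.shape[1]
    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(n))
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True, n_restarts_optimizer=5)
    return gp.fit(X, fitness)

# Usage: surrogate = fit_kriging_surrogate(X, fitness); mean = surrogate.predict(X_new)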

4.1 Two-Objective Constrained and Unconstrained Problems

We apply our methods to the two-objective unconstrained problems ZDT1, ZDT2, ZDT3, and ZDT6 with ten (n = 10) variables and only a small budget SE_max of high-fidelity solution evaluations. For each problem, we use an initial sample of size ρ. The obtained non-dominated solutions for ZDT1, ZDT3, and ZDT6 are shown in Figures 3, 4, and 5: the obtained points are very close to the respective true Pareto-optimal fronts and have a good distribution over the entire front. Table 1 shows the average and standard deviation of the IGD metric values for all methodologies (a small sketch of the IGD computation is given below). Methodologies whose performance is statistically indistinguishable from that of the best-performing method on each problem are marked in bold, along with the respective p-value of the Wilcoxon signed-rank test. On ZDT3, which has a disconnected Pareto-optimal front, the MEMO-based combinations perform the best; the multimodal KKTPM-based NN and KR approaches are not able to get quite close to the true Pareto-optimal front within the allotted evaluations, owing to the added complexity the multimodal KKTPM surface presents to the metamodeling approach when the front is disconnected. ZDT6 is a variable-density problem, and all methods find it hard to reach the optimal front within the limited number of function evaluations. Overall, the MEMO-based combinations perform the best on the unconstrained two-objective test problems.

Table 1: Computed IGD values for the test and real-world problems. For each problem, the table reports the average and standard deviation of IGD for MEMO-KR, MEMO-NN, KKT-KR, and KKT-NN, together with the Wilcoxon signed-rank p-values against the best-performing method.

Figure 3: Non-dominated solutions for problem ZDT1 obtained by the four method combinations.
Figure 4: Non-dominated solutions for problem ZDT3 obtained by the four method combinations.
Figure 5: Non-dominated solutions for problem ZDT6 obtained by the four method combinations.

Next, we apply our methods to the two-objective constrained problems BNH, SRN, TNK, and OSY [8], each with an initial sample of size ρ and a budget of SE_max high-fidelity evaluations. The obtained non-dominated solutions for BNH, SRN, and TNK are shown in Figures 6, 7, and 8, respectively. All four methods are able to find a well-converged and well-distributed set of trade-off points near the true Pareto-optimal front for the BNH and SRN problems, with only one combination lagging on BNH. Although TNK poses difficulties to all four methods due to the discontinuities in its Pareto-optimal front, all of them come close to the true front. On OSY, none of the methods performs well because of the increased number of constraints.

Figure 6: Non-dominated solutions for problem BNH obtained by the four method combinations.
Figure 7: Non-dominated solutions for problem SRN obtained by the four method combinations.
Figure 8: Non-dominated solutions for problem TNK obtained by the four method combinations.
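For reference, the inverted generational distance (IGD) reported in Table 1 is the mean distance from each point of a reference (true) Pareto front to its nearest obtained non-dominated solution; lower is better. A minimal sketch:

import numpy as np
from scipy.spatial.distance import cdist

def igd(reference_front, obtained_front):
    # Both arguments are arrays of objective vectors (one vector per row).
    D = cdist(np.asarray(reference_front, float), np.asarray(obtained_front, float))
    return D.min(axis=1).mean()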
Next, a two-objective welded-beam design problem [8] is solved using our methods. Table 1 shows that one of the combinations performs the best, followed by two others, while the remaining one finds it difficult to locate the minimum-cost solutions. This real-world problem requires more high-fidelity solution evaluations to find near Pareto-optimal solutions.

4.2 Three-Objective Constrained and Unconstrained Problems

We next apply our methods to the three-objective unconstrained problems DTLZ2, DTLZ4, and DTLZ5 and to the constrained problem C2-DTLZ2. Each of these problems has seven variables. A larger budget SE_max is used for DTLZ4 than for DTLZ2 and DTLZ5 because of the multimodality in its landscape, and an intermediate budget is used for C2-DTLZ2. Figures 9, 10, and 11 show the Pareto-optimal surfaces of the DTLZ2, DTLZ4, and C2-DTLZ2 problems together with the obtained non-dominated solutions.

Figure 9: Non-dominated solutions for problem DTLZ2 obtained by the four method combinations.
Figure 10: Non-dominated solutions for problem DTLZ4 obtained by the four method combinations.
Figure 11: Non-dominated solutions for problem C2-DTLZ2 obtained by the four method combinations.

On DTLZ2, three of the combinations, including MEMO-KR, perform well, while the remaining one finds it difficult to solve this problem.

For DTLZ4, one combination clearly performs the best, while the other methods do not perform as well. DTLZ4 is a variable-density, multimodal problem that is very hard to solve with a low evaluation budget, yet the best-performing method is able to create a well-distributed set of solutions with only a small number of solution evaluations. On DTLZ5, one combination performs the best, followed by two others; the Pareto-optimal solutions of DTLZ5 lie on a curve, and every method performs well on this problem. On the C2-DTLZ2 problem, two of the combinations perform well and find a close and well-distributed set of trade-off points near the true Pareto-optimal front, while the other two do not converge close enough. Finally, we apply our methods to a car side-impact problem having three objectives and ten constraints. KKT-KR and one other combination perform the best on this problem, while the remaining methods are also able to find a widely distributed and well-converged set of solutions.

From the above results, it is clear that the combination of the MEMO selection function with ANN as the metamodeling technique works well on most of the test and real-world multi-objective problems. The KKTPM-based method performs well if the constraints are linear. The neural network tends to perform relatively better as the number of objectives increases from two to three. The MEMO approach provides a better distribution of solutions in the objective space than the KKTPM-based approach.

5 Conclusions

In this paper, we have proposed and evaluated a selection-function-based metamodeling methodology for multi-objective optimization. The recently proposed KKTPM- and MEMO-based selection functions have been compared against each other for this purpose. In addition, the efficacy of two metamodeling techniques (ANN and Kriging) on the above selection function approaches has been investigated. On a number of two- and three-objective, constrained and unconstrained optimization problems, including two engineering design problems, our extensive results have clearly shown that (i) the MEMO-based selection function approach is better and (ii) the ANN metamodeling technique is, in general, better able to approximate the multimodal selection function landscape produced by the overall approach. It is important to highlight that, in solving the above problems, only a fraction of the solution evaluations (a few hundred to a few thousand) was allowed, compared to the hundreds of thousands of solution evaluations usually set for non-metamodeling-based EMO studies.

Based on these encouraging results, we plan to pursue a number of further studies: (i) the use of different architectures and deeper ANNs for more accurate models, (ii) the use of other selection functions modeled after successful EMO methods, and (iii) extension of the approach to many-objective optimization problems.

References

[1] B. Beachkofski and R. Grandhi. Improved distributed hypercube sampling. In 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Denver, Colorado, 2002. AIAA.

[2] I. Das and J. E. Dennis. Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM Journal on Optimization, 8(3):631-657, 1998.

[3] K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan. A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182-197, 2002.

[4] K. Deb and R. Datta. A fast and accurate solution of constrained optimization problems using a hybrid bi-objective and penalty function approach. In IEEE Congress on Evolutionary Computation, Barcelona, Spain, July 2010. IEEE.

[5] K. Deb and H. Jain. An evolutionary many-objective optimization algorithm using reference-point based non-dominated sorting approach, Part I: Solving problems with box constraints. IEEE Transactions on Evolutionary Computation, 18(4):577-601, 2014.

[6] K. Deb, M. Abouhawwash, and J. Dutta. An optimality theory based proximity measure for evolutionary multi-objective and many-objective optimization. In International Conference on Evolutionary Multi-Criterion Optimization, pages 18-33, Cham, 2015. Springer International Publishing.

[7] K. Deb, R. Hussein, P. Roy, and G. Toscano. Classifying metamodeling methods for evolutionary multi-objective optimization: First results. Springer International Publishing, Cham, 2017.

[8] K. Deb. Multi-Objective Optimization Using Evolutionary Algorithms. John Wiley & Sons, Inc., New York, NY, USA, 2001.

[9] M. Emmerich, K. Giannakoglou, and B. Naujoks. Single- and multiobjective evolutionary optimization assisted by Gaussian random field metamodels. IEEE Transactions on Evolutionary Computation, 10(4):421-439, August 2006.

[10] M. T. M. Emmerich, K. C. Giannakoglou, and B. Naujoks. Single- and multiobjective evolutionary optimization assisted by Gaussian random field metamodels. IEEE Transactions on Evolutionary Computation, 10:421-439, 2006.

[11] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, Cambridge, MA, USA, 2016. deeplearningbook.org.

[12] R. Hussein and K. Deb. A generative Kriging surrogate model for constrained and unconstrained multi-objective optimization. In Proceedings of the Genetic and Evolutionary Computation Conference 2016 (GECCO '16), New York, NY, USA, 2016. ACM.

[13] Y. Jin. Surrogate-assisted evolutionary computation: Recent advances and future challenges. Swarm and Evolutionary Computation, 1(2):61-70, 2011.

[14] D. R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21(4):345-383, 2001.

[15] J. Knowles. ParEGO: A hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Transactions on Evolutionary Computation, 10(1):50-66, January 2006.

[16] K. Miettinen. Nonlinear Multiobjective Optimization. Kluwer, Boston, MA, USA, 1999.

[17] W. Ponweiser, T. Wagner, D. Biermann, and M. Vincze. Multiobjective optimization on a limited budget of evaluations using model-assisted S-metric selection. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008.

[18] P. C. Roy and K. Deb. High dimensional model representation for solving expensive multi-objective optimization problems. In 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, Canada, July 2016. IEEE.

[19] R. E. Steuer. Multiple Criteria Optimization: Theory, Computation and Application. Wiley, New York, NY, USA, 1986.

[20] C. C. Tutum and K. Deb. A multimodal approach for evolutionary multi-objective optimization (MEMO): Proof-of-principle results, pages 3-18. Springer International Publishing, Cham, 2015.

[21] L. Willmes, T. Bäck, Y. Jin, and B. Sendhoff. Comparing neural networks and Kriging for fitness approximation in evolutionary optimization. In Proceedings of the 2003 Congress on Evolutionary Computation (CEC 2003), Canberra, Australia, December 2003. IEEE.

[22] Q. Zhang, W. Liu, E. Tsang, and B. Virginas. Expensive multiobjective optimization by MOEA/D with Gaussian process model. IEEE Transactions on Evolutionary Computation, 14(3):456-474, 2010.



More information

A Model-Based Evolutionary Algorithm for Bi-objective Optimization

A Model-Based Evolutionary Algorithm for Bi-objective Optimization A Model-Based Evolutionary Algorithm for Bi-objective Optimization Aimin Zhou 1, Qingfu Zhang 1, Yaochu Jin 2, Edward Tsang 1 and Tatsuya Okabe 3 1 Department of Computer Science, University of Essex,

More information

Towards Understanding Evolutionary Bilevel Multi-Objective Optimization Algorithm

Towards Understanding Evolutionary Bilevel Multi-Objective Optimization Algorithm Towards Understanding Evolutionary Bilevel Multi-Objective Optimization Algorithm Ankur Sinha and Kalyanmoy Deb Helsinki School of Economics, PO Box, FIN-, Helsinki, Finland (e-mail: ankur.sinha@hse.fi,

More information

An Angle based Constrained Many-objective Evolutionary Algorithm

An Angle based Constrained Many-objective Evolutionary Algorithm APPLIED INTELLIGENCE manuscript No. (will be inserted by the editor) An Angle based Constrained Many-objective Evolutionary Algorithm Yi Xiang Jing Peng Yuren Zhou Li Zefeng Chen Miqing Received: date

More information

Multimodal Optimization Using a Bi-Objective Evolutionary Algorithm

Multimodal Optimization Using a Bi-Objective Evolutionary Algorithm Multimodal Optimization Using a Bi-Objective Evolutionary Algorithm Kalyanmoy Deb and Amit Saha Kanpur Genetic Algorithms Laboratory Indian Institute of Technology Kanpur PIN 8 6, India {deb,asaha}@iitk.ac.in

More information

A Genetic Algorithm Based Augmented Lagrangian Method for Accurate, Fast and Reliable Constrained Optimization

A Genetic Algorithm Based Augmented Lagrangian Method for Accurate, Fast and Reliable Constrained Optimization A Genetic Algorithm Based Augmented Lagrangian Method for Accurate, Fast and Reliable Constrained Optimization Kalyanmoy Deb and Soumil Srivastava Kanpur Genetic Algorithms Laboratory (KanGAL) Indian Institute

More information

An Adaptive Normalization based Constrained Handling Methodology with Hybrid Bi-Objective and Penalty Function Approach

An Adaptive Normalization based Constrained Handling Methodology with Hybrid Bi-Objective and Penalty Function Approach An Adaptive Normalization based Constrained Handling Methodology with Hybrid Bi-Objective and Penalty Function Approach Rituparna Datta and Kalyanmoy Deb Department of Mechanical Engineering, Indian Institute

More information

Author s Accepted Manuscript

Author s Accepted Manuscript Author s Accepted Manuscript On The Use of Two Reference Points in Decomposition Based Multiobjective Evolutionary Algorithms Zhenkun Wang, Qingfu Zhang, Hui Li, Hisao Ishibuchi, Licheng Jiao www.elsevier.com/locate/swevo

More information

Two-Archive Evolutionary Algorithm for Constrained Multi-Objective Optimization

Two-Archive Evolutionary Algorithm for Constrained Multi-Objective Optimization Two-Archive Evolutionary Algorithm for Constrained Multi-Objective Optimization Ke Li #, Renzhi Chen #2, Guangtao Fu 3 and Xin Yao 2 Department of Computer Science, University of Exeter 2 CERCIA, School

More information

SPEA2+: Improving the Performance of the Strength Pareto Evolutionary Algorithm 2

SPEA2+: Improving the Performance of the Strength Pareto Evolutionary Algorithm 2 SPEA2+: Improving the Performance of the Strength Pareto Evolutionary Algorithm 2 Mifa Kim 1, Tomoyuki Hiroyasu 2, Mitsunori Miki 2, and Shinya Watanabe 3 1 Graduate School, Department of Knowledge Engineering

More information

Optimizing Delivery Time in Multi-Objective Vehicle Routing Problems with Time Windows

Optimizing Delivery Time in Multi-Objective Vehicle Routing Problems with Time Windows Optimizing Delivery Time in Multi-Objective Vehicle Routing Problems with Time Windows Abel Garcia-Najera and John A. Bullinaria School of Computer Science, University of Birmingham Edgbaston, Birmingham

More information

Solving Multi-objective Optimisation Problems Using the Potential Pareto Regions Evolutionary Algorithm

Solving Multi-objective Optimisation Problems Using the Potential Pareto Regions Evolutionary Algorithm Solving Multi-objective Optimisation Problems Using the Potential Pareto Regions Evolutionary Algorithm Nasreddine Hallam, Graham Kendall, and Peter Blanchfield School of Computer Science and IT, The Univeristy

More information

ScienceDirect. Testing Optimization Methods on Discrete Event Simulation Models and Testing Functions

ScienceDirect. Testing Optimization Methods on Discrete Event Simulation Models and Testing Functions Available online at www.sciencedirect.com ScienceDirect Procedia Engineering 69 ( 2014 ) 768 777 24th DAAAM International Symposium on Intelligent Manuacturing and Automation, 2013 Testing Optimization

More information

Multi objective optimization of computationally expensive multi-modal functions with RBF surrogates and multi-rule selection

Multi objective optimization of computationally expensive multi-modal functions with RBF surrogates and multi-rule selection J Glob Optim (06) 64:7 DOI 0.007/s0898-05-070-y Multi objective optimization of computationally expensive multi-modal functions with RBF surrogates and multi-rule selection Taimoor Akhtar Christine A.

More information

NCGA : Neighborhood Cultivation Genetic Algorithm for Multi-Objective Optimization Problems

NCGA : Neighborhood Cultivation Genetic Algorithm for Multi-Objective Optimization Problems : Neighborhood Cultivation Genetic Algorithm for Multi-Objective Optimization Problems Shinya Watanabe Graduate School of Engineering, Doshisha University 1-3 Tatara Miyakodani,Kyo-tanabe, Kyoto, 10-031,

More information

A gradient-based multiobjective optimization technique using an adaptive weighting method

A gradient-based multiobjective optimization technique using an adaptive weighting method 10 th World Congress on Structural and Multidisciplinary Optimization May 19-24, 2013, Orlando, Florida, USA A gradient-based multiobjective optimization technique using an adaptive weighting method Kazuhiro

More information

Mobile Robot Static Path Planning Based on Genetic Simulated Annealing Algorithm

Mobile Robot Static Path Planning Based on Genetic Simulated Annealing Algorithm Mobile Robot Static Path Planning Based on Genetic Simulated Annealing Algorithm Wang Yan-ping 1, Wubing 2 1. School o Electric and Electronic Engineering, Shandong University o Technology, Zibo 255049,

More information

Evolutionary Multi-Objective Optimization of Trace Transform for Invariant Feature Extraction

Evolutionary Multi-Objective Optimization of Trace Transform for Invariant Feature Extraction Evolutionary Multi-Objective Optimization of Trace Transform for Invariant Feature Extraction Wissam A. Albukhanajer, Yaochu Jin, Johann Briffa and Godfried Williams Nature Inspired Computing and Engineering

More information

IN many engineering and computational optimization

IN many engineering and computational optimization Optimization of Multi-Scenario Problems Using Multi-Criterion Methods: A Case Study on Byzantine Agreement Problem Ling Zhu, Kalyanmoy Deb and Sandeep Kulkarni Computational Optimization and Innovation

More information

Multi-Objective Pipe Smoothing Genetic Algorithm For Water Distribution Network Design

Multi-Objective Pipe Smoothing Genetic Algorithm For Water Distribution Network Design City University of New York (CUNY) CUNY Academic Works International Conference on Hydroinformatics 8-1-2014 Multi-Objective Pipe Smoothing Genetic Algorithm For Water Distribution Network Design Matthew

More information

Proceedings of the 2012 Winter Simulation Conference C. Laroque, J. Himmelspach, R. Pasupathy, O. Rose, and A.M. Uhrmacher, eds

Proceedings of the 2012 Winter Simulation Conference C. Laroque, J. Himmelspach, R. Pasupathy, O. Rose, and A.M. Uhrmacher, eds Proceedings of the 2012 Winter Simulation Conference C. Laroque, J. Himmelspach, R. Pasupathy, O. Rose, and A.M. Uhrmacher, eds REFERENCE POINT-BASED EVOLUTIONARY MULTI-OBJECTIVE OPTIMIZATION FOR INDUSTRIAL

More information

Using Different Many-Objective Techniques in Particle Swarm Optimization for Many Objective Problems: An Empirical Study

Using Different Many-Objective Techniques in Particle Swarm Optimization for Many Objective Problems: An Empirical Study International Journal of Computer Information Systems and Industrial Management Applications ISSN 2150-7988 Volume 3 (2011) pp.096-107 MIR Labs, www.mirlabs.net/ijcisim/index.html Using Different Many-Objective

More information

Revision of a Floating-Point Genetic Algorithm GENOCOP V for Nonlinear Programming Problems

Revision of a Floating-Point Genetic Algorithm GENOCOP V for Nonlinear Programming Problems 4 The Open Cybernetics and Systemics Journal, 008,, 4-9 Revision of a Floating-Point Genetic Algorithm GENOCOP V for Nonlinear Programming Problems K. Kato *, M. Sakawa and H. Katagiri Department of Artificial

More information

An Evolutionary Multi-Objective Crowding Algorithm (EMOCA): Benchmark Test Function Results

An Evolutionary Multi-Objective Crowding Algorithm (EMOCA): Benchmark Test Function Results Syracuse University SURFACE Electrical Engineering and Computer Science College of Engineering and Computer Science -0-005 An Evolutionary Multi-Objective Crowding Algorithm (EMOCA): Benchmark Test Function

More information

Neural Network Regularization and Ensembling Using Multi-objective Evolutionary Algorithms

Neural Network Regularization and Ensembling Using Multi-objective Evolutionary Algorithms Neural Network Regularization and Ensembling Using Multi-objective Evolutionary Algorithms Yaochu Jin Honda Research Institute Europe Carl-Legien-Str 7 Offenbach, GERMANY Email: yaochujin@honda-ride Tatsuya

More information

Generalized Multiobjective Evolutionary Algorithm Guided by Descent Directions

Generalized Multiobjective Evolutionary Algorithm Guided by Descent Directions DOI.7/s852-4-9255-y Generalized Multiobjective Evolutionary Algorithm Guided by Descent Directions Roman Denysiuk Lino Costa Isabel Espírito Santo Received: August 23 / Accepted: 9 March 24 Springer Science+Business

More information

Two Heuristic Operations to Improve the Diversity of Two-objective Pareto Solutions

Two Heuristic Operations to Improve the Diversity of Two-objective Pareto Solutions Two Heuristic Operations to Improve the Diversity of Two-objective Pareto Solutions Rinku Dewri and Darrell Whitley Computer Science, Colorado State University Fort Collins, CO 80524 {rinku,whitley}@cs.colostate.edu

More information

Evolutionary Multitasking for Multiobjective Continuous Optimization: Benchmark Problems, Performance Metrics and Baseline Results

Evolutionary Multitasking for Multiobjective Continuous Optimization: Benchmark Problems, Performance Metrics and Baseline Results Evolutionary Multitasking for Multiobjective Continuous Optimization: Benchmark Problems, Performance Metrics and Baseline Results arxiv:76.766v [cs.ne] 8 Jun 7 Yuan Yuan, Yew-Soon Ong, Liang Feng, A.K.

More information

An Introduction to Evolutionary Algorithms

An Introduction to Evolutionary Algorithms An Introduction to Evolutionary Algorithms Karthik Sindhya, PhD Postdoctoral Researcher Industrial Optimization Group Department of Mathematical Information Technology Karthik.sindhya@jyu.fi http://users.jyu.fi/~kasindhy/

More information

Indicator-Based Selection in Multiobjective Search

Indicator-Based Selection in Multiobjective Search Indicator-Based Selection in Multiobjective Search Eckart Zitzler and Simon Künzli Swiss Federal Institute of Technology Zurich Computer Engineering and Networks Laboratory (TIK) Gloriastrasse 35, CH 8092

More information