RM-MEDA: A Regularity Model Based Multiobjective Estimation of Distribution Algorithm
Qingfu Zhang, Aimin Zhou and Yaochu Jin

Abstract: Under mild conditions, it can be induced from the Karush-Kuhn-Tucker condition that the Pareto set, in the decision space, of a continuous multiobjective optimization problem is (m−1)-D piecewise continuous, where m is the number of objectives. Based on this regularity property, we propose a Regularity Model based Multiobjective Estimation of Distribution Algorithm (RM-MEDA) for continuous multiobjective optimization problems with variable linkages. At each generation, the proposed algorithm models a promising area in the decision space by a probability distribution whose centroid is an (m−1)-D piecewise continuous manifold. The Local Principal Component Analysis algorithm is used for building such a model. New trial solutions are sampled from the model thus built. A nondominated-sorting-based selection is used for choosing solutions for the next generation. Systematic experiments have shown that, overall, RM-MEDA outperforms three other state-of-the-art algorithms, namely GDE, PCX-NSGA-II and MIDEA, on a set of test instances with variable linkages. We have demonstrated that, compared with GDE, RM-MEDA is not sensitive to algorithmic parameters, and has good scalability to the number of decision variables in the case of nonlinear variable linkages. A few shortcomings of RM-MEDA have also been identified and discussed in this paper.

Index Terms: Multiobjective optimization, estimation of distribution algorithm, regularity, the Karush-Kuhn-Tucker condition, local principal component analysis, variable linkages, sensitivity, scalability.

I. INTRODUCTION

Multiobjective optimization problems (MOPs) arise in many engineering areas. Very often, the objectives in a MOP conflict with each other, and no single solution can optimize all the objectives at the same time. The Pareto set/front is the set of all the optimal tradeoffs in the decision/objective space.
When the preference of a decision maker is unavailable or very difficult to specify mathematically, as it is in many applications, the decision maker usually requires an approximation to the Pareto set and/or Pareto front for making their final choice. Such an approximation can be a finite set of Pareto optimal solutions or a mathematical approximation model of the Pareto set/front. Since the publication of Schaffer's seminal work [], a number of evolutionary algorithms (EAs) have been developed for multiobjective optimization problems [] []. The major advantage of these multiobjective evolutionary algorithms (MOEAs) over other methods is that they work with a population of candidate solutions and thus can produce a set of Pareto optimal solutions to approximate the Pareto front and/or Pareto set in a single run.

Q. Zhang and A. Zhou are with the Department of Computer Science, University of Essex, Wivenhoe Park, Colchester, CO4 SQ, U.K. ({qzhang, azhou}@essex.ac.uk). Y. Jin is with Honda Research Institute Europe, Carl-Legien-Str., 67 Offenbach, Germany (Yaochu.Jin@honda-ri.de).

The current MOEA research mainly focuses on the following highly related issues:

Fitness Assignment and Diversity Maintenance: Like their counterparts for scalar optimization, most MOEAs employ a selection operator to direct their search into promising areas in the decision space. Since Pareto domination is not a complete ordering, conventional selection operators, which were originally developed for scalar objective optimization, cannot be directly applied to multiobjective optimization. Furthermore, the task of most MOEAs is to produce a set of solutions which are evenly distributed on the Pareto front. Thus, a selection operator in MOEAs should not encourage the search to converge to a single point. Therefore, it is not a trivial job to assign a relative fitness value to each individual solution for reflecting its utility in selection in MOEAs. Fitness assignment has been the subject of much research over the last two decades [6] [8].
Some techniques such as fitness sharing and crowding have been frequently used within fitness assignment for maintaining the diversity of search [9] []. Very recently, a method based on decomposition for fitness assignment and diversity maintenance has been proposed in [].

External Population: It is usually very hard to balance diversity and convergence with a single on-line population in current MOEAs. The on-line population may be unable to store some representative solutions found during the search due to its limited size. To overcome this shortcoming, an external population (archive) is often used in MOEAs for recording nondominated solutions found during the search. Some effort has been made to study how to maintain and utilize such an external population [], [4].

Combination of MOEA and Local Search: Combinations of EAs and local search heuristics, often called memetic algorithms, have been shown to outperform traditional evolutionary algorithms in a wide variety of scalar objective optimization problems. A few multiobjective memetic algorithms (MOMAs) have been developed over the past decade []. MOMAs need to consider how to evaluate solution quality for their local search operators. In multiobjective genetic local search (MOGLS) [6], [7], a scalarizing function with random weights is used as the evaluation function in both its local search and selection. Memetic Pareto archived evolution strategy (M-PAES) evaluates solutions by using domination ranks
[8]. Multiobjective evolutionary algorithm based on decomposition (MOEA/D) provides a general framework for using scalar optimization methods in multiobjective optimization [].

Surprisingly, not much work has been done on how to generate new solutions in MOEAs. The implementations of most current MOEAs directly adopt traditional genetic recombination operators such as crossover and mutation. The characteristics of MOPs have not been well utilized in generating new solutions in current MOEAs. Very recently, Deb et al. have compared the performance of several recombination operators on some test problems with variable linkages [9]. Their experimental results suggest that variable linkages could cause difficulties for MOEAs and that recombination operators are crucial to the performance of a MOEA.

It has been observed that, under mild smoothness conditions, the Pareto set (in the decision space) of a continuous MOP is a piecewise continuous (m−1)-dimensional manifold, where m is the number of the objectives. This observation has been used in several mathematical programming methods for approximating the Pareto front or Pareto set [] []. However, as suggested in [], such regularity has not been exploited explicitly by most current MOEAs, except for those combining local search that take advantage of the regularity implicitly, such as the dynamic weighted aggregation method [4]. The work in this paper will show that reproduction of new trial solutions based on this regularity property can effectively cope with the variable linkages in continuous MOPs.

Estimation of distribution algorithms (EDAs) are a new computing paradigm in evolutionary computation []. There is no crossover or mutation in EDAs. Instead, they explicitly extract global statistical information from the selected solutions and build a posterior probability distribution model of promising solutions, based on the extracted information. New solutions are sampled from the model thus built and fully or in part replace the old population.
Several EDAs have been developed for continuous MOPs [6] [8]. However, these EDAs do not take the regularity into consideration in building probability models. Since probability modelling techniques under regularity have been widely investigated in the area of statistical learning [9], [], it is very suitable to take advantage of the regularity in the design of EDAs for a continuous MOP.

As one of the first attempts to capture and utilize the regularity of the Pareto set in the decision space, we have recently proposed using local principal component analysis for extracting regularity patterns of the Pareto set from the previous search. We have studied two naive hybrid MOEAs in which some trial solutions are generated by traditional genetic operators and others by sampling from probability models built based on regularity patterns [], []. Our preliminary experimental results are very encouraging. This paper conducts a further and thorough investigation along this line. In comparison with our work presented in [] and [], the main new contributions of this paper include:

Based on the regularity property of continuous MOPs, an estimation of distribution algorithm for continuous multiobjective optimization has been developed. Unlike our previous algorithms, it does not use crossover or mutation operators for producing new offspring. Besides, the parameter estimation for model building in this paper is more statistically sound and yet simpler than that used in [] and []. Our experimental studies have shown that the algorithm presented in this paper performs similarly to its predecessors.

Systematic experiments have been conducted to compare the proposed algorithm with three other state-of-the-art algorithms on a set of bi- or tri-objective test instances with linear or nonlinear variable linkages. Only the original NSGA-II with SBX [6] was compared, on a few test instances, in [] and [].
The sensitivity to algorithmic parameters, the scalability to the number of decision variables, and the CPU-time costs of the proposed algorithm and Generalized Differential Evolution (GDE) [] have been experimentally investigated.

The rest of the paper is organized as follows. The next section introduces continuous multiobjective optimization problems, Pareto optimality and the regularity property induced from the Karush-Kuhn-Tucker condition. Section III presents the motivation and the details of the proposed algorithm. Section IV briefly describes the other three algorithms used in comparison. Section V presents and analyzes the experimental results. Section VI experimentally studies the sensitivity and scalability, and presents the CPU-time costs of the proposed algorithm and GDE. The final section concludes the paper and outlines future research work.

II. PROBLEM DEFINITION

In this paper, we consider the following continuous multiobjective optimization problem (continuous MOP):

  minimize F(x) = (f1(x), f2(x), ..., fm(x))^T    (1)
  subject to x ∈ X,

where X ⊆ R^n is the decision space and x = (x1, ..., xn)^T ∈ R^n is the decision variable vector. F : X → R^m consists of m real-valued continuous objective functions fi(x) (i = 1, 2, ..., m). R^m is the objective space. In the case of m = 2, this problem is referred to as a continuous bi-objective optimization problem.

Let a = (a1, ..., am)^T, b = (b1, ..., bm)^T ∈ R^m be two vectors; a is said to dominate b, denoted by a ≺ b, if ai ≤ bi for all i = 1, ..., m, and a ≠ b. A point x* ∈ X is called (globally) Pareto optimal if there is no x ∈ X such that F(x) ≺ F(x*). The set of all the Pareto optimal points, denoted by PS, is called the Pareto set. The set of all the Pareto objective vectors, PF = {y ∈ R^m | y = F(x), x ∈ PS}, is called the Pareto front [], [].
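In code, the dominance relation above (for minimization) can be checked directly. A minimal sketch in Python; the function name `dominates` is our own label, not from the paper:

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization):
    a_i <= b_i for all i, and a != b (hence at least one strict <)."""
    return all(ai <= bi for ai, bi in zip(a, b)) and \
           any(ai < bi for ai, bi in zip(a, b))
```

For example, (1, 2) dominates (2, 2), while (1, 2) and (2, 1) are mutually nondominated, which is why Pareto domination is not a complete ordering.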
(Footnote: Due to the page limit of this paper, the experimental comparison between this algorithm and its predecessors will be presented in a forthcoming working report, but not in this paper.)

Under certain smoothness assumptions, it can be induced from the Karush-Kuhn-Tucker condition that the PS of a
continuous MOP defines a piecewise continuous (m−1)-dimensional manifold in the decision space [], []. Therefore, the PS of a continuous bi-objective optimization problem is a piecewise continuous curve in R^n, while the PS of a continuous MOP with three objectives is a piecewise continuous surface. In fact, the PSs of the ZDT test problems [], [4], [], the widely-used test instances for continuous bi-objective optimization problems in the evolutionary computation community, are line segments in the decision space.

III. ALGORITHM

A. Basic Idea

EDAs build a probability model for characterizing promising solutions in the search space based on statistical information extracted from the previous search, and then sample new trial solutions from the model thus built. A very essential issue is what kind of model one should use for such a task. A good model should be easy to build and sample, and be able to describe promising areas with good fidelity [6] [8]. The population in the decision space in an EA for (1) will hopefully approximate the PS and be uniformly scattered around the PS as the search goes on. Therefore, we can envisage the points in the population as independent observations of a random vector ξ ∈ R^n whose centroid is the PS of (1). Since the PS is an (m−1)-dimensional piecewise continuous manifold, ξ can be naturally described by

  ξ = ζ + ε,    (2)

where ζ is uniformly distributed over a piecewise continuous (m−1)-dimensional manifold and ε is an n-dimensional zero-mean noise vector. Fig. 1 illustrates this basic idea.

Fig. 1. Illustration of the basic idea. Individual solutions should be scattered around the PS in the decision space in a successful MOEA.

B. Algorithm Framework

At each generation t, the proposed algorithm for (1), called Regularity Model based Multiobjective Estimation of Distribution Algorithm (RM-MEDA) hereafter, maintains:

  a population of N solutions (i.e. points in X): Pop(t) = {x^1, x^2, ..., x^N}, and
  their F-values: F(x^1), F(x^2), ..., F(x^N).
The algorithm works as follows:

RM-MEDA
Step 0 Initialization: Set t := 0. Generate an initial population Pop(0) and compute the F-value of each individual solution in Pop(0).
Step 1 Stopping Condition: If the stopping condition is met, stop and return the nondominated solutions in Pop(t) and their corresponding F vectors. All these F vectors constitute an approximation to the PF.
Step 2 Modelling: Build the probability model (2) for modelling the distribution of the solutions in Pop(t).
Step 3 Reproduction: Generate a new solution set Q from the model (2). Evaluate the F-value of each solution in Q.
Step 4 Selection: Select N individuals from Q ∪ Pop(t) to create Pop(t + 1).
Step 5 Set t := t + 1 and go to Step 1.

In the following, we discuss the implementation of modelling, reproduction and selection in the above algorithm.

C. Modelling

Fitting the model (2) to the points in Pop(t) is highly related to principal curve or surface analysis, which aims at finding a central curve or surface of a set of points in R^n [9]. However, most current algorithms for principal curve or surface analysis are rather expensive due to the intrinsic complexity of their models. Since modelling needs to be conducted at each generation of RM-MEDA, a too complicated model is not affordable in RM-MEDA. On the other hand, a complicated model is not necessary, since the actual distribution of the points in Pop(t) could hardly be exactly described by (2). In our implementation, we assume, for the sake of simplicity, that the centroid of ξ consists of K manifolds Ψ^1, ..., Ψ^K (i.e. ζ in (2) is uniformly distributed on these manifolds), where each Ψ^j is an (m−1)-dimensional hyper-rectangle. Particularly, in the case of two objectives (i.e. m = 2), each Ψ^j is a line segment in R^n; in the case of three objectives (i.e. m = 3), each Ψ^j is a 2-D rectangle in R^n.

Let A^j denote the event that ζ is from Ψ^j. Then

  Σ_{j=1}^{K} Prob(A^j) = 1.

We make the following assumptions under the condition that the event A^j happens:

  ζ is uniformly randomly generated over Ψ^j;
  ε ~ N(0, σ_j I), where I is the n × n identity matrix and σ_j > 0. In other words, all the components in ε are i.i.d.

The modelling problem, under the above assumptions, becomes estimating Ψ^j, σ_j and Prob(A^j) (j = 1, 2, ..., K). To solve it, we need to first partition Pop(t) into K disjoint clusters S^1, ..., S^K such that the points in S^j can be regarded as being sampled under the condition of A^j. Then we can estimate the parameters.
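The framework steps above can be sketched as a generic loop. All the callable names (`build_model`, `sample`, `select`, and so on) are our own illustrative placeholders for the operators detailed in the following subsections, not names from the authors' C implementation:

```python
def rm_meda(evaluate, init_pop, build_model, sample, select, max_gen):
    """Skeleton of the RM-MEDA framework: initialize, then iterate
    modelling, reproduction and selection until the generation budget."""
    pop = init_pop()                             # initialization
    fvals = [evaluate(x) for x in pop]
    for t in range(max_gen):                     # stopping condition
        model = build_model(pop)                 # modelling, eq. (2)
        offspring = sample(model, len(pop))      # reproduction
        off_f = [evaluate(x) for x in offspring]
        # selection: keep N individuals from the union of parents and offspring
        pop, fvals = select(pop + offspring, fvals + off_f)
    return pop, fvals                            # nondominated subset approximates the PF
```

Plugging in trivial stand-in operators is enough to exercise the control flow before implementing the real modelling and selection.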
In this paper, we use the (m−1)-dimensional Local Principal Component Analysis ((m−1)-D Local PCA) algorithm [4] for partitioning Pop(t). Let S be a finite subset of R^n. The (sample) mean of S is

  x̄ = (1/|S|) Σ_{x∈S} x,

where |S| is the cardinality of S. The (sample) covariance matrix of the points in S is

  Cov = (1/|S|) Σ_{x∈S} (x − x̄)(x − x̄)^T.

The i-th principal component U^i is a unit eigenvector associated with the i-th largest eigenvalue of the matrix Cov. The affine (m−1)-dimensional principal subspace of the points in S is then defined as

  {x ∈ R^n | x = x̄ + Σ_{i=1}^{m−1} θ_i U^i, θ_i ∈ R, i = 1, 2, ..., m − 1}.

The partition S^1, ..., S^K obtained by the (m−1)-D Local PCA algorithm minimizes the following error function:

  Σ_{j=1}^{K} Σ_{x∈S^j} dist(x, L^{m−1}_j)²,

where L^{m−1}_j is the affine (m−1)-dimensional principal subspace of the points in S^j, and dist(x, L^{m−1}_j) is the Euclidean distance from x to its projection in L^{m−1}_j.

The (m−1)-D Local PCA [4] partitions Pop(t) = {x^1, ..., x^N} into S^1, ..., S^K in the following iterative way:

Step 1 Randomly initialize each L^{m−1}_j to be an affine (m−1)-dimensional space containing a point randomly chosen from Pop(t).
Step 2 Partition the points in Pop(t) into K disjoint clusters S^1, ..., S^K:
  S^j = {x | x ∈ Pop(t), and dist(x, L^{m−1}_j) ≤ dist(x, L^{m−1}_k) for all k ≠ j}.
Step 3 Set L^{m−1}_j, j = 1, ..., K, to be the affine (m−1)-dimensional principal subspace of the points in S^j.
Step 4 Iterate Steps 2 and 3 until no change in partition is made.

The Local PCA algorithm is more suitable than the widely used K-means clustering method for partitioning Pop(t) for building the model (2), since we assume that the centroid of each cluster should be an (m−1)-D hyper-rectangle, whereas in K-means a cluster centroid is a point [].

Schematically, modelling (i.e. estimating Ψ^j and σ_j) in RM-MEDA works as follows:

Modelling by the (m−1)-D Local PCA
Step 1 Partition Pop(t) into K disjoint clusters S^1, ..., S^K by the (m−1)-D Local PCA algorithm.
Step 2 For each cluster S^j, let x̄^j be its mean and U^j_i be its i-th principal component. Compute

  a^j_i = min_{x∈S^j} (x − x̄^j)^T U^j_i    (3)
  b^j_i = max_{x∈S^j} (x − x̄^j)^T U^j_i    (4)

for i = 1, 2, ..., m − 1. Then set

  Ψ^j = {x ∈ R^n | x = x̄^j + Σ_{i=1}^{m−1} α_i U^j_i, a^j_i − 0.25(b^j_i − a^j_i) ≤ α_i ≤ b^j_i + 0.25(b^j_i − a^j_i), i = 1, ..., m − 1}.

Let λ^j_i be the i-th largest eigenvalue of the covariance matrix of the points in S^j, and set

  σ_j = (1/(n − m + 1)) Σ_{i=m}^{n} λ^j_i.    (5)

We would like to make the following comments on the above modelling method:

The smallest hyper-rectangle containing the projections of all the points of S^j in the affine (m−1)-dimensional principal subspace of S^j is

  Φ^j = {x ∈ R^n | x = x̄^j + Σ_{i=1}^{m−1} α_i U^j_i, a^j_i ≤ α_i ≤ b^j_i, i = 1, ..., m − 1}.

Ψ^j in Step 2 extends Φ^j by 25% along each of the directions U^j_1, ..., U^j_{m−1}. The motivation behind this extension is that Pop(t) often cannot cover the whole PS, and thus the union of all the smallest hyper-rectangles Φ^j can only approximate part of the PS, while the union of all the Ψ^j's may provide a good approximation to the PS. Fig. 2 illustrates this motivation.

Fig. 2. Illustration of extension: the number of clusters (K) is 3 in this illustrative example. Φ^1, Φ^2 and Φ^3 could not approximate the PS very well; their extensions Ψ^1, Ψ^2 and Ψ^3, however, can provide a better approximation.

λ^j_m, λ^j_{m+1}, ..., λ^j_n characterize the deviation of the points in S^j from its centroid. It would be ideal to model ε as

  ε = Σ_{i=m}^{n} sqrt(λ^j_i) U^j_i ε_i,    (6)

where ε_m, ..., ε_n are n − m + 1 independent N(0, 1) noises. Since it is often the case that m ≪ n and
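For a single cluster S^j, the statistics above (mean, principal directions, the extended projection ranges defining Ψ^j, and σ_j) can be sketched with NumPy, assuming the cluster has already been produced by the Local PCA partition. Note that `np.cov` divides by |S|−1 rather than the |S| used in the paper's definition, which only rescales the eigenvalues by a constant factor; all function and variable names are our own:

```python
import numpy as np

def cluster_model(points, m):
    """Per-cluster model estimation for RM-MEDA.
    points: (|S_j|, n) array; m: number of objectives.
    Returns (xbar, U, lo, hi, sigma): the cluster mean, the first m-1
    principal directions, the 25%-extended projection ranges, and sigma_j."""
    X = np.asarray(points, dtype=float)
    n = X.shape[1]
    xbar = X.mean(axis=0)
    C = np.cov(X, rowvar=False)                  # (sample) covariance matrix
    vals, vecs = np.linalg.eigh(C)               # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]               # reorder to descending
    vals, vecs = vals[order], vecs[:, order]
    U = vecs[:, :m - 1]                          # first m-1 principal components
    theta = (X - xbar) @ U                       # projections onto the subspace
    a, b = theta.min(axis=0), theta.max(axis=0)  # as in eqs. (3) and (4)
    ext = 0.25 * (b - a)                         # 25% extension on each side
    sigma = vals[m - 1:].sum() / (n - m + 1)     # as in eq. (5)
    return xbar, U, a - ext, b + ext, sigma
```

For three collinear points in R^2 with m = 2, the trailing eigenvalue, and hence σ, is numerically zero, and the extended range is 1.5 times the projected spread, matching the 25%-per-side extension.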
λ^j_i, i = m, ..., n, is very small compared with the first (m − 1) largest eigenvalues, the difference, in terms of distribution, between the noise sampled from N(0, σ_j I) and that in (6) is tiny when σ_j is defined as in (5). Moreover, sampling from (6) is more complicated than sampling from N(0, σ_j I). This is why we use N(0, σ_j I): it facilitates the sampling procedure. σ_j also controls the degree of exploration of the search in the reproduction step.

Since both the Local PCA and the noise model N(0, σ_j I) are rotationally invariant, so is the above modelling method. Therefore, RM-MEDA is invariant under any orthogonal coordinate rotation, which is a property not shared by other algorithms using crossover and mutation.

D. Reproduction

In our implementation of Reproduction in this paper, N new solutions are generated. The first issue we need to consider is how many new trial solutions should be generated around each Ψ^j. Since it is desirable that the final solutions are uniformly distributed on the PS, we set

  Prob(A^j) = vol(Ψ^j) / Σ_{i=1}^{K} vol(Ψ^i),

where vol(Ψ^j) is the (m−1)-D volume of Ψ^j. Therefore, the probability that a new solution is generated around Ψ^j is proportional to vol(Ψ^j). A simple reproduction scheme for generating a new solution x, used in our experimental studies, works as follows:

Reproduction by Sampling (Reproduction/S)
Step 1 Randomly generate an integer τ ∈ {1, 2, ..., K} with Prob(τ = k) = vol(Ψ^k) / Σ_{j=1}^{K} vol(Ψ^j).
Step 2 Uniformly randomly generate a point x' from Ψ^τ. Generate a noise vector ε' from N(0, σ_τ I).
Step 3 Return x = x' + ε'.

In Step 3 of RM-MEDA, N new solutions can be produced by repeating Reproduction/S N times. In Reproduction/S, x', the main part of x, is from Ψ^τ, which is the extension of Φ^τ. Therefore, the search will, hopefully, perform exploitation around the PS. Such exploitation, guided by the extension, is mainly along the first m − 1 principal components. ε', the noise part of x, is necessary since Ψ^τ is just an approximation to part of the PS.
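Reproduction/S can be sketched as follows, consuming per-cluster tuples (mean, U, lo, hi, σ) of the kind produced at the modelling step. The names are illustrative, and since σ_j plays the role of a variance, the noise standard deviation is √σ_j:

```python
import random

def reproduction_s(models, volumes):
    """One call of Reproduction/S: returns a single trial solution x.
    models: list of per-cluster tuples (xbar, U, lo, hi, sigma), where U is
    an n x (m-1) list of rows; volumes: vol(Psi_k) for each cluster."""
    # Step 1: pick cluster k with probability proportional to vol(Psi_k)
    k = random.choices(range(len(models)), weights=volumes)[0]
    xbar, U, lo, hi, sigma = models[k]
    n = len(xbar)
    # Step 2: uniform point x' on the extended hyper-rectangle Psi_k ...
    alpha = [random.uniform(l, h) for l, h in zip(lo, hi)]
    x = list(xbar)
    for i, a in enumerate(alpha):
        for d in range(n):
            x[d] += a * U[d][i]
    # ... plus Gaussian noise eps ~ N(0, sigma * I); std is sqrt(sigma)
    std = sigma ** 0.5
    # Step 3: return x = x' + eps
    return [xd + random.gauss(0.0, std) for xd in x]
```

With σ = 0 and a degenerate range (lo = hi), the sample is deterministic, which makes the geometry easy to check by hand.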
Moreover, it provides diversity for the further search.

E. Selection

In principle, any selection operator for MOEAs can be used in RM-MEDA. The selection procedure used in the experimental studies of this paper is based on the nondominated sorting of NSGA-II [6]. It is called NDS-Selection in the following. NDS-Selection partitions Q ∪ Pop(t) into different fronts F_1, ..., F_l such that the j-th front F_j contains all the nondominated solutions in {Q ∪ Pop(t)} \ (∪_{i=1}^{j−1} F_i). Therefore, there is no solution in {Q ∪ Pop(t)} \ (∪_{i=1}^{j} F_i) that could dominate a solution in F_j. Roughly speaking, F_1 is the best nondominated front in Q ∪ Pop(t), F_2 is the second best nondominated front, and so on.

The crowding distance, d_c(x, S), of a point x in a set S is defined as the average side length of the largest m-D rectangle in the objective space subject to two constraints: a) each of its sides is parallel to a coordinate axis, and b) F(x) is the only point in F(S) = {F(y) | y ∈ S} that is an interior point of the rectangle. A solution with a larger crowding distance should be given priority to enter Pop(t + 1), since it could increase the diversity of Pop(t + 1). The details of the selection procedure are given as follows:

NDS-Selection
Step 1 Partition Q ∪ Pop(t) into different fronts F_1, ..., F_l by using the fast nondominated sorting approach. Set Pop(t + 1) = ∅ and k = 0. Do: k = k + 1, Pop(t + 1) = Pop(t + 1) ∪ F_k, until |Pop(t + 1)| ≥ N.
Step 2 While |Pop(t + 1)| > N, do: for all the members in F_k ∩ Pop(t + 1), compute their crowding distances in F_k ∩ Pop(t + 1); remove the element in F_k ∩ Pop(t + 1) with the smallest crowding distance from Pop(t + 1). In the case when there is more than one member with the smallest crowding distance, randomly choose one and remove it.

This selection operator selects N members from Q ∪ Pop(t) to form Pop(t + 1). Step 1 implements an elitism mechanism: the best fronts in Q ∪ Pop(t) are added to Pop(t + 1).
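The crowding distance above can be computed in the usual NSGA-II fashion: for each objective, sort the front and take the gap between a point's two neighbours as the rectangle's side length along that axis, with boundary points receiving an infinite distance so they are always retained. A minimal, unnormalized sketch (names are ours):

```python
def crowding_distances(front):
    """front: list of objective vectors of one nondominated front.
    Returns, for each point, the average (over objectives) side length of
    the neighbour rectangle; boundary points get infinity."""
    N, m = len(front), len(front[0])
    d = [0.0] * N
    for j in range(m):
        order = sorted(range(N), key=lambda i: front[i][j])
        d[order[0]] = d[order[-1]] = float('inf')   # extremes always kept
        for pos in range(1, N - 1):
            i = order[pos]
            if d[i] != float('inf'):
                # side length along objective j: gap between the two neighbours
                d[i] += front[order[pos + 1]][j] - front[order[pos - 1]][j]
    return [di / m if di != float('inf') else di for di in d]
```

For the front {(0, 2), (1, 1), (2, 0)}, the middle point's rectangle spans 2 units along each objective, giving an average side length of 2, while both extremes are infinite.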
After Step 1,

  Pop(t + 1) = ∪_{j=1}^{k} F_j;

the last front added to Pop(t + 1) is F_k, the k-th front, which contains the worst solutions in Pop(t + 1). When |Pop(t + 1)| is larger than N, |F_k| will be larger than |Pop(t + 1)| − N. Step 2 removes the |Pop(t + 1)| − N worst solutions from Pop(t + 1) to reduce its size to N. The crowding distance is used to compare the quality of solutions in Step 2.

This selection procedure is the same as that used in NSGA-II, except that we remove solutions from Pop(t + 1) one by one and re-calculate the crowding distances before deciding which solution should be deleted from Pop(t + 1), which can increase the diversity of Pop(t + 1) at extra computational cost. We noticed that GDE [] uses a very similar selection.

IV. THREE OTHER ALGORITHMS IN COMPARISON

The major purpose of this work is to tackle variable linkages in continuous MOPs. The studies conducted in [9] show that PCX-NSGA-II and GDE [] perform better than other algorithms for continuous MOPs with variable linkages. In this paper, we compare RM-MEDA with these two algorithms.
Since RM-MEDA is an EDA based on regularity, we also compare it with MIDEA [7], an EDA that does not use regularity, to investigate whether or not using regularity can improve the performance of EDAs.

A. Generalized Differential Evolution (GDE) []

Let x = (x1, ..., xn)^T be a solution in the current population. Its offspring y = (y1, ..., yn)^T in GDE is generated as follows:

1. Randomly select from the current population three distinct solutions x^{r1}, x^{r2}, x^{r3}, and randomly choose an index ∈ {1, 2, ..., n}.
2. Set z = x^{r1} + F · (x^{r2} − x^{r3}).
3. Set the i-th element of y:

  y_i = z_i if rand < CR or i = index,
  y_i = x_i otherwise,

where rand is a random number uniformly generated from [0, 1], and F and CR are two control parameters.

In GDE, all the solutions in the current population first generate their offspring; these offspring and their parents undergo a selection operator similar to the one used in RM-MEDA, and then the selected solutions form a new population for the next generation. The details of GDE can be found in []. The code of GDE used in comparison is written in C by ourselves.

B. PCX-NSGA-II [9]

Parent-Centric Recombination (PCX) [4] generates a trial solution y from µ parent solutions x^1, ..., x^µ as follows:

1. Select a solution x^p from x^1, ..., x^µ and let it be the index parent.
2. Compute the direction vector d^p to x^p from the center of these µ parent solutions, i.e.

  d^p = x^p − (1/µ) Σ_{i=1}^{µ} x^i,

and µ − 1 orthogonal directions e^1, ..., e^{µ−1} to d^p. Compute the average perpendicular distance, D̄, of the other µ − 1 parents to the line passing through x^p and the center.
3. Set

  y = x^p + ω_ζ d^p + Σ_{i=1, i≠p}^{µ} ω_η D̄ e^i,

where ω_ζ and ω_η are two zero-mean normally distributed random variables with variances σ²_ζ and σ²_η, respectively.

In PCX-NSGA-II [9], PCX with µ = 3 is used to generate a set of new trial solutions. All of these new solutions are mutated by a polynomial mutation operator.
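The PCX step above can be sketched in NumPy. The Gram-Schmidt construction of the orthogonal directions and all parameter defaults are our own illustrative choices, not details taken from the PCX-NSGA-II code:

```python
import numpy as np

def pcx(parents, p, sigma_zeta=0.4, sigma_eta=0.4, rng=None):
    """Parent-centric recombination sketch:
    y = x^p + w_zeta * d^p + sum_i w_eta * D * e^i.
    parents: (mu, n) array of parent solutions; p: the index parent."""
    rng = rng if rng is not None else np.random.default_rng()
    X = np.asarray(parents, dtype=float)
    g = X.mean(axis=0)                           # center of the mu parents
    d_p = X[p] - g                               # direction to the index parent
    d_hat = d_p / (np.linalg.norm(d_p) + 1e-300)
    others = [i for i in range(len(X)) if i != p]
    perps = []                                   # perpendicular distances to the line
    E = []                                       # orthonormal e^i, perpendicular to d_p
    for i in others:
        v = X[i] - g
        w = v - (v @ d_hat) * d_hat              # component perpendicular to d_p
        perps.append(np.linalg.norm(w))
        for e in E:                              # Gram-Schmidt against earlier e's
            w = w - (w @ e) * e
        nw = np.linalg.norm(w)
        if nw > 1e-12:
            E.append(w / nw)
    D = np.mean(perps)                           # average perpendicular distance
    y = X[p] + rng.normal(0.0, sigma_zeta) * d_p
    for e in E:
        y = y + rng.normal(0.0, sigma_eta) * D * e
    return y
```

Setting both standard deviations to zero collapses the offspring onto the index parent, a handy sanity check on the geometry.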
The PCX-NSGA-II used in comparison is implemented in C by modifying the code of SBX-NSGA-II from KanGAL's webpage. In our implementation of PCX-NSGA-II, the NDS-Selection operator described in Section III-E is used for selecting from the old and new solutions to create the population for the next generation; this is the only difference from the implementation of Deb et al. in [9]. Our first experiments show that our implementation is slightly better in terms of solution quality. The motivation for using NDS-Selection in the implementation of PCX-NSGA-II is to have a fair comparison with RM-MEDA.

C. MIDEA [7]

MIDEA [7] is an EDA for MOPs. It has been applied to continuous MOPs. Its major difference from RM-MEDA is that it uses a mixture of Gaussians. It does not take the regularity of the PS into consideration and thus does not impose any constraints on the distribution of the centers of the Gaussians. A specialized diversity-preserving selection is used in MIDEA. The number of Gaussians is determined by an adaptive clustering method. The C code of MIDEA written by its authors is used in our experimental studies.

V. COMPARISON STUDIES

A. Performance Metric

The inverted generational distance (IGD) is used in assessing the performance of the algorithms in our experimental studies. Let P* be a set of uniformly distributed points in the objective space along the PF, and let P be an approximation to the PF. The inverted generational distance from P* to P is defined as:

  D(P*, P) = (Σ_{v∈P*} d(v, P)) / |P*|,

where d(v, P) is the minimum Euclidean distance between v and the points in P. If P* is large enough to represent the PF very well, D(P*, P) could measure both the diversity and the convergence of P in a sense. To have a low value of D(P*, P), P must be very close to the PF and cannot miss any part of the whole PF. In our experiments, P* consists of 500 evenly distributed points in the PF for each test instance with 2 objectives, and 1,000 points for each test instance with 3 objectives.
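The metric can be computed in a few lines of pure Python (`math.dist` requires Python 3.8+; the function name is our own):

```python
import math

def igd(pf_ref, approx):
    """Inverted generational distance D(P*, P): the average, over the
    reference points v in P*, of the distance from v to the nearest
    point of the approximation P."""
    return sum(min(math.dist(v, u) for u in approx)
               for v in pf_ref) / len(pf_ref)
```

A one-point approximation of a two-point reference front is penalized for the part of the front it misses, which is exactly how IGD captures diversity as well as convergence.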
IGD has recently been used by some other researchers (e.g. [4]). As its name suggests, it is an inverted variation of the widely-used generational distance (GD) performance metric [4]. GD represents the average distance from the points of an approximation to the true PF, and therefore cannot effectively measure the diversity of the approximation. There are several different ways of computing and averaging the distances in the GD performance metrics (e.g. [4] and [6]); the version of IGD used in this paper inverts the Υ version of GD in [6].

B. General Experimental Setting

RM-MEDA is implemented in C. The machine used in our experiments is a Pentium(R) 4 (2.4GHz, 1GB RAM). In this section, the experimental setting is as follows:

The Number of New Trial Solutions Generated at Each Generation: The number of new solutions generated at each generation in all four algorithms is set to 100 for all the test instances with two objectives, and 200 for the test instances with three objectives. Since in RM-MEDA, GDE and PCX-NSGA-II, the population size is
TABLE I
TEST INSTANCES

F1 (variant of ZDT1). Domain: [0, 1]^n. Convex PF; linear variable linkage.
  f1(x) = x1
  f2(x) = g(x)[1 − sqrt(f1(x)/g(x))]
  g(x) = 1 + 9(Σ_{i=2}^{n} (x_i − x_1)²)/(n − 1)

F2 (variant of ZDT2). Domain: [0, 1]^n. Concave PF; linear variable linkage.
  f1(x) = x1
  f2(x) = g(x)[1 − (f1(x)/g(x))²]
  g(x) = 1 + 9(Σ_{i=2}^{n} (x_i − x_1)²)/(n − 1)

F3 (variant of ZDT6). Domain: [0, 1]^n. Concave PF; nonuniformly distributed solutions; linear variable linkage.
  f1(x) = 1 − exp(−4x1) sin⁶(6πx1)
  f2(x) = g(x)[1 − (f1(x)/g(x))²]
  g(x) = 1 + 9[(Σ_{i=2}^{n} (x_i − x_1)²)/9]^{0.25}

F4 (variant of DTLZ2). Domain: [0, 1]^n. Concave PF; linear variable linkage; 3 objectives.
  f1(x) = cos((π/2)x1) cos((π/2)x2)(1 + g(x))
  f2(x) = cos((π/2)x1) sin((π/2)x2)(1 + g(x))
  f3(x) = sin((π/2)x1)(1 + g(x))
  g(x) = Σ_{i=3}^{n} (x_i − x_1)²

F5-F8. Identical to F1-F4, respectively, except that each linear linkage term (x_i − x_1) in g(x) is replaced by the nonlinear term (x_i² − x_1); the PF characteristics are unchanged (convex for F5; concave for F6-F8; nonuniformly distributed for F7; 3 objectives for F8), with nonlinear variable linkage.

F9. Domain: [0, 1] × [−1, 1]^{n−1}. Concave PF; nonlinear variable linkage; multimodal with Griewank function.
  f1(x) = x1
  f2(x) = g(x)[1 − sqrt(f1(x)/g(x))]
  g(x) = Σ_{i=2}^{n} (x_i² − x_1)²/4000 − Π_{i=2}^{n} cos((x_i² − x_1)/sqrt(i)) + 2

F10. Domain: [0, 1] × [−1, 1]^{n−1}. Concave PF; nonlinear variable linkage; multimodal with Rastrigin function.
  f1(x) = x1
  f2(x) = g(x)[1 − sqrt(f1(x)/g(x))]
  g(x) = 1 + 10(n − 1) + Σ_{i=2}^{n} [(x_i² − x_1)² − 10cos(2π(x_i² − x_1))]

the same as the number of new trial solutions generated at each generation, these three algorithms have the same population size for each test instance in our experiments.

The Number of Decision Variables: It is set to 30 for all the test instances.

Parameter Setting in RM-MEDA: In the Local PCA algorithm, K, the number of clusters, is set to 5.
Parameter Setting in GDE: Both CR and F in the DE operator are set to 1, which was the best setting for most test instances in the simulation studies in [9].

Parameter Setting in PCX-NSGA-II: σ in PCX is set to 0.4, which worked very well for most test instances in the simulation studies in [9].

Parameter Setting in MIDEA: τ is set to 0.3; therefore, the population size is 100/0.3 ≈ 334 in the case of two objectives and 200/0.3 ≈ 667 in the case of three objectives. In [7], large populations were used in the simulation studies. Our first experiments show that a large population does not significantly improve the performance of MIDEA on the test instances with variable linkages. Following [7], δ is set to its recommended value.

Number of Runs and Stopping Condition: We run each
algorithm independently 20 times for each test instance. The algorithms stop after a given number of F-function evaluations. The maximal number of function evaluations in each algorithm is 10,000 for F1, F2, F5 and F6; 40,000 for F4 and F8; and 100,000 for F3, F7, F9 and F10.

Dealing with Boundary in RM-MEDA: In all the test instances, the feasible decision space is a hyper-rectangle. If an element of a solution x, sampled from a model in RM-MEDA, is out of the boundary, we simply reset its value to a randomly selected value inside the boundary. This method is very similar to that used in GDE.

Initialization: Initial populations in all the algorithms are randomly generated.

C. Test Instances with Linear Variable Linkages

We first compare RM-MEDA with the three other algorithms on four continuous MOPs with linear variable linkages. The test instances F1 to F4 in Table I are used for this purpose.

Fig. 3. The evolution of the average D-metric of the nondominated solutions in the current populations among 20 independent runs versus the number of function evaluations in the four algorithms for F1.

Fig. 4. The evolution of the average D-metric of the nondominated solutions in the current populations among 20 independent runs versus the number of function evaluations in the four algorithms for F2.

Fig. 5. The evolution of the average D-metric of the nondominated solutions in the current populations among 20 independent runs versus the number of function evaluations in the four algorithms for F3.

Fig. 6. The evolution of the average D-metric of the nondominated solutions in the current populations among 20 independent runs versus the number of function evaluations in the four algorithms for F4.

F1-F4 are variants of ZDT1, ZDT2, ZDT6 [4] and DTLZ2 [44], respectively. Due to the g(x) used in these instances, F1-F3 have the same PS. Their PS is a line segment:

  x1 = x2 = ... = xn, 0 ≤ x_i ≤ 1, i = 1, ..., n.

The PS of F4 is a 2-D rectangle: x1 = x3 = x4 =
= x_n, 0 ≤ x_i ≤ 1, 1 ≤ i ≤ n.

There are linear variable linkages in these test instances. The variable linkages in these instances [4] are obtained by performing the following linear mapping on the variables in the original ZDT and DTLZ instances:

x_1 → x_1, x_i → x_i − x_1, i = 2, ..., n.

Therefore, our scheme for introducing variable linkages can be regarded as a special implementation of the general strategy
proposed in [9], which is based on linear or nonlinear variable mappings.

Fig. 7. The final nondominated fronts found by each algorithm on F1. The left panels show the nondominated fronts with the lowest D-metric obtained by each algorithm, while the right panels plot all the fronts found together.

Fig. 8. The final nondominated fronts found by each algorithm on F2. The left panels show the nondominated fronts with the lowest D-metric obtained by each algorithm, while the right panels plot all the fronts found together.

Figs. 3 to 6 show the evolution of the average D-metric of the nondominated solutions in the current populations among 20 independent runs with the number of function evaluations in the four algorithms. As many researchers have pointed out, no single metric is always able to rank different algorithms appropriately. For this reason, Figs. 7 to 10 plot the nondominated front with the lowest D-metric obtained in the 20 runs of each algorithm on each test instance. All the fronts found are also plotted together in Figs. 7 to 10 to show their distribution ranges in the objective space.

It is clear from the above experimental results that both RM-MEDA and GDE perform significantly better than PCX-NSGA-II and MIDEA on these four test instances. Since CR = 1 is set in GDE, an offspring y is a linear combination of three old solutions x_r1, x_r2, x_r3, and the PSs of these four test instances are line segments or a 2-D rectangle. If x_r1, x_r2 and x_r3 are close to the PS, so is y. Therefore, GDE exploits the regularity of these PSs, as RM-MEDA does, in a sense. In contrast, PCX-NSGA-II and MIDEA have no efficient mechanism for using the regularity. This could be a major reason why RM-MEDA and GDE are the winners.

GDE slightly outperforms RM-MEDA on F1, F2 and F4 in terms of the D-metric. The reason might be that RM-MEDA, like many other EDAs, does not directly use the location information of previous solutions in generating new solutions, which makes it poorer than GDE at refining a solution,
particularly when it is close to the PS. A possible way to overcome this shortcoming is to hybridize location information and globally statistical information, which has proved very successful in the guided mutation for scalar optimization in [46] and [47].

Fig. 9. The final nondominated fronts found by each algorithm on F3. The left panels show the nondominated fronts with the lowest D-metric obtained by each algorithm, while the right panels plot all the fronts found together.

Fig. 10. The final nondominated fronts found by each algorithm on F4. The left panels show the nondominated fronts with the lowest D-metric obtained by each algorithm, while the right panels plot all the fronts found together.

F3 is the hardest among these four test instances. The distribution of the Pareto optimal solutions in the objective space in this instance is very different from that in the three other ones. If we uniformly sample a number of points in the PS of F3 in the decision space, most of the corresponding Pareto optimal vectors in the objective space will be more likely to be in the left part of the PF. This makes it very difficult for an algorithm to approximate the whole PF. RM-MEDA performs much better than GDE on F3. This could be attributed to the fact that RM-MEDA does extension along the principal directions, so that it has a good chance to approximate the whole PF.

D. Test Instances with Nonlinear Variable Linkages

F5-F8 in Table I are test instances with nonlinear variable linkages. The PS of F5-F7 is a bounded continuous curve defined by:

x_i = x_1^2, i = 2, ..., n, 0 ≤ x_1 ≤ 1.

The PS of F8 is a 2-D bounded continuous surface defined by:

x_i = x_1^2, i = 3, ..., n, 0 ≤ x_1 ≤ 1, 0 ≤ x_2 ≤ 1.
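To make this construction concrete, the following Python sketch (our own illustration, not the authors' code; the base function g_zdt1 and all names are assumptions) introduces the nonlinear linkage by evaluating the distance function g of an unlinked ZDT-style instance on the mapped variables y_1 = x_1, y_i = x_i − x_1^2. Points on the curve x_i = x_1^2 are mapped onto y_i = 0, the minimizer of the original g, so they remain Pareto optimal in the linked instance.

```python
# Sketch: introducing a nonlinear variable linkage into a ZDT-style
# instance via the mapping y_1 = x_1, y_i = x_i - x_1^2 (i = 2,...,n).
# Function names are illustrative, not from the paper.

def g_zdt1(y):
    # Distance function of an unlinked ZDT1-like instance: minimized
    # (g = 1) exactly when y_2 = ... = y_n = 0. The abs() keeps g well
    # defined when mapped variables go negative (an assumption here).
    n = len(y)
    return 1.0 + 9.0 * sum(abs(v) for v in y[1:]) / (n - 1)

def nonlinear_map(x):
    # the nonlinear linkage described above: x_i -> x_i - x_1^2
    return [x[0]] + [xi - x[0] ** 2 for xi in x[1:]]

def g_linked(x):
    # g of the linked instance = g of the original on mapped variables
    return g_zdt1(nonlinear_map(x))

# A point on the PS curve x_i = x_1^2 (i >= 2) minimizes g_linked,
# while the PS of the unlinked problem (x_i = 0) no longer does.
x1 = 0.6
on_ps = [x1] + [x1 ** 2] * 4
off_ps = [x1] + [0.0] * 4
print(g_linked(on_ps))          # -> 1.0 (global minimum of g)
print(g_linked(off_ps) > 1.0)   # -> True
```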
The variable linkages in these instances [4] are obtained by performing the following nonlinear mapping on the variables in the original ZDT and DTLZ instances:

x_1 → x_1, x_i → x_i − x_1^2, i = 2, ..., n.

Fig. 11. The evolution of the average D-metric of the nondominated solutions in the current populations among 20 independent runs with the number of F-function evaluations in the four algorithms on F5.

Fig. 12. The evolution of the average D-metric of the nondominated solutions in the current populations among 20 independent runs with the number of F-function evaluations in the four algorithms on F6.

Fig. 13. The evolution of the average D-metric of the nondominated solutions in the current populations among 20 independent runs with the number of F-function evaluations in the four algorithms on F7.

Fig. 14. The evolution of the average D-metric of the nondominated solutions in the current populations among 20 independent runs with the number of F-function evaluations in the four algorithms on F8.

Figs. 11 to 14 present the evolution of the average D-metric of the nondominated solutions in the current population, and Figs. 15 to 18 plot the final nondominated fronts found by each algorithm for each instance. The experimental results show that RM-MEDA outperforms the three other algorithms on these four instances. In fact, only RM-MEDA is able to produce nondominated solutions which approximate the whole PF well, in all or some runs, for all the test instances. These results are not surprising, since only RM-MEDA considers modelling nonlinear variable linkages. In the three other algorithms, even if all the parent solutions are Pareto optimal, it is possible that their offspring is far from the PS. It is evident from Figs. 11
to 14 that PCX-NSGA-II and MIDEA are stuck in terms of the D-metric, which implies that these two algorithms may not be able to make any more improvement in their solution quality even if more computational time is used. This observation, in conjunction with the results of Deb et al. [9], suggests that reproduction operators are crucial in MOEAs. One should carefully design these operators for dealing with nonlinear variable linkages.

Fig. 17 suggests that F7 is the hardest instance for RM-MEDA. This might be due to the fact that the Pareto optimal solutions of F7 are not uniformly distributed, as in its linear-linkage counterpart F3, and RM-MEDA samples points uniformly around the PS in the decision variable space. To improve the performance of RM-MEDA on problems like F3 and F7, one may need to consider the distribution of solutions in the objective space when sampling solutions from the models.
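As a reminder of what is being measured in these comparisons: the D-metric is an inverted-generational-distance style quantity, the average distance from a set of roughly uniformly distributed points on the true PF to the nearest obtained nondominated solution, so it only drops when the obtained front is both close to the PF and well spread along it. A minimal sketch under that definition (the helper names are ours):

```python
from math import dist  # Euclidean distance between two points (3.8+)

def d_metric(reference_front, obtained_front):
    # Average, over each point v of a reference set P* on the true PF,
    # of the distance from v to its nearest point in the obtained set A.
    return sum(min(dist(v, a) for a in obtained_front)
               for v in reference_front) / len(reference_front)

# Toy check on a linear front f2 = 1 - f1:
ref = [(i / 10.0, 1 - i / 10.0) for i in range(11)]
good = ref          # an approximation covering the whole front
partial = ref[:3]   # converged, but covers only the left part of the PF
print(d_metric(ref, good))          # -> 0.0
print(d_metric(ref, partial) > 0)   # -> True: poor spread is penalized
```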
Fig. 15. The final nondominated fronts found by each algorithm on F5. The left panels show the nondominated fronts with the lowest D-metric obtained by each algorithm, while the right panels plot all the fronts found together.

Fig. 16. The final nondominated fronts found by each algorithm on F6. The left panels show the nondominated fronts with the lowest D-metric obtained by each algorithm, while the right panels plot all the fronts found together.

E. Test Instances with Many Local Pareto Fronts

There are nonlinear variable linkages in F9 and F10. Furthermore, these two instances have many local Pareto fronts, since their g(x) has many locally minimal points. The experimental results presented in Fig. 19 show that RM-MEDA can approximate the PF very well in each run and significantly outperforms the three other algorithms on F9. It is also evident from Fig. 20 that all four algorithms fail to converge to the global Pareto front of F10. The failure could be because, as Deb et al. have noted regarding the ability of their NSGA-II [6], it is not an easy task for these evolutionary algorithms to find the globally minimal point of g(x). To tackle MOPs with many local Pareto-optimal fronts, efficient mechanisms for the global optimization of a scalar objective may be worth tailoring and using in MOEAs.

VI. SENSITIVITY, SCALABILITY AND CPU-TIME COSTS

The experimental results in Section V have shown that, overall, RM-MEDA outperforms the three other algorithms, and GDE comes second, on these continuous MOP test instances with variable linkages. We have compared the sensitivity to population size in RM-MEDA and GDE and investigated the effect of the number of clusters on the performance of RM-MEDA on several test instances. We have studied how the computational cost, in terms of the number of F-function evaluations, increases as the number of decision variables increases in both RM-MEDA and GDE. We have also recorded the CPU time used by each
algorithm. Due to the limit on the paper length, however, only the experimental results on F5, which has nonlinear variable linkages, are presented in this section. The parameter settings in the experiments are presented in Tables II and III. Twenty independent runs have been conducted for each parameter setting.

Fig. 17. The final nondominated fronts found by each algorithm on F7. The left panels show the nondominated fronts with the lowest D-metric obtained by each algorithm, while the right panels plot all the fronts found together.

Fig. 18. The final nondominated fronts found by each algorithm on F8. The left panels show the nondominated fronts with the lowest D-metric obtained by each algorithm, while the right panels plot all the fronts found together.

A. Sensitivity to Population Size in RM-MEDA and GDE

To study how sensitive the performance of RM-MEDA and GDE is to the population size, we have tried different values of the population size in both algorithms for F5. Fig. 21 shows the average D-metric v.s. the number of F-function evaluations under the different population-size settings in RM-MEDA and GDE, respectively.

TABLE II
THE PARAMETER SETTINGS OF RM-MEDA FOR F5 IN SECTION VI
(one column for each of Sections VI.A, VI.B, VI.C and VI.D; rows: Population Size, K in Local PCA, # of Decision Variables, Max. # of F Evaluations)

TABLE III
THE PARAMETER SETTINGS OF GDE FOR F5 IN SECTION VI
(one column for each of Sections VI.A, VI.C and VI.D; rows: Population Size, CR, F, # of Decision Variables, Max. # of F Evaluations)
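The role of the CR and F settings listed in Table III can be checked directly. With CR = 1, a DE/rand/1 offspring is y = x_r1 + F(x_r2 − x_r3); its coefficients sum to 1 + F − F = 1, so y is an affine combination of its parents and stays inside any affine subspace, such as the linear PS of F1-F3, that contains them. A small self-contained check (illustrative code, not the GDE implementation):

```python
import random

def de_offspring(x1, x2, x3, F=0.5):
    # DE/rand/1 with CR = 1: every coordinate comes from the mutant,
    # y = x1 + F * (x2 - x3); coefficients 1 + F - F = 1 (affine).
    return [a + F * (b - c) for a, b, c in zip(x1, x2, x3)]

def on_line(x):
    # membership test for the linear PS x_1 = x_2 = ... = x_n of F1-F3
    return all(abs(v - x[0]) < 1e-9 for v in x)

random.seed(0)
# three parents lying exactly on the line x_1 = ... = x_5
parents = [[t] * 5 for t in (random.random() for _ in range(3))]
child = de_offspring(*parents)
print(on_line(child))   # -> True: the offspring stays on the PS line
```

Note that such an offspring can still leave the bounding hyperrectangle, which is where the boundary repair described in Section V comes in.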
Fig. 19. The final nondominated fronts found by each algorithm on F9. The left panels show the nondominated fronts with the lowest D-metric obtained by each algorithm, while the right panels plot all the fronts found together.

Fig. 20. The final nondominated fronts found by each algorithm on F10. The left panels show the nondominated fronts with the lowest D-metric obtained by each algorithm, while the right panels plot all the fronts found together.

RM-MEDA is much less sensitive to the setting of the population size than GDE on F5. In fact, RM-MEDA can, over a wide range of population sizes, lower the D-metric value below a low threshold within the given budget of function evaluations, and its convergence speed does not change much over this range. In contrast, the performance of GDE worsens considerably for small or large population sizes, and GDE could not lower the D-metric value below that threshold within the same budget with any of the population sizes tried.

B. Sensitivity to the Number of Clusters in RM-MEDA

We have tested a range of different cluster numbers in RM-MEDA. Fig. 22 presents the average D-metric values v.s. the number of F-function evaluations with the different cluster numbers on F5. As clearly shown in this figure, RM-MEDA is able to reduce the D-metric to a low level within the given budget of function evaluations over a wide middle range of cluster numbers. It is also evident from this figure that the convergence speed does not change dramatically over this range. Thus, we could claim that RM-MEDA is not very sensitive to the setting of the cluster number for MOP instances which are somehow similar to F5. We should point out that the range of appropriate cluster numbers in RM-MEDA is still problem-dependent.

The experimental results in Fig. 22 reveal that RM-MEDA performs poorly when the cluster number is 1 or very large. Part of the reason could be that the Local PCA reduces to classic PCA, and consequently underfits the population, when the cluster number is 1, and it could overfit in the case when
the cluster number is too large.

Fig. 21. The average D-metric value v.s. the number of F-function evaluations under the different settings of population sizes in RM-MEDA and GDE for F5.

C. Scalability of RM-MEDA and GDE

We have tried RM-MEDA and GDE with different numbers of decision variables. Fig. 23 presents the number of successful runs in which each algorithm reached each of four given levels of the D-metric within the given budget of function evaluations for F5 with different numbers of decision variables. The average numbers of function evaluations among the successful runs of each algorithm are also plotted in this figure. It can be seen that RM-MEDA succeeds in every run for all the numbers of decision variables. It is also clear that the number of function evaluations required for lowering the D-metric below each of the four levels scales up linearly with the number of decision variables in RM-MEDA. Although the average numbers of function evaluations among the successful runs in GDE also scale up linearly, its success rate decreases substantially as the number of decision variables increases. In fact, GDE cannot reduce the D-metric below the lowest level in any single run at the largest number of decision variables tested.

Fig. 22. The average D-metric value v.s. the number of F-function evaluations with different cluster numbers in RM-MEDA for F5.

Fig. 23. The average number of function evaluations among the successful runs for reaching each of four given levels of the D-metric v.s. the number of decision variables in the two algorithms for F5. The marked number is the number of successful runs. The solid line is for RM-MEDA, while the dashed line is for GDE.
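For intuition about the modelling-and-sampling step whose cluster number is studied above, here is a deliberately simplified, single-cluster sketch in the spirit of RM-MEDA (our own illustration, not the authors' implementation; all names are assumptions). It fits a 1-D principal line through a population, extends the projected range slightly, and samples with Gaussian noise around that line; RM-MEDA instead partitions the population into K clusters with Local PCA and builds one such local model per cluster.

```python
import random

def principal_direction(points, iters=200):
    # Dominant eigenvector of the sample covariance via power iteration.
    n = len(points[0])
    mean = [sum(p[i] for p in points) / len(points) for i in range(n)]
    centered = [[p[i] - mean[i] for i in range(n)] for p in points]
    v = [1.0] * n
    for _ in range(iters):
        w = [0.0] * n
        for row in centered:
            s = sum(r * u for r, u in zip(row, v))   # row . v
            for i in range(n):
                w[i] += row[i] * s                   # accumulate C v
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

def sample_along_manifold(points, count, extension=0.25, noise=0.01):
    # Project the population onto its principal line, extend the range a
    # little (as RM-MEDA extends along principal directions), then sample
    # uniformly on the line and perturb with Gaussian noise.
    mean, v = principal_direction(points)
    n = len(v)
    ts = [sum((p[i] - mean[i]) * v[i] for i in range(n)) for p in points]
    lo, hi = min(ts), max(ts)
    span = hi - lo
    lo, hi = lo - extension * span, hi + extension * span
    samples = []
    for _ in range(count):
        t = random.uniform(lo, hi)
        samples.append([mean[i] + t * v[i] + random.gauss(0, noise)
                        for i in range(n)])
    return samples

# Demo: a population scattered around the 1-D line x_1 = x_2 = x_3.
random.seed(1)
population = [[t / 19 + random.gauss(0, 0.02) for _ in range(3)]
              for t in range(20)]
samples = sample_along_manifold(population, 50)
print(all(max(s) - min(s) < 0.3 for s in samples))   # samples hug the line
```

Sampled points may leave the feasible hyperrectangle; in RM-MEDA such elements are repaired by the boundary-handling rule of Section V.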
The linear scalability of RM-MEDA on this test instance should be due to two facts: a) the PS of F5 is always a 1-D continuous curve, no matter how large its number of decision variables is, which may explain why the hardness of F5 does not increase exponentially as its number of decision variables increases; and b) in RM-MEDA, only 1-D Local PCA is needed for F5 and sampling is always performed along a 1-D curve, so the curse of dimensionality may not arise in this test instance. Note that many real-world continuous MOPs should meet the regularity condition. These
results imply that RM-MEDA could be more suitable for solving large-scale MOPs.

D. CPU-Time Cost

The good performance of RM-MEDA does not come without price. Modelling and reproduction in RM-MEDA are more complicated than genetic operators such as crossover and mutation used in other MOEAs. RM-MEDA needs more CPU time for running the Local PCA at each generation. The CPU time used by RM-MEDA and GDE for F5 with different numbers of decision variables is given in Table IV.

TABLE IV
THE CPU TIME (IN SECONDS) USED BY RM-MEDA AND GDE FOR F5 WITH DIFFERENT NUMBERS OF DECISION VARIABLES
(one column per number of decision variables; rows: GDE, RM-MEDA)

Since we use the settings in Section VI.C, the total number of function evaluations, and hence the number of generations, is the same in both algorithms; therefore, the Local PCA was run once per generation in RM-MEDA. From the above experimental results, we can conclude that although RM-MEDA is more time-consuming than GDE, it is still affordable in many applications, where the CPU time of each function evaluation is under half an hour and a cluster of computers is available. To significantly reduce the number of function evaluations in RM-MEDA and make it more applicable to MOPs with very expensive function evaluations, one may need to incorporate meta-modelling techniques into RM-MEDA.

VII. CONCLUSION

How to generate new trial solutions in multiobjective evolutionary optimization has not been well studied. Reproduction operators such as crossover and mutation, which were originally developed for scalar optimization, are directly used in most current multiobjective evolutionary algorithms. This could be one of the major reasons why these algorithms do not perform well on MOPs with variable linkages. In this paper, the regularity property of continuous MOPs is used as a basis for an estimation of distribution algorithm for dealing with variable linkages.
RM-MEDA, the proposed algorithm, models a promising area in the search space by a probability model whose centroid is a piecewise continuous manifold. The Local PCA algorithm is employed for building such a model. New trial solutions are sampled from the model thus built. Experimental studies have shown that, overall, RM-MEDA performs better than GDE, PCX-NSGA-II and MIDEA on a set of test instances with variable linkages. The sensitivity and scalability of RM-MEDA and GDE have also been experimentally studied.

We have found that RM-MEDA could perform slightly worse than GDE on some test instances with linear variable linkages. We argued that this could be because RM-MEDA does not directly use the location information of individual solutions for generating new trial solutions. The experimental results also reveal that RM-MEDA may fail on test instances with many local Pareto fronts. Future research topics along this line should include:

- Combination of the location information of individual solutions and globally statistical information for improving the ability of RM-MEDA to refine a solution. Guided mutation [46], [47] could be worth studying for this purpose.
- Incorporation of other techniques into RM-MEDA for improving its ability for global search. Effective global search techniques for scalar optimization should be considered.
- Combination of RM-MEDA with meta-modelling techniques [48] for reducing the number of function evaluations.
- Use of other machine learning techniques, particularly generative model learning for building distribution models, such as mixtures of probabilistic principal component analyzers [49], for building models in RM-MEDA.
- Use of experimental design techniques for improving sampling quality. Experimental design techniques have been used in the design of genetic operators.
- Extension of RM-MEDA to constrained and/or dynamic MOPs. We should exploit the properties of these MOPs in modifying or designing algorithms.
ACKNOWLEDGMENT

The authors would like to acknowledge the help of Mr. H. Li, Dr. T. Okabe, Dr. B. Sendhoff, and Professor E. Tsang on this work. They also thank Professor X. Yao, the anonymous reviewers, and the anonymous associate editor for their insightful comments and suggestions.

REFERENCES

[1] J. D. Schaffer, "Multiple objective optimization with vector evaluated genetic algorithms," in Proceedings of the 1st International Conference on Genetic Algorithms. Pittsburgh, PA, USA: Lawrence Erlbaum Associates, 1985, pp. 93-100.
[2] K. Deb, Multi-Objective Optimization using Evolutionary Algorithms. Baffins Lane, Chichester: John Wiley & Sons, Ltd, 2001.
[3] C. A. Coello Coello, D. A. van Veldhuizen, and G. B. Lamont, Evolutionary Algorithms for Solving Multi-Objective Problems. New York: Kluwer Academic Publishers, 2002.
[4] K. C. Tan, E. F. Khor, and T. H. Lee, Multiobjective Evolutionary Algorithms and Applications. New York: Springer-Verlag, 2005.
[5] J. Knowles and D. Corne, "Memetic algorithms for multiobjective optimization: issues, methods and prospects," in Recent Advances in Memetic Algorithms, Studies in Fuzziness and Soft Computing, vol. 166. Springer, 2005.
[6] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, 2002.
[7] E. Zitzler, M. Laumanns, and L. Thiele, "SPEA2: improving the strength Pareto evolutionary algorithm for multiobjective optimization," in Evolutionary Methods for Design, Optimisation and Control. Barcelona, Spain: CIMNE, 2002, pp. 95-100.
[8] C. A. Coello Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256-279, 2004.
[9] N. Srinivas and K. Deb, "Multiobjective optimization using nondominated sorting in genetic algorithms," Evolutionary Computation, vol. 2, no. 3, pp. 221-248, 1994.
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 12, NO. 1, FEBRUARY 2008
More informationA Novel Accurate Genetic Algorithm for Multivariable Systems
World Applied Sciences Journal 5 (): 137-14, 008 ISSN 1818-495 IDOSI Publications, 008 A Novel Accurate Genetic Algorithm or Multivariable Systems Abdorreza Alavi Gharahbagh and Vahid Abolghasemi Department
More informationSolving Multi-objective Optimisation Problems Using the Potential Pareto Regions Evolutionary Algorithm
Solving Multi-objective Optimisation Problems Using the Potential Pareto Regions Evolutionary Algorithm Nasreddine Hallam, Graham Kendall, and Peter Blanchfield School of Computer Science and IT, The Univeristy
More informationSPEA2+: Improving the Performance of the Strength Pareto Evolutionary Algorithm 2
SPEA2+: Improving the Performance of the Strength Pareto Evolutionary Algorithm 2 Mifa Kim 1, Tomoyuki Hiroyasu 2, Mitsunori Miki 2, and Shinya Watanabe 3 1 Graduate School, Department of Knowledge Engineering
More informationNCGA : Neighborhood Cultivation Genetic Algorithm for Multi-Objective Optimization Problems
: Neighborhood Cultivation Genetic Algorithm for Multi-Objective Optimization Problems Shinya Watanabe Graduate School of Engineering, Doshisha University 1-3 Tatara Miyakodani,Kyo-tanabe, Kyoto, 10-031,
More informationA Multi-objective Evolutionary Algorithm of Principal Curve Model Based on Clustering Analysis
A Multi-objective Evolutionary Algorithm of Principal Curve Model Based on Clustering Analysis Qiong Yuan1,2, Guangming Dai1,2* 1 School of Computer Science, China University of Geosciences, Wuhan 430074,
More informationApproximation-Guided Evolutionary Multi-Objective Optimization
Approximation-Guided Evolutionary Multi-Objective Optimization Karl Bringmann 1, Tobias Friedrich 1, Frank Neumann 2, Markus Wagner 2 1 Max-Planck-Institut für Informatik, Campus E1.4, 66123 Saarbrücken,
More informationPiecewise polynomial interpolation
Chapter 2 Piecewise polynomial interpolation In ection.6., and in Lab, we learned that it is not a good idea to interpolate unctions by a highorder polynomials at equally spaced points. However, it transpires
More informationInvestigating the Effect of Parallelism in Decomposition Based Evolutionary Many-Objective Optimization Algorithms
Investigating the Effect of Parallelism in Decomposition Based Evolutionary Many-Objective Optimization Algorithms Lei Chen 1,2, Kalyanmoy Deb 2, and Hai-Lin Liu 1 1 Guangdong University of Technology,
More informationParallel Multi-objective Optimization using Master-Slave Model on Heterogeneous Resources
Parallel Multi-objective Optimization using Master-Slave Model on Heterogeneous Resources Sanaz Mostaghim, Jürgen Branke, Andrew Lewis, Hartmut Schmeck Abstract In this paper, we study parallelization
More informationA Parallel Implementation of Multiobjective Particle Swarm Optimization Algorithm Based on Decomposition
25 IEEE Symposium Series on Computational Intelligence A Parallel Implementation of Multiobjective Particle Swarm Optimization Algorithm Based on Decomposition Jin-Zhou Li, * Wei-Neng Chen, Member IEEE,
More informationFinding a preferred diverse set of Pareto-optimal solutions for a limited number of function calls
Finding a preferred diverse set of Pareto-optimal solutions for a limited number of function calls Florian Siegmund, Amos H.C. Ng Virtual Systems Research Center University of Skövde P.O. 408, 541 48 Skövde,
More informationUsing Different Many-Objective Techniques in Particle Swarm Optimization for Many Objective Problems: An Empirical Study
International Journal of Computer Information Systems and Industrial Management Applications ISSN 2150-7988 Volume 3 (2011) pp.096-107 MIR Labs, www.mirlabs.net/ijcisim/index.html Using Different Many-Objective
More informationIndicator-Based Selection in Multiobjective Search
Indicator-Based Selection in Multiobjective Search Eckart Zitzler and Simon Künzli Swiss Federal Institute of Technology Zurich Computer Engineering and Networks Laboratory (TIK) Gloriastrasse 35, CH 8092
More informationAn External Archive Guided Multiobjective Evolutionary Approach Based on Decomposition for Continuous Optimization
IEEE Congress on Evolutionary Computation (CEC) July -,, Beijing, China An External Archive Guided Multiobjective Evolutionary Approach Based on Decomposition for Continuous Optimization Yexing Li School
More informationExperimental Study on Bound Handling Techniques for Multi-Objective Particle Swarm Optimization
Experimental Study on Bound Handling Techniques for Multi-Objective Particle Swarm Optimization adfa, p. 1, 2011. Springer-Verlag Berlin Heidelberg 2011 Devang Agarwal and Deepak Sharma Department of Mechanical
More informationDifficulty Adjustable and Scalable Constrained Multi-objective Test Problem Toolkit
JOURNAL OF LATEX CLASS FILES, VOL. 4, NO. 8, AUGUST 5 Difficulty Adjustable and Scalable Constrained Multi-objective Test Problem Toolkit Zhun Fan, Senior Member, IEEE, Wenji Li, Xinye Cai, Hui Li, Caimin
More informationParallel Multi-objective Optimization using Master-Slave Model on Heterogeneous Resources
Parallel Multi-objective Optimization using Master-Slave Model on Heterogeneous Resources Author Mostaghim, Sanaz, Branke, Jurgen, Lewis, Andrew, Schmeck, Hartmut Published 008 Conference Title IEEE Congress
More informationParticle Swarm Optimization to Solve Optimization Problems
Particle Swarm Optimization to Solve Optimization Problems Gregorio Toscano-Pulido and Carlos A. Coello Coello Evolutionary Computation Group at CINVESTAV-IPN (EVOCINV) Electrical Eng. Department, Computer
More informationR2-IBEA: R2 Indicator Based Evolutionary Algorithm for Multiobjective Optimization
R2-IBEA: R2 Indicator Based Evolutionary Algorithm for Multiobjective Optimization Dũng H. Phan Department of Computer Science University of Massachusetts, Boston Boston, MA 02125, USA Email: phdung@cs.umb.edu
More informationMulti-objective single agent stochastic search in non-dominated sorting genetic algorithm
Nonlinear Analysis: Modelling and Control, 23, Vol. 8, No. 3, 293 33 293 Multi-objective single agent stochastic search in non-dominated sorting genetic algorithm Algirdas Lančinskas a, Pilar Martinez
More informationEvolutionary Computation
Evolutionary Computation Lecture 9 Mul+- Objec+ve Evolu+onary Algorithms 1 Multi-objective optimization problem: minimize F(X) = ( f 1 (x),..., f m (x)) The objective functions may be conflicting or incommensurable.
More informationDevelopment of Evolutionary Multi-Objective Optimization
A. Mießen Page 1 of 13 Development of Evolutionary Multi-Objective Optimization Andreas Mießen RWTH Aachen University AVT - Aachener Verfahrenstechnik Process Systems Engineering Turmstrasse 46 D - 52056
More informationImproved S-CDAS using Crossover Controlling the Number of Crossed Genes for Many-objective Optimization
Improved S-CDAS using Crossover Controlling the Number of Crossed Genes for Many-objective Optimization Hiroyuki Sato Faculty of Informatics and Engineering, The University of Electro-Communications -5-
More informationAn Improved Progressively Interactive Evolutionary Multi-objective Optimization Algorithm with a Fixed Budget of Decision Maker Calls
An Improved Progressively Interactive Evolutionary Multi-objective Optimization Algorithm with a Fixed Budget of Decision Maker Calls Ankur Sinha, Pekka Korhonen, Jyrki Wallenius Firstname.Secondname@aalto.fi,
More informationAn Evolutionary Many Objective Optimisation Algorithm with Adaptive Region Decomposition
1 An Evolutionary Many Objective Optimisation Algorithm with Adaptive Region Decomposition Hai-Lin Liu, Lei Chen, Qingfu Zhang, Senior Member, IEEE and Kalyanmoy Deb, Fellow, IEEE COIN Report Number 016014
More informationMultiobjective Formulations of Fuzzy Rule-Based Classification System Design
Multiobjective Formulations of Fuzzy Rule-Based Classification System Design Hisao Ishibuchi and Yusuke Nojima Graduate School of Engineering, Osaka Prefecture University, - Gakuen-cho, Sakai, Osaka 599-853,
More informationReflection and Refraction
Relection and Reraction Object To determine ocal lengths o lenses and mirrors and to determine the index o reraction o glass. Apparatus Lenses, optical bench, mirrors, light source, screen, plastic or
More informationEffects of Discrete Design-variable Precision on Real-Coded Genetic Algorithm
Effects of Discrete Design-variable Precision on Real-Coded Genetic Algorithm Toshiki Kondoh, Tomoaki Tatsukawa, Akira Oyama, Takeshi Watanabe and Kozo Fujii Graduate School of Engineering, Tokyo University
More informationCombining Model-based and Genetics-based Offspring Generation for Multi-objective Optimization Using a Convergence Criterion
Combining Model-based and Genetics-based Offspring Generation for Multi-objective Optimization Using a Convergence Criterion Aimin Zhou, Yaochu Jin, Qingfu Zhang, Bernhard Sendhoff, Edward Tsang Abstract
More informationEvolutionary Multitasking for Multiobjective Continuous Optimization: Benchmark Problems, Performance Metrics and Baseline Results
Evolutionary Multitasking for Multiobjective Continuous Optimization: Benchmark Problems, Performance Metrics and Baseline Results arxiv:76.766v [cs.ne] 8 Jun 7 Yuan Yuan, Yew-Soon Ong, Liang Feng, A.K.
More informationMulti-objective optimization using Trigonometric mutation multi-objective differential evolution algorithm
Multi-objective optimization using Trigonometric mutation multi-objective differential evolution algorithm Ashish M Gujarathi a, Ankita Lohumi, Mansi Mishra, Digvijay Sharma, B. V. Babu b* a Lecturer,
More informationGeneralized Multiobjective Evolutionary Algorithm Guided by Descent Directions
DOI.7/s852-4-9255-y Generalized Multiobjective Evolutionary Algorithm Guided by Descent Directions Roman Denysiuk Lino Costa Isabel Espírito Santo Received: August 23 / Accepted: 9 March 24 Springer Science+Business
More informationMobile Robot Static Path Planning Based on Genetic Simulated Annealing Algorithm
Mobile Robot Static Path Planning Based on Genetic Simulated Annealing Algorithm Wang Yan-ping 1, Wubing 2 1. School o Electric and Electronic Engineering, Shandong University o Technology, Zibo 255049,
More informationNeural Network Regularization and Ensembling Using Multi-objective Evolutionary Algorithms
Neural Network Regularization and Ensembling Using Multi-objective Evolutionary Algorithms Yaochu Jin Honda Research Institute Europe Carl-Legien-Str 7 Offenbach, GERMANY Email: yaochujin@honda-ride Tatsuya
More informationLate Parallelization and Feedback Approaches for Distributed Computation of Evolutionary Multiobjective Optimization Algorithms
Late arallelization and Feedback Approaches for Distributed Computation of Evolutionary Multiobjective Optimization Algorithms O. Tolga Altinoz Department of Electrical and Electronics Engineering Ankara
More informationCHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM
20 CHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM 2.1 CLASSIFICATION OF CONVENTIONAL TECHNIQUES Classical optimization methods can be classified into two distinct groups:
More informationEvolutionary Multi-Objective Optimization of Trace Transform for Invariant Feature Extraction
Evolutionary Multi-Objective Optimization of Trace Transform for Invariant Feature Extraction Wissam A. Albukhanajer, Yaochu Jin, Johann Briffa and Godfried Williams Nature Inspired Computing and Engineering
More informationImproving interpretability in approximative fuzzy models via multi-objective evolutionary algorithms.
Improving interpretability in approximative fuzzy models via multi-objective evolutionary algorithms. Gómez-Skarmeta, A.F. University of Murcia skarmeta@dif.um.es Jiménez, F. University of Murcia fernan@dif.um.es
More informationOptimizing Delivery Time in Multi-Objective Vehicle Routing Problems with Time Windows
Optimizing Delivery Time in Multi-Objective Vehicle Routing Problems with Time Windows Abel Garcia-Najera and John A. Bullinaria School of Computer Science, University of Birmingham Edgbaston, Birmingham
More informationCommunication Strategies in Distributed Evolutionary Algorithms for Multi-objective Optimization
CONTI 2006 The 7 th INTERNATIONAL CONFERENCE ON TECHNICAL INFORMATICS, 8-9 June 2006, TIMISOARA, ROMANIA Communication Strategies in Distributed Evolutionary Algorithms for Multi-objective Optimization
More informationAn Interactive Evolutionary Multi-Objective Optimization Method Based on Progressively Approximated Value Functions
An Interactive Evolutionary Multi-Objective Optimization Method Based on Progressively Approximated Value Functions Kalyanmoy Deb, Ankur Sinha, Pekka Korhonen, and Jyrki Wallenius KanGAL Report Number
More informationPerformance Evaluation of Vector Evaluated Gravitational Search Algorithm II
170 New Trends in Software Methodologies, Tools and Techniques H. Fujita et al. (Eds.) IOS Press, 2014 2014 The authors and IOS Press. All rights reserved. doi:10.3233/978-1-61499-434-3-170 Performance
More informationminimizing minimizing
The Pareto Envelope-based Selection Algorithm for Multiobjective Optimization David W. Corne, Joshua D. Knowles, Martin J. Oates School of Computer Science, Cybernetics and Electronic Engineering University
More informationUnsupervised Feature Selection Using Multi-Objective Genetic Algorithms for Handwritten Word Recognition
Unsupervised Feature Selection Using Multi-Objective Genetic Algorithms for Handwritten Word Recognition M. Morita,2, R. Sabourin 3, F. Bortolozzi 3 and C. Y. Suen 2 École de Technologie Supérieure, Montreal,
More informationMulti-objective Optimization
Some introductory figures from : Deb Kalyanmoy, Multi-Objective Optimization using Evolutionary Algorithms, Wiley 2001 Multi-objective Optimization Implementation of Constrained GA Based on NSGA-II Optimization
More informationImproved Crowding Distance for NSGA-II
Improved Crowding Distance for NSGA-II Xiangxiang Chu and Xinjie Yu Department of Electrical Engineering, Tsinghua University, Beijing84, China Abstract:Non-dominated sorting genetic algorithm II (NSGA-II)
More informationMechanical Component Design for Multiple Objectives Using Elitist Non-Dominated Sorting GA
Mechanical Component Design for Multiple Objectives Using Elitist Non-Dominated Sorting GA Kalyanmoy Deb, Amrit Pratap, and Subrajyoti Moitra Kanpur Genetic Algorithms Laboratory (KanGAL) Indian Institute
More informationEliteNSGA-III: An Improved Evolutionary Many- Objective Optimization Algorithm
EliteNSGA-III: An Improved Evolutionary Many- Objective Optimization Algorithm Amin Ibrahim, IEEE Member Faculty of Electrical, Computer, and Software Engineering University of Ontario Institute of Technology
More informationSIMULATION OPTIMIZER AND OPTIMIZATION METHODS TESTING ON DISCRETE EVENT SIMULATIONS MODELS AND TESTING FUNCTIONS
SIMULATION OPTIMIZER AND OPTIMIZATION METHODS TESTING ON DISCRETE EVENT SIMULATIONS MODELS AND TESTING UNCTIONS Pavel Raska (a), Zdenek Ulrych (b), Petr Horesi (c) (a) Department o Industrial Engineering
More informationLocating the boundaries of Pareto fronts: A Many-Objective Evolutionary Algorithm Based on Corner Solution Search
Locating the boundaries of Pareto fronts: A Many-Objective Evolutionary Algorithm Based on Corner Solution Search Xinye Cai, Haoran Sun, Chunyang Zhu, Zhenyu Li, Qingfu Zhang arxiv:8.297v [cs.ai] 8 Jun
More informationSPEA2: Improving the strength pareto evolutionary algorithm
Research Collection Working Paper SPEA2: Improving the strength pareto evolutionary algorithm Author(s): Zitzler, Eckart; Laumanns, Marco; Thiele, Lothar Publication Date: 2001 Permanent Link: https://doi.org/10.3929/ethz-a-004284029
More informationAn Empirical Comparison of Several Recent Multi-Objective Evolutionary Algorithms
An Empirical Comparison of Several Recent Multi-Objective Evolutionary Algorithms Thomas White 1 and Shan He 1,2 1 School of Computer Science 2 Center for Systems Biology, School of Biological Sciences,
More informationAn efficient multi-objective optimization algorithm based on swarm intelligence for engineering design
Engineering Optimization Vol. 39, No. 1, January 2007, 49 68 An efficient multi-objective optimization algorithm based on swarm intelligence for engineering design M. JANGA REDDY and D. NAGESH KUMAR* Department
More information