Multi-Objective Optimization with Iterated Density Estimation Evolutionary Algorithms using Mixture Models

Dirk Thierens, Institute of Information and Computing Sciences, Utrecht University, The Netherlands
Peter Bosman, Institute of Information and Computing Sciences, Utrecht University, The Netherlands

Abstract

We propose an algorithm for multi-objective optimization based on iterated density estimation evolutionary algorithms (MIDEA). By making use of the Pareto dominance concept in the selection step, the algorithm searches for the Pareto front. The MIDEA algorithm builds a mixture distribution to model the dependencies in the population at each generation. This not only allows for a powerful representation of complicated dependencies, but also provides an elegant way to keep the diversity in the population that is needed to cover the Pareto front. As a specific example we implement an algorithm with a normal mixture model and apply it to real-valued multi-objective optimization problems.

1 Introduction

In classical evolutionary computation, search is driven by two interacting processes: selection focuses the search on the more promising points in the search space, while mutation and crossover try to generate new and better points from these selected solutions. Efficient exploration requires that some information about what makes the parents good solutions is transferred to the offspring solutions. If there were no correlation between the fitness of the parents and the offspring, the search process would essentially be an unbiased random walk. Whether or not information is passed between parents and offspring depends on the representation and the accompanying exploration operators. For mutation this is usually accomplished by letting it take small randomized steps in the local neighbourhood of the parent solution. Crossover recombines parts of two parent solutions, which results in a more globally oriented exploration step. However, this broader exploration requires a careful choice of genotype representation and crossover operator. A common practice in the design of evolutionary search algorithms is to develop a number of representations and operators using prior domain knowledge, and to pick the best one after a considerable amount of experimental runs. An alternative to this labour-intensive task is to try to learn the structure of the search landscape automatically, an approach often called linkage learning (see for instance [6]). In a similar effort to learn the structure of the problem representation, a number of researchers have taken a more probabilistic view of the evolutionary search process [9, 10, 11, 14, 16, 17, 18, 19, 20, 22, 23, 24]. The general idea here is to build a probabilistic model of the current population and learn the structure of the problem representation by inducing the dependence structure of the problem variables. The exploration operators mutation and crossover are replaced by generating new samples according to this probabilistic model (for a survey see [21]). In [1] we have given a general algorithmic framework for this paradigm, called the iterated density estimation evolutionary algorithm (IDEA). In this paper we propose an algorithm for multi-objective optimization within the IDEA framework, called MIDEA.
The probabilistic model built is a mixture distribution. It not only gives us a powerful and computationally tractable representation for modelling the dependencies in the population, but also provides us with an elegant way to keep the diversity in the population that is needed to cover the Pareto front. In the next section we discuss some concepts from multi-objective evolutionary computation. Thereafter we introduce the multi-objective mixture-based IDEA: MIDEA. Finally we give an example by implementing an algorithm with a normal mixture model and applying it to real-valued multi-objective optimization problems.

2 Multi-objective optimization

Optimization is generally considered to be a search process for optimal or near-optimal solutions in some search space, where it is implicitly assumed that, given any two arbitrary solutions, one can always tell which solution is preferred. However, in practice there are many problems where such a single preference criterion does not exist. Moreover, it is not always possible to rank the solutions when only considering the optimization objectives. The problem is that the objective functions oppose each other, meaning that an optimal solution to all objectives cannot be found by simply optimizing each objective independently of the others. In multi-objective optimization problems, different objective functions have to be optimized simultaneously. A typical example of a multi-objective optimization problem is the design of a product, where one objective is the quality of the design and another objective is its cost. Minimizing the cost will in general lead to products of inferior quality, while maximizing the quality will give rise to an increasing cost. A key characteristic of multi-objective optimization problems is the existence of whole sets of solutions that cannot be ordered in terms of preference when only considering the objective function values. To formalize this we define a number of relevant concepts. Suppose we have a problem with $k$ objective functions $f_i(x),\ i = 1,\ldots,k$ which, without loss of generality, should all be minimized.

1. Pareto dominance: a solution $x$ is said to dominate a solution $y$ (written $x \succ y$) iff

$$\forall i \in \{1,\ldots,k\} : f_i(x) \leq f_i(y) \;\wedge\; \exists i \in \{1,\ldots,k\} : f_i(x) < f_i(y).$$

2. Pareto optimality: a solution $x$ is said to be Pareto optimal iff $\neg\exists y : y \succ x$.

3. Pareto optimal set: the set $\mathcal{PS}$ of all Pareto optimal solutions: $\mathcal{PS} = \{x : \neg\exists y : y \succ x\}$.

4. Pareto front: the set $\mathcal{PF}$ of objective function values of all Pareto optimal solutions: $\mathcal{PF} = \{f(x) = (f_1(x),\ldots,f_k(x)) \mid x \in \mathcal{PS}\}$.

Multi-objective problems have been tackled with different solution strategies:

1. The most straightforward approach is to transform the multi-objective problem into a single-objective optimization problem by aggregating the different objectives into one single criterion. This is typically done by taking a weighted sum of the objectives. The great advantage of this approach is that one can simply use any single-objective optimization algorithm to solve it. The result, however, is very dependent on the choice of weighting factors. In addition, it is hard, or even impossible, to find certain points on the Pareto front when this front is not convex.

2. A second solution strategy is to optimize according to a single objective during one cycle and to switch between the objectives during the whole optimization process. Here too one can use any single-objective optimization algorithm. A disadvantage is that the search tends to focus on, and switch between, the extreme positions on the Pareto front where one of the single objectives is optimal.

3. A third strategy is to search for the Pareto front, or for a representative set of Pareto optimal solutions, by making use of the Pareto dominance concept. The idea is to maintain a population of solutions that covers the entire Pareto front. The notion of searching a search space by maintaining a population of solutions is a key characteristic of evolutionary algorithms, which makes them natural candidates as multi-objective optimization algorithms. The field of evolutionary multi-objective optimization has indeed seen an explosive growth in recent years (for a survey see [8], [5]).
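As a minimal illustration of the dominance definitions above, the following Python sketch (the function names are ours, not the paper's) tests whether one objective vector dominates another and filters a set of objective vectors down to its non-dominated subset:

```python
import numpy as np

def dominates(fx, fy):
    """True iff objective vector fx Pareto-dominates fy
    (all objectives are to be minimized)."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def non_dominated(front):
    """Keep only the objective vectors not dominated by any other vector."""
    return [f for f in front
            if not any(dominates(g, f) for g in front if g is not f)]

# Example: (1, 2) dominates (2, 3); (2, 1) is incomparable with both.
print(dominates((1, 2), (2, 3)))                # True
print(non_dominated([(1, 2), (2, 3), (2, 1)]))  # [(1, 2), (2, 1)]
```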
3 The multi-objective mixture-based IDEA

In this section we describe the algorithmic component of this paper. First, we review the IDEA framework as applied to single-criterion optimization problems. In section 3.2, we formalize the IDEA framework. We then note how we can find a conditional factorization as a structure, given a vector of selected samples, in section 3.3, and show how we can use the normal pdf in conditional factorizations in section 3.4. Clustered structures are proposed in section 3.5 by using a mixture distribution as probabilistic model. Finally, we introduce the mixture-based multi-objective optimization algorithm MIDEA in section 3.6.

3.1 The IDEA

The IDEA is a framework for Iterated Density Estimation Evolutionary Algorithms that use probabilistic models in evolutionary optimization [1, 2]. Before we can formalize it, we need to define probabilistic models. To this end, we take the elementary building block of probabilistic models to be the probability density function (pdf). We define a probabilistic model $\mathcal{M}$ to consist of some structure $\varsigma$ and a vector $\theta$ of parameters for the pdfs implied by $\varsigma$. We write $\mathcal{M} = (\varsigma, \theta)$, and we write the probability distribution described by $\mathcal{M}$ as $P_{\mathcal{M}}$. The pdf to fit over every factor implied by $\varsigma$ is chosen beforehand, and the way in which the parameters are fitted is also predefined. We denote the parameter vector obtained in this manner by $\theta^{\mathrm{fit}}_{\varsigma}$, and the resulting probability distribution by $\hat{P}_{\varsigma}$. We assume that the dimensionality of our problem is $l$ and write $\mathcal{L} = (0, 1, \ldots, l-1)$. We furthermore assume that we have a continuous optimization problem $C(y_0, y_1, \ldots, y_{l-1}) = C(y\langle\mathcal{L}\rangle)$. Without loss of generality, we assume that we want to minimize $C(y\langle\mathcal{L}\rangle)$. For every problem variable $y_i$, we introduce a continuous random variable $Y_i$ and get $\mathbf{Y} = Y\langle\mathcal{L}\rangle$. Without any prior information on $C(y\langle\mathcal{L}\rangle)$, we might as well take a uniform distribution over $\mathbf{Y}$ (assuming a bounded continuous search space). Therefore, we generate an initial (population) vector of $n$ samples at random. Now let $P^{\theta}(\mathbf{Y})$ be the probability distribution that is uniform over all vectors $y\langle\mathcal{L}\rangle$ with $C(y\langle\mathcal{L}\rangle) \leq \theta$. Sampling from $P^{\theta}(\mathbf{Y})$ gives more samples that evaluate to a value below $\theta$. Moreover, if we know $\theta^* = \min_{y\langle\mathcal{L}\rangle}\{C(y\langle\mathcal{L}\rangle)\}$, a single sample from $P^{\theta^*}(\mathbf{Y})$ gives an optimal solution. To use this in an iterated algorithm, we select $\lfloor\tau n\rfloor$ samples in each iteration $t$ and let $\theta_t$ be the worst selected sample cost. We then estimate the distribution of the selected samples and thereby find $\hat{P}^{\theta_t}_{\varsigma}(\mathbf{Y})$ as an approximation to the true distribution $P^{\theta_t}(\mathbf{Y})$. New samples can then be drawn from $\hat{P}^{\theta_t}_{\varsigma}(\mathbf{Y})$ and used to replace some of the current samples.

3.2 Algorithmic framework

The definition of the IDEA framework is given in the pseudocode below. In the IDEA, we have $\mathcal{N}_{\tau} = (0, 1, \ldots, \lfloor\tau n\rfloor - 1)$ and $\tau \in [\frac{1}{n}, 1]$; sel() is the selection operator, rep() replaces a subset of $\mathcal{P}$ with a subset of $\mathcal{O}$, ter() is the termination condition, sea() is a structure search algorithm, est() estimates the model parameters, and sam() generates a single sample using the estimated pdfs.

If we set $m$ to $n - \lfloor\tau n\rfloor$, sel() to selection by taking the best $\lfloor\tau n\rfloor$ vectors, and rep() to replacing the worst $n - \lfloor\tau n\rfloor$ vectors by the new samples, we have that $\theta_{t+1} = \theta_t - \varepsilon$ with $\varepsilon \geq 0$. This assures that the search for $\theta^*$ is conveyed through a monotonically decreasing series $\theta_0 \geq \theta_1 \geq \ldots \geq \theta_{t_{end}}$. We call an IDEA with $m$, sel() and rep() so chosen, a monotonic IDEA.

IDEA(n, τ, m, sel(), rep(), ter(), sea(), est(), sam())
1   Initialize an empty vector of samples: P ← ()
2   Add and evaluate n random samples: for i ← 0 to n−1 do
    2.1   P ← P ⌢ NEWRANDOMVECTOR()
    2.2   c[P_i] ← C(P_i)
3   Initialize the iteration counter: t ← 0
4   Iterate until termination: while ¬ter() do
    4.1   Select ⌊τn⌋ samples: (y⁰⟨L⟩, y¹⟨L⟩, …, y^⌊τn⌋−1⟨L⟩) ← sel()
    4.2   Set θ_t to the worst selected cost: θ_t ← c[y^k⟨L⟩] such that ∀ i ∈ N_τ : c[y^i⟨L⟩] ≤ c[y^k⟨L⟩]
    4.3   Search for a structure: ς ← sea()
    4.4   Estimate the parameters: θ_ς^fit ← est()
    4.5   Create an empty vector of new samples: O ← ()
    4.6   Draw m new samples from P̂_ς(Y): for i ← 0 to m−1 do
        4.6.1   O ← O ⌢ sam()
    4.7   Replace a part of P with a part of O: rep()
    4.8   Evaluate the new samples in P: for each unevaluated P_i do
        4.8.1   c[P_i] ← C(P_i)
    4.9   Update the iteration counter: t ← t + 1
5   Denote the required number of iterations by t_end: t_end ← t
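To make the flow of this pseudocode concrete, here is a minimal Python sketch of a monotonic IDEA loop. The callables sea, est, sam and ter stand in for the components defined above; their signatures, and the uniform initialization bounds, are our own assumptions rather than part of the framework definition:

```python
import numpy as np

def idea(C, l, n, tau, m, ter, sea, est, sam, lo=-1.0, hi=1.0, seed=None):
    """Monotonic IDEA sketch: minimize C over [lo, hi]^l.

    Assumed signatures: sea(selected) -> structure,
    est(structure, selected) -> model, sam(model, rng) -> one new vector,
    ter(t, cost) -> bool."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, size=(n, l))             # steps 1-2: n random samples
    cost = np.apply_along_axis(C, 1, P)              # step 2.2: evaluate
    t = 0
    while not ter(t, cost):                          # step 4
        order = np.argsort(cost)
        selected = P[order[: int(tau * n)]]          # step 4.1: best ⌊τn⌋ samples
        theta_t = cost[order[int(tau * n) - 1]]      # step 4.2 (tracked only)
        model = est(sea(selected), selected)         # steps 4.3-4.4
        O = np.array([sam(model, rng) for _ in range(m)])   # step 4.6
        P[order[-m:]] = O                            # step 4.7: replace worst m
        cost[order[-m:]] = np.apply_along_axis(C, 1, O)     # step 4.8
        t += 1                                       # step 4.9
    return P, cost, t
```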
3.3 Learning conditional factorizations

In step 4.3 of the IDEA, we search for a structure. In this section, we focus only on the specific structure of a conditional factorization. A factorization implies a product of pdfs. If such a product is a function of the variables $y\langle a\rangle$, it is a valid structure over the random variables $Y\langle a\rangle$. We can state that variable $Y_i$ is either conditionally dependent on variables $Y\langle a\rangle$ with $i \notin a$, or it is not. By identifying a vertex with each variable $Y_i$ and an arc $(Y_i, Y_j)$ if and only if $Y_j$ is conditionally dependent on $Y_i$, we get the conditional factorization graph. A conditional factorization is valid if and only if its factorization graph is acyclic. The resulting probability distribution is the product of $l$ conditional pdfs in which each variable is conditioned on at most $l-1$ parents. We have to learn such a conditional factorization from the vector of selected samples. To this end, a variety of approaches can be taken [2]. We use an incremental algorithm that starts from the empty graph with no arcs. In each iteration, the arc to add is selected as the arc that increases some metric the most. If no addition of any arc further increases the metric, the final factorization graph has been found. The metric that we use in this paper is commonly known as the Bayesian Information Criterion (BIC). For a derivation of this metric in the IDEA context, we refer the reader to a more detailed report [3]. Let $S = (y^0, y^1, \ldots, y^{|S|-1})$ be the selected vector of $l$-dimensional samples. The BIC metric is parameterized by a regularization parameter $\lambda$ that determines the amount of penalization of more complex models in order to favour simpler models. The BIC metric is defined by:

$$\underbrace{-\ln\!\left(L(S \mid \hat{P}_{\mathcal{M}}(\mathbf{Y}))\right)}_{\mathrm{Error}(\hat{P}_{\mathcal{M}}(\mathbf{Y}) \mid S)} \; + \; \underbrace{\lambda \ln(|S|)\,|\theta|}_{\mathrm{Complexity}(\hat{P}_{\mathcal{M}}(\mathbf{Y}) \mid S)} \qquad (1)$$

In equation (1), we have used the likelihood of $\mathcal{M}$:

$$L(S \mid \hat{P}_{\mathcal{M}}(\mathbf{Y})) = \prod_{i=0}^{|S|-1} \hat{P}_{\mathcal{M}}(\mathbf{Y})(y^i) \qquad (2)$$

3.4 Gaussian pdfs

A widely used parametric pdf is the normal pdf. The sample average in dimension $j$ is $\bar{Y}_j = \frac{1}{|S|}\sum_{i=0}^{|S|-1}(y^i)_j$. The sample covariance matrix over the variables $Y\langle a\rangle$ is $S' = \sum_{i=0}^{|S|-1}(y^i\langle a\rangle - \bar{Y}\langle a\rangle)(y^i\langle a\rangle - \bar{Y}\langle a\rangle)^T$; we write $s_{ij} = \frac{1}{|S|}S'(i,j)$. To compute the BIC metric, we are required to evaluate the likelihood. However, the average negative logarithm of the likelihood can be seen to be equal to the entropy of the normal pdf [4]. For this pdf, the entropy can be evaluated significantly faster than the likelihood. The required conditional pdf and the entropy can be stated as follows [2]:

$$f_N(y_{a_0} \mid y\langle a\rangle \setminus \{y_{a_0}\}) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(y_{a_0} - \mu)^2}{2\sigma^2}} \qquad (3)$$

where $\sigma$ and $\mu$ are the conditional standard deviation and mean of $Y_{a_0}$ given the remaining variables, computed from the sample covariances $s_{ij}$, and

$$h(Y\langle a\rangle) = \frac{1}{2}\left(|a| + \ln\!\left((2\pi)^{|a|}\det(s)\right)\right) \qquad (4)$$
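The entropy shortcut of equation (4) is easy to exploit in code. The sketch below (the helper names and the value of λ are our own assumptions) scores a full multivariate normal model by BIC, using |S| times the entropy in place of the negative log-likelihood, which is exact for a maximum-likelihood Gaussian fit:

```python
import numpy as np

def normal_entropy(cov):
    """Equation (4): h(Y<a>) = 1/2 * (|a| + ln((2*pi)^|a| * det(cov)))."""
    a = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)   # numerically robust log-determinant
    return 0.5 * (a + a * np.log(2 * np.pi) + logdet)

def bic_full_normal(samples, lam=0.5):
    """BIC of one unfactorized normal fitted to `samples` (rows = samples).
    lam plays the role of the regularization parameter in equation (1);
    its value here is an assumption, not taken from the paper."""
    m, l = samples.shape
    cov = np.cov(samples, rowvar=False, bias=True)   # maximum-likelihood fit
    n_params = l + l * (l + 1) // 2                  # means plus covariances
    neg_log_lik = m * normal_entropy(np.atleast_2d(cov))
    return neg_log_lik + lam * np.log(m) * n_params
```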

3.5 Factorization mixtures by clustering

The structure of the sample vector may be highly non-linear. This non-linearity can force us to use probabilistic models of a high complexity to retain some of it. However, especially when using relatively simple pdfs such as the normal pdf, the non-linear interactions cannot be captured even with higher-order models. The key issue is the use of clusters. Clusters allow us to efficiently break up non-linear interactions so that we can use simple models to get an adequate representation of the sample vector. Furthermore, computationally efficient clustering algorithms exist that provide useful results. A short survey [3] has shown that the randomized Euclidean leader algorithm is both fast and useful for IDEAs. Each cluster is processed separately in order to fit a probability distribution over it. We let $k$ be the number of clusters and let $K = (0, 1, \ldots, k-1)$. In general, we write $f$ for a factorization; for a mixture of factorizations, we write $f\langle K\rangle$. The resulting probability distribution is a weighted sum of the individual probability distributions over each cluster:

$$\hat{P}_{f\langle K\rangle}(\mathbf{Y}) = \sum_{i=0}^{|K|-1} \beta_i\, \hat{P}^i_{f_i}(\mathbf{Y}) \qquad (5)$$

An effective way to set the mixture coefficients $\beta_i$ is to proportionally assign larger coefficients to the clusters with a better average cost. By taking the absolute value of the difference between the average cluster cost and the average initial sample vector cost, we allow for both maximization and minimization problems. In the randomized Euclidean leader algorithm, we use the normalized Euclidean distance measure. The leader algorithm is one of the fastest clustering algorithms: it goes over the sample vector exactly once. The first sample to make a new cluster is appointed to be its leader. For each sample it encounters, the algorithm finds the first cluster that has a leader closer to the sample than a given threshold $T_d$. If no such cluster can be found, a new cluster is created containing only this single sample. The order in which the clusters are inspected is randomized, both to prevent the first clusters from becoming quite a lot larger than the later ones and to avoid problems with two clusters to which some samples are equally close.

3.6 Multi-objective mixture-based IDEA

The algorithm discussed so far is still a single-objective optimization algorithm. To change it into a multi-objective, Pareto-covering optimization algorithm, we need to make the following two changes (a sketch of the resulting selection step follows after this list):

1. First, we have to search for the Pareto front: in step 4.1 of the IDEA framework, selection picks out the best $\lfloor\tau n\rfloor$ samples. Making this selection on the basis of Pareto dominance allows us to search for the Pareto front. First all non-dominated solutions are selected and removed from the population. Next, the non-dominated solutions among the remaining ones are picked out, and this process continues until $\lfloor\tau n\rfloor$ samples are taken.

2. Second, we have to cover the Pareto front: maintaining diversity, or niches, is needed to prevent the population from converging to a single Pareto optimal point instead of forming a representative set of the entire front. Since the mixture-based IDEA already constructs a set of clusters, we can simply use this to maintain the diversity. This can be done in the parameter space or in the objective space.
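A minimal Python sketch of the dominance-based truncation selection described in change 1, reusing the dominates helper from the sketch in section 2 (the function name select_pareto and the treatment of the last front are our own choices):

```python
import numpy as np

def select_pareto(F, n_sel):
    """Select n_sel indices by repeatedly peeling off non-dominated fronts.

    F is an (n, k) array of objective vectors, all to be minimized.
    Successive non-dominated fronts are removed until n_sel samples are
    selected; the last front is truncated arbitrarily (a simplification)."""
    remaining = list(range(len(F)))
    chosen = []
    while len(chosen) < n_sel:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i])
                            for j in remaining if j != i)]
        remaining = [i for i in remaining if i not in front]
        chosen.extend(front)
    return chosen[:n_sel]
```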
In the following section we will investigate the validity of the proposed strategy by implementing and testing a multi-objective mixture-based IDEA. The mixture distribution considered here is a mixture of Gaussian pdfs, and it is tested on continuous multi-objective optimization problems. Note that the choice of a mixture of Gaussians is just a specific instantiation of the algorithm. Other problem domains might suggest the use of other mixture models. For instance, for the discrete multi-objective knapsack problem we have proposed and tested a mixture of discrete univariate factorizations and a mixture of trees ([15], [25]). Another possibility would be the use of mixtures of Bayesian networks [26], but it can be questioned whether this is still computationally feasible.

4 Experimental results

The continuous multi-objective optimization problems we used for testing are the following (both objectives are to be minimized in all cases):

MOP2 (domain $[-4,4]^3$):
$$f_1 = 1 - e^{-\sum_{i=1}^{l}\left(y_i - \frac{1}{\sqrt{l}}\right)^2}, \qquad f_2 = 1 - e^{-\sum_{i=1}^{l}\left(y_i + \frac{1}{\sqrt{l}}\right)^2}$$

MOP3 (domain $[-\pi,\pi]^2$), with $g(x) = x_0\sin(x_1) - x_2\cos(x_3) + x_4\sin(x_5) - x_6\cos(x_7)$:
$$f_1 = 1 + (A_1 - B_1)^2 + (A_2 - B_2)^2, \qquad f_2 = (y_1 + 3)^2 + (y_2 + 1)^2$$
$$A_1 = g(0.5, 1, 2, 1, 1, 2, 1.5, 2), \quad A_2 = g(1.5, 1, 1, 1, 2, 2, 0.5, 2)$$
$$B_1 = g(0.5, y_1, 2, y_1, 1, y_2, 1.5, y_2), \quad B_2 = g(1.5, y_1, 1, y_1, 2, y_2, 0.5, y_2)$$

MOP4 (domain $[-5,5]^3$):
$$f_1 = \sum_{i=1}^{l-1} -10\, e^{-0.2\sqrt{y_i^2 + y_{i+1}^2}}, \qquad f_2 = \sum_{i=1}^{l} |y_i|^{0.8} + 5\sin(y_i^3)$$

EC4 (domain $[0,1] \times [-5,5]^9$):
$$f_1 = y_1, \qquad f_2 = \gamma\left(1 - \sqrt{f_1/\gamma}\right), \qquad \gamma = 1 + 10(l-1) + \sum_{i=2}^{l}\left(y_i^2 - 10\cos(4\pi y_i)\right)$$

EC6 (domain $[0,1]^{10}$):
$$f_1 = 1 - e^{-4y_1}\sin^6(6\pi y_1), \qquad f_2 = \gamma\left(1 - (f_1/\gamma)^2\right), \qquad \gamma = 1 + 9\left(\frac{\sum_{i=2}^{l} y_i}{l-1}\right)^{0.25}$$

These functions have been taken from the multi-objective evolutionary optimization literature ([7], [12]).
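Two of these benchmarks, rendered in Python under the definitions above as they appear in the literature (the function names are ours):

```python
import numpy as np

def mop2(y):
    """MOP2, both objectives minimized; y is a vector in [-4, 4]^l."""
    y = np.asarray(y, dtype=float)
    l = y.size
    f1 = 1.0 - np.exp(-np.sum((y - 1.0 / np.sqrt(l)) ** 2))
    f2 = 1.0 - np.exp(-np.sum((y + 1.0 / np.sqrt(l)) ** 2))
    return f1, f2

def ec6(y):
    """EC6, both objectives minimized; y is a vector in [0, 1]^10."""
    y = np.asarray(y, dtype=float)
    f1 = 1.0 - np.exp(-4.0 * y[0]) * np.sin(6.0 * np.pi * y[0]) ** 6
    g = 1.0 + 9.0 * (np.sum(y[1:]) / (y.size - 1)) ** 0.25
    f2 = g * (1.0 - (f1 / g) ** 2)
    return f1, f2

# The EC6 Pareto front is reached when y[1:] == 0, so that g == 1:
print(ec6(np.array([0.25] + [0.0] * 9)))
```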

In all our testing, we used monotonic IDEAs. We used the rule of thumb by Mühlenbein and Mahnig [16] for FDA and set $\tau$ to 0.3. We fixed the selection size $\lfloor\tau n\rfloor$. Selection is performed by truncation on the domination count. Clustering is applied using the leader algorithm, in both the objective space and the parameter space. In both cases, we computed the $\beta_i$ based upon the proportional domination count. For the objective space, we set $T_d$ so as to get approximately 3 clusters. For the parameter space, we set $T_d$ to such a value that we again get approximately 3 clusters. For comparison, we also tested an approach using no clustering. We searched for conditional factorizations using the BIC metric. In this preliminary study, termination is enforced when the domination count of all of the selected samples equals 0. At such a point, the selected sample vector contains only non-dominated solutions. Note that this need not imply that full convergence has been obtained, since the front itself may not be optimal. To avoid the alternative of allowing an arbitrary number of generations or evaluations, a good termination criterion might be to stop when none of the selected samples is dominated by any of the selected samples of the previous generation. For now, we restrict ourselves to the simple termination criterion, keeping in mind that premature convergence is possible. No single run was allowed more than a fixed maximum number of evaluations. We performed 10 independent runs and determined the final front from the combined results. The average number of required evaluations for each type of clustering is stated in figure 1. In figures 2, 3 and 4, the results using objective clustering on MOP2, MOP3 and MOP4 are shown respectively.

Figure 1: Average number of evaluations per clustering type (objective, parameter, none) on MOP2, MOP3, MOP4, EC4 and EC6.
Figure 2: Result for MOP2 (objective clustering).
Figure 3: Result for MOP3 (objective clustering).
Figure 4: Result for MOP4 (objective clustering).

For each of these three problems, none of the individual runs differ significantly from the combined result. Moreover, the results of parameter clustering, as well as of no clustering at all, are also similar to these results, so we omit further graphs for these three problems. The table in figure 1 indicates that the IDEA approach requires only a few evaluations to adequately solve the three MOP problems. Compared to EC4, the three MOP problems are relatively simple. Converging to the optimal front is very difficult in EC4. In figure 1, we can see that we indeed require a vastly larger number of evaluations there. Only when we cluster in the objective space do we on average require less than the maximum number of evaluations. However, closely observing the results points out that premature convergence has often taken place in the algorithm with objective clustering. For all of the tested algorithms, the results differ quite largely amongst the different runs. Therefore, we have plotted the combined results for objective clustering, parameter clustering and no clustering in figures 5, 6 and 7 respectively.

Figure 5: All runs for EC4 (objective clustering).
Figure 6: All runs for EC4 (parameter clustering).

Figure 7: All runs for EC4 (no clustering).

It becomes clear that clustering in the objective space is by far the most effective, in evaluation requirements as well as in optimization performance. From the results on EC4 given so far, it seems that parameter clustering is not very effective either. We note, however, that the number of clusters can have an influence on the optimization performance. Taking more clusters, in combination with a larger population size to effectively fill up and use these additional clusters, can lead to a good estimation of the promising regions of the multi-objective space. To illustrate this, we have plotted the resulting fronts after 10 runs for objective clustering with a larger selection size and a smaller threshold $T_d$, and for parameter clustering with $T_d = 4.743$. This leads to more clusters for objective clustering and to 7 clusters for parameter clustering. The results in figure 8 show that very good results are obtained with these settings. For objective clustering the average number of evaluations is 94746, whereas for parameter clustering we find an average of 8242 evaluations. Note that we again have premature convergence because of the termination criterion. Given that the results of no clustering do not really improve with an increasing population size and the maximum number of evaluations, we conclude that clustering of some sort is crucial in order to effectively tackle difficult problems such as EC4.

Figure 8: Additional results for EC4 (objective clustering and parameter clustering).

The main difficulty with problem EC6 is that the optimal front is not uniformly distributed in its solutions. Without clustering, we are therefore very likely to find only a part of the front.

Furthermore, by clustering in parameter space, we also have no guarantee of finding a good representation of the front, since the parameter space is not directly related to the density of the points along the front. On the other hand, clustering this space does give a means of capturing more regions than a single cluster can. If we cluster in the objective space, we should have no problem finding a larger part of the front, unless the problem itself is very difficult, as is the case for instance with EC4. In figures 9 and 10, the results for objective clustering and parameter clustering are shown respectively. Using parameter clustering is clearly not effective. Using no clustering, we found results identical to those of parameter clustering, no matter how large the population size. However, by increasing the population size and setting $T_d = 4.473$, we find the front in figure 11 with an average of 2839 evaluations. Even though the results are not as good as those of objective clustering, this does show the notable effect of clustering on multi-objective optimization.

Figure 9: Result for EC6 (objective clustering).
Figure 10: Result for EC6 (parameter clustering).
Figure 11: Additional result for EC6 using a larger population (parameter clustering).

It should also be noted that the Pareto front found in figure 9 (and to a lesser degree in figure 11) seems to coincide with the optimal Pareto front, which is not trivial to achieve, since the fast elitist non-dominated sorting GA (NSGA-II [12]), the strength Pareto evolutionary algorithm (SPEA [27]), and the Pareto-archived evolution strategy (PAES [13]) are all reported to converge to a sub-optimal front [12]. In the experiments so far, the structure learned at each generation is a conditionally factorized Gaussian probability density function. This structure has the advantage of being capable of learning conditional dependences between variables, while at the same time being computationally efficient enough to be applied at each generation. Without detailed knowledge about the fitness function it is not possible to tell whether this structure is optimal. For instance, it might well be possible that the fitness function can be optimized without the need to learn the conditional dependences between the variables. In this case it would be computationally more efficient to use a probability density structure that ignores the interactions between the variables. To get a feeling for the impact of this choice, we have optimized the functions EC4 and EC6 with a mixture of univariate factorizations. A selection size $\lfloor\tau n\rfloor$ with $\tau = 0.3$ was used for each problem, with a cluster threshold $T_d$ resulting in 3 or more clusters. Clustering was done in the objective space, and a total of 10 runs were performed. Figures 12 and 13 show that the Pareto front found is of similar quality to that of the previous experiment using conditionally factorized Gaussian pdfs with objective clustering. The average number of function evaluations for EC4 was 2963. Together with the corresponding figure for EC6, these numbers are substantially lower than those found before (see figure 1), indicating that for these functions the computational effort spent on learning a more powerful and complicated model seems to be unnecessary. It should be noted that this gives only a rough impression of the convergence speed and quality of the algorithms. Future studies will have to look at the influence of population size, selection threshold, and cluster size.

Figure 12: Result for EC4 with univariate factorization (objective clustering).
Figure 13: Result for EC6 with univariate factorization (objective clustering).
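To indicate what this simpler model amounts to, here is a small Python sketch of a mixture of univariate normal factorizations: each cluster gets an independent mean and standard deviation per variable, and new samples are drawn cluster by cluster according to the mixture coefficients β of equation (5). The helper names and the variance floor are our own choices:

```python
import numpy as np

def fit_univariate(cluster):
    """Univariate normal factorization of one cluster: per-variable
    mean and standard deviation, ignoring all interactions."""
    return cluster.mean(axis=0), cluster.std(axis=0) + 1e-12  # variance floor

def sample_mixture(clusters, betas, m, seed=None):
    """Draw m new vectors from the mixture of equation (5), with one
    univariate factorization per cluster and coefficients betas."""
    rng = np.random.default_rng(seed)
    models = [fit_univariate(c) for c in clusters]
    p = np.asarray(betas, dtype=float)
    p = p / p.sum()                                  # normalize coefficients
    picks = rng.choice(len(models), size=m, p=p)     # choose a cluster per draw
    return np.array([rng.normal(mu, sd)
                     for mu, sd in (models[k] for k in picks)])
```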
5 Conclusion

We have proposed a multi-objective iterated density estimation evolutionary algorithm. The algorithm builds a mixture distribution as its probabilistic model, which allows it to maintain the diversity necessary to cover the Pareto front in a statistically sound way. As a specific instantiation of the proposed algorithm we have implemented an example with Gaussian mixture models. Preliminary experimental results show the validity of the proposed method on a set of continuous multi-objective optimization problems taken from the multi-objective evolutionary optimization literature.

Bibliography

[1] P.A.N. Bosman and D. Thierens. An algorithmic framework for density estimation based evolutionary algorithms. Utrecht University Tech. Rep. ftp://ftp.cs.uu.nl/pub/ruu/cs/techreps/cs-999/ps.gz, 1999.
[2] P.A.N. Bosman and D. Thierens. Expanding from discrete to continuous estimation of distribution algorithms: The IDEA. In M. Schoenauer, K. Deb, G. Rudolph, X. Yao, E. Lutton, J.J. Merelo, and H.-P. Schwefel, editors, Parallel Problem Solving from Nature PPSN VI, pages 767-776. Springer, 2000.
[3] P.A.N. Bosman and D. Thierens. Mixed IDEAs. Utrecht University Tech. Rep. UU-CS-2000-45. ftp://ftp.cs.uu.nl/pub/ruu/cs/techreps/cs-2/2-4.ps.gz, 2000.
[4] P.A.N. Bosman and D. Thierens. Negative log-likelihood and statistical hypothesis testing as the basis of model selection in IDEAs. Utrecht University Tech. Rep. UU-CS-2000-36. ftp://ftp.cs.uu.nl/pub/ruu/CS/techreps/CS-2/2-36.ps.gz, 2000.
[5] C.A. Coello Coello. A comprehensive survey of evolutionary-based multiobjective optimization techniques. Knowledge and Information Systems, 1(3):269-308, 1999.
[6] D.E. Goldberg, K. Deb, H. Kargupta, and G. Harik. Rapid, accurate optimization of difficult problems using fast messy genetic algorithms. In S. Forrest, editor, Proceedings of the Fifth International Conference on Genetic Algorithms ICGA-93, pages 56-64. Morgan Kaufmann, 1993.
[7] K. Deb. Multi-objective genetic algorithms: Problem difficulties and construction of test problems. Evolutionary Computation, 7(3):205-230, 1999.
[8] C.M. Fonseca and P.J. Fleming. An overview of evolutionary algorithms in multiobjective optimization. Evolutionary Computation, 3(1):1-16, 1995.
[9] M. Gallagher, M. Frean, and T. Downs. Real-valued evolutionary optimization using a flexible probability density estimator. In W. Banzhaf, J. Daida, A.E. Eiben, M.H. Garzon, V. Honavar, M. Jakiela, and R.E. Smith, editors, Proc. of the GECCO-1999 Genetic and Evolutionary Computation Conference. Morgan Kaufmann, 1999.
[10] G. Harik. Linkage learning via probabilistic modeling in the ECGA. IlliGAL Report No. 99010. ftp://ftp-illigal.ge.uiuc.edu/pub/papers/illigals/99.ps.z, 1999.
[11] G. Harik, F. Lobo, and D.E. Goldberg. The compact genetic algorithm. In Proc. of the 1998 IEEE Int. Conf. on Evolutionary Computation. IEEE Press, 1998.
[12] K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan. A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In M. Schoenauer et al., editors, Parallel Problem Solving from Nature PPSN VI, pages 849-858. Springer, 2000.
[13] J. Knowles and D. Corne. The Pareto archived evolution strategy: a new baseline algorithm for multi-objective optimisation. In A. Zalzala et al., editors, Proceedings of the 1999 Congress on Evolutionary Computation (CEC-99), pages 98-105. IEEE Press, 1999.
[14] P. Larranaga, R. Etxeberria, J. Lozano, and J. Pena. Optimization by learning and simulation of Bayesian and Gaussian networks. Tech. Rep. EHU-KZAA-IK-4-99, 1999.
[15] M. Meila and M.I. Jordan. Estimating dependency structure as a hidden variable. In M.I. Jordan et al., editors, Proceedings of Neural Information Processing Systems. MIT Press, 1998.
[16] H. Mühlenbein and T. Mahnig. FDA: a scalable evolutionary algorithm for the optimization of additively decomposed functions. Evolutionary Computation, 7(4):353-376, 1999.
[17] H. Mühlenbein, T. Mahnig, and O. Rodriguez. Schemata, distributions and graphical models in evolutionary optimization. Journal of Heuristics, 5:215-247, 1999.
[18] H. Mühlenbein and G. Paaß. From recombination of genes to the estimation of distributions I. Binary parameters. In A.E. Eiben, T. Bäck, M. Schoenauer, and H.-P. Schwefel, editors, Parallel Problem Solving from Nature PPSN V, pages 178-187. Springer, 1998.
[19] M. Pelikan and D.E. Goldberg. Genetic algorithms, clustering, and the breaking of symmetry. In M. Schoenauer, K. Deb, G. Rudolph, X. Yao, E. Lutton, J.J. Merelo, and H.-P. Schwefel, editors, Parallel Problem Solving from Nature PPSN VI. Springer, 2000.
[20] M. Pelikan, D.E. Goldberg, and E. Cantú-Paz. BOA: The Bayesian optimization algorithm. In W. Banzhaf, J. Daida, A.E. Eiben, M.H. Garzon, V. Honavar, M. Jakiela, and R.E. Smith, editors, Proc. of the GECCO-1999 Genetic and Evolutionary Computation Conference, pages 525-532. Morgan Kaufmann, 1999.
[21] M. Pelikan, D.E. Goldberg, and F. Lobo. A survey of optimization by building and using probabilistic models. IlliGAL Report No. 99018. ftp://ftp-illigal.ge.uiuc.edu/pub/papers/illigals/998.ps.z, 1999.
[22] M. Pelikan and H. Mühlenbein. The bivariate marginal distribution algorithm. In R. Roy, T. Furuhashi, K. Chawdry, and K. Pravir, editors, Advances in Soft Computing: Engineering Design and Manufacturing. Springer Verlag, 1999.
[23] S. Baluja and S. Davies. Using optimal dependency-trees for combinatorial optimization: Learning the structure of the search space. In D.H. Fisher, editor, Proc. of the 1997 Int. Conf. on Machine Learning. Morgan Kaufmann, 1997.
[24] M. Sebag and A. Ducoulombier. Extending population-based incremental learning to continuous search spaces. In A.E. Eiben, T. Bäck, M. Schoenauer, and H.-P. Schwefel, editors, Parallel Problem Solving from Nature PPSN V, pages 418-427. Springer, 1998.
[25] D. Thierens and P.A.N. Bosman. Multi-objective mixture-based iterated density estimation evolutionary algorithms. Submitted, 2001.
[26] B. Thiesson. Learning mixtures of Bayesian networks. Microsoft Tech. Report MSR-TR-97-30, 1998.
[27] E. Zitzler and L. Thiele. Multiobjective optimization using evolutionary algorithms: a comparative case study. In A.E. Eiben, T. Bäck, M. Schoenauer, and H.-P. Schwefel, editors, Parallel Problem Solving from Nature PPSN V, pages 292-301. Springer, 1998.


MAXIMUM LIKELIHOOD ESTIMATION USING ACCELERATED GENETIC ALGORITHMS In: Journal of Applied Statistical Science Volume 18, Number 3, pp. 1 7 ISSN: 1067-5817 c 2011 Nova Science Publishers, Inc. MAXIMUM LIKELIHOOD ESTIMATION USING ACCELERATED GENETIC ALGORITHMS Füsun Akman

More information

Experimental Study on Bound Handling Techniques for Multi-Objective Particle Swarm Optimization

Experimental Study on Bound Handling Techniques for Multi-Objective Particle Swarm Optimization Experimental Study on Bound Handling Techniques for Multi-Objective Particle Swarm Optimization adfa, p. 1, 2011. Springer-Verlag Berlin Heidelberg 2011 Devang Agarwal and Deepak Sharma Department of Mechanical

More information

Unsupervised Feature Selection Using Multi-Objective Genetic Algorithms for Handwritten Word Recognition

Unsupervised Feature Selection Using Multi-Objective Genetic Algorithms for Handwritten Word Recognition Unsupervised Feature Selection Using Multi-Objective Genetic Algorithms for Handwritten Word Recognition M. Morita,2, R. Sabourin 3, F. Bortolozzi 3 and C. Y. Suen 2 École de Technologie Supérieure, Montreal,

More information

Distributed Probabilistic Model-Building Genetic Algorithm

Distributed Probabilistic Model-Building Genetic Algorithm Distributed Probabilistic Model-Building Genetic Algorithm Tomoyuki Hiroyasu 1, Mitsunori Miki 1, Masaki Sano 1, Hisashi Shimosaka 1, Shigeyoshi Tsutsui 2, and Jack Dongarra 3 1 Doshisha University, Kyoto,

More information

A Multi-objective Evolutionary Algorithm of Principal Curve Model Based on Clustering Analysis

A Multi-objective Evolutionary Algorithm of Principal Curve Model Based on Clustering Analysis A Multi-objective Evolutionary Algorithm of Principal Curve Model Based on Clustering Analysis Qiong Yuan1,2, Guangming Dai1,2* 1 School of Computer Science, China University of Geosciences, Wuhan 430074,

More information

Investigating the Effect of Parallelism in Decomposition Based Evolutionary Many-Objective Optimization Algorithms

Investigating the Effect of Parallelism in Decomposition Based Evolutionary Many-Objective Optimization Algorithms Investigating the Effect of Parallelism in Decomposition Based Evolutionary Many-Objective Optimization Algorithms Lei Chen 1,2, Kalyanmoy Deb 2, and Hai-Lin Liu 1 1 Guangdong University of Technology,

More information

Multi-Objective Pipe Smoothing Genetic Algorithm For Water Distribution Network Design

Multi-Objective Pipe Smoothing Genetic Algorithm For Water Distribution Network Design City University of New York (CUNY) CUNY Academic Works International Conference on Hydroinformatics 8-1-2014 Multi-Objective Pipe Smoothing Genetic Algorithm For Water Distribution Network Design Matthew

More information

SPEA2: Improving the strength pareto evolutionary algorithm

SPEA2: Improving the strength pareto evolutionary algorithm Research Collection Working Paper SPEA2: Improving the strength pareto evolutionary algorithm Author(s): Zitzler, Eckart; Laumanns, Marco; Thiele, Lothar Publication Date: 2001 Permanent Link: https://doi.org/10.3929/ethz-a-004284029

More information

minimizing minimizing

minimizing minimizing The Pareto Envelope-based Selection Algorithm for Multiobjective Optimization David W. Corne, Joshua D. Knowles, Martin J. Oates School of Computer Science, Cybernetics and Electronic Engineering University

More information

Combining Model-based and Genetics-based Offspring Generation for Multi-objective Optimization Using a Convergence Criterion

Combining Model-based and Genetics-based Offspring Generation for Multi-objective Optimization Using a Convergence Criterion Combining Model-based and Genetics-based Offspring Generation for Multi-objective Optimization Using a Convergence Criterion Aimin Zhou, Yaochu Jin, Qingfu Zhang, Bernhard Sendhoff, Edward Tsang Abstract

More information

Reducing Fitness Evaluations Using Clustering Techniques and Neural Network Ensembles

Reducing Fitness Evaluations Using Clustering Techniques and Neural Network Ensembles Reducing Evaluations Using Clustering Techniques and Neural Network Ensembles Yaochu Jin and Bernhard Sendhoff Honda Research Institute Europe Carl-Legien-Str. 30 63073 Offenbach/Main, Germany yaochu.jin@honda-ri.de

More information

IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL., NO., MONTH YEAR 1

IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL., NO., MONTH YEAR 1 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL., NO., MONTH YEAR 1 An Efficient Approach to Non-dominated Sorting for Evolutionary Multi-objective Optimization Xingyi Zhang, Ye Tian, Ran Cheng, and

More information

USING CHI-SQUARE MATRIX TO STRENGTHEN MULTI-OBJECTIVE EVOLUTIONARY ALGORITHM

USING CHI-SQUARE MATRIX TO STRENGTHEN MULTI-OBJECTIVE EVOLUTIONARY ALGORITHM Far East Journal of Mathematical Sciences (FJMS) Volume, Number, 2013, Pages Available online at http://pphmj.com/journals/fjms.htm Published by Pushpa Publishing House, Allahabad, INDIA USING CHI-SQUARE

More information

The Genetic Algorithm for finding the maxima of single-variable functions

The Genetic Algorithm for finding the maxima of single-variable functions Research Inventy: International Journal Of Engineering And Science Vol.4, Issue 3(March 2014), PP 46-54 Issn (e): 2278-4721, Issn (p):2319-6483, www.researchinventy.com The Genetic Algorithm for finding

More information

Adaptive Reference Vector Generation for Inverse Model Based Evolutionary Multiobjective Optimization with Degenerate and Disconnected Pareto Fronts

Adaptive Reference Vector Generation for Inverse Model Based Evolutionary Multiobjective Optimization with Degenerate and Disconnected Pareto Fronts Adaptive Reference Vector Generation for Inverse Model Based Evolutionary Multiobjective Optimization with Degenerate and Disconnected Pareto Fronts Ran Cheng, Yaochu Jin,3, and Kaname Narukawa 2 Department

More information

Efficient Non-domination Level Update Approach for Steady-State Evolutionary Multiobjective Optimization

Efficient Non-domination Level Update Approach for Steady-State Evolutionary Multiobjective Optimization Efficient Non-domination Level Update Approach for Steady-State Evolutionary Multiobjective Optimization Ke Li 1, Kalyanmoy Deb 1, Qingfu Zhang 2, and Sam Kwong 2 1 Department of Electrical and Computer

More information

Using Different Many-Objective Techniques in Particle Swarm Optimization for Many Objective Problems: An Empirical Study

Using Different Many-Objective Techniques in Particle Swarm Optimization for Many Objective Problems: An Empirical Study International Journal of Computer Information Systems and Industrial Management Applications ISSN 2150-7988 Volume 3 (2011) pp.096-107 MIR Labs, www.mirlabs.net/ijcisim/index.html Using Different Many-Objective

More information

Using an outward selective pressure for improving the search quality of the MOEA/D algorithm

Using an outward selective pressure for improving the search quality of the MOEA/D algorithm Comput Optim Appl (25) 6:57 67 DOI.7/s589-5-9733-9 Using an outward selective pressure for improving the search quality of the MOEA/D algorithm Krzysztof Michalak Received: 2 January 24 / Published online:

More information

An Efficient Constraint Handling Method for Genetic Algorithms

An Efficient Constraint Handling Method for Genetic Algorithms An Efficient Constraint Handling Method for Genetic Algorithms Kalyanmoy Deb Kanpur Genetic Algorithms Laboratory (KanGAL) Department of Mechanical Engineering Indian Institute of Technology Kanpur Kanpur,

More information

Multiobjective Prototype Optimization with Evolved Improvement Steps

Multiobjective Prototype Optimization with Evolved Improvement Steps Multiobjective Prototype Optimization with Evolved Improvement Steps Jiri Kubalik 1, Richard Mordinyi 2, and Stefan Biffl 3 1 Department of Cybernetics Czech Technical University in Prague Technicka 2,

More information

DCMOGADES: Distributed Cooperation model of Multi-Objective Genetic Algorithm with Distributed Scheme

DCMOGADES: Distributed Cooperation model of Multi-Objective Genetic Algorithm with Distributed Scheme : Distributed Cooperation model of Multi-Objective Genetic Algorithm with Distributed Scheme Tamaki Okuda, Tomoyuki HIROYASU, Mitsunori Miki, Jiro Kamiura Shinaya Watanabe Department of Knowledge Engineering,

More information

Evolutionary Multiobjective Bayesian Optimization Algorithm: Experimental Study

Evolutionary Multiobjective Bayesian Optimization Algorithm: Experimental Study Evolutionary Multiobective Bayesian Optimization Algorithm: Experimental Study Josef Schwarz * schwarz@dcse.fee.vutbr.cz Jiří Očenášek * ocenasek@dcse.fee.vutbr.cz Abstract: This paper deals with the utilizing

More information

Incremental Gaussian Model-Building in Multi-Objective EDAs with an Application to Deformable Image Registration

Incremental Gaussian Model-Building in Multi-Objective EDAs with an Application to Deformable Image Registration Incremental Gaussian Model-Building in Multi-Objective EDAs with an Application to Deformable Image Registration Peter A.N. Bosman Centrum Wiskunde & Informatica (CWI) P.O. Box 9479 9 GB Amsterdam The

More information

Approximation Model Guided Selection for Evolutionary Multiobjective Optimization

Approximation Model Guided Selection for Evolutionary Multiobjective Optimization Approximation Model Guided Selection for Evolutionary Multiobjective Optimization Aimin Zhou 1, Qingfu Zhang 2, and Guixu Zhang 1 1 Each China Normal University, Shanghai, China 2 University of Essex,

More information

Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems

Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems Tony Maciejewski, Kyle Tarplee, Ryan Friese, and Howard Jay Siegel Department of Electrical and Computer Engineering Colorado

More information

Multi-objective Genetic Algorithms: Problem Difficulties and Construction of Test Problems

Multi-objective Genetic Algorithms: Problem Difficulties and Construction of Test Problems Multi-objective Genetic Algorithms: Problem Difficulties and Construction of Test Problems Kalyanmoy Deb Kanpur Genetic Algorithms Laboratory (KanGAL) Department of Mechanical Engineering Indian Institute

More information

A Genetic Based Algorithm to Generate Random Simple Polygons Using a New Polygon Merge Algorithm

A Genetic Based Algorithm to Generate Random Simple Polygons Using a New Polygon Merge Algorithm A Genetic Based Algorithm to Generate Random Simple Polygons Using a New Polygon Merge Algorithm Ali Nourollah, Mohsen Movahedinejad Abstract In this paper a new algorithm to generate random simple polygons

More information

EVOLUTIONARY algorithms (EAs) are a class of

EVOLUTIONARY algorithms (EAs) are a class of An Investigation on Evolutionary Gradient Search for Multi-objective Optimization C. K. Goh, Y. S. Ong and K. C. Tan Abstract Evolutionary gradient search is a hybrid algorithm that exploits the complementary

More information

Reference Point-Based Particle Swarm Optimization Using a Steady-State Approach

Reference Point-Based Particle Swarm Optimization Using a Steady-State Approach Reference Point-Based Particle Swarm Optimization Using a Steady-State Approach Richard Allmendinger,XiaodongLi 2,andJürgen Branke University of Karlsruhe, Institute AIFB, Karlsruhe, Germany 2 RMIT University,

More information

Effects of Discrete Design-variable Precision on Real-Coded Genetic Algorithm

Effects of Discrete Design-variable Precision on Real-Coded Genetic Algorithm Effects of Discrete Design-variable Precision on Real-Coded Genetic Algorithm Toshiki Kondoh, Tomoaki Tatsukawa, Akira Oyama, Takeshi Watanabe and Kozo Fujii Graduate School of Engineering, Tokyo University

More information

X/$ IEEE

X/$ IEEE IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 12, NO. 1, FEBRUARY 2008 41 RM-MEDA: A Regularity Model-Based Multiobjective Estimation of Distribution Algorithm Qingfu Zhang, Senior Member, IEEE,

More information

division 1 division 2 division 3 Pareto Optimum Solution f 2 (x) Min Max (x) f 1

division 1 division 2 division 3 Pareto Optimum Solution f 2 (x) Min Max (x) f 1 The New Model of Parallel Genetic Algorithm in Multi-Objective Optimization Problems Divided Range Multi-Objective Genetic Algorithm Tomoyuki HIROYASU Mitsunori MIKI Sinya WATANABE Doshisha University,

More information

Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems

Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems Improved Pruning of Non-Dominated Solutions Based on Crowding Distance for Bi-Objective Optimization Problems Saku Kukkonen and Kalyanmoy Deb Kanpur Genetic Algorithms Laboratory (KanGAL) Indian Institute

More information