An Angle based Constrained Many-objective Evolutionary Algorithm


APPLIED INTELLIGENCE manuscript No. (will be inserted by the editor)

An Angle based Constrained Many-objective Evolutionary Algorithm

Yi Xiang · Jing Peng · Yuren Zhou · Zefeng Chen · Miqing Li

Received: date / Accepted: date

Abstract Having successfully handled many-objective optimization problems with box constraints only by using VaEA, a vector angle based many-objective evolutionary algorithm proposed in our precursor study, this paper extends VaEA to solve generic constrained many-objective optimization problems. The proposed algorithm (denoted by CVaEA) differs from the original one mainly in the mating selection and the environmental selection, which are adapted to the presence of infeasible solutions. Furthermore, we suggest a set of new constrained many-objective test problems in which the objectives have different ranges of function values. Compared with normalized problems, this set of scaled ones is better suited to testing an algorithm's performance, because practical problems are usually far from being normalized. The proposed CVaEA is compared with two recent constrained many-objective optimization methods on the proposed test problems with up to 15 objectives, and on a constrained engineering problem from practice. The simulation results show that CVaEA can find a set of well converged and properly distributed solutions and, compared with its competitors, obtains a better balance between convergence and diversity. This, together with the original VaEA paper, demonstrates the usefulness and efficiency of vector angle based algorithms for handling both constrained and unconstrained many-objective optimization problems.

Corresponding author: Y. Zhou

This paper is supported by the National Natural Science Foundation of China (Grant nos. 64743 and 667343), and the Scientific Research Special Plan of Guangzhou Science and Technology Programme (Grant no. 674).

Yi Xiang, Jing Peng, Yuren Zhou and Zefeng Chen
School of Data and Computer Science & Collaborative Innovation Center of High Performance Computing, Sun Yat-sen University, Guangzhou, P. R. China. E-mail: gzhuxiang yi@63.com (Y. Xiang), zhouyuren@mail.sysu.edu.cn (Y. Zhou)

Miqing Li
Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA), School of Computer Science, University of Birmingham, Birmingham, U. K.

Keywords Many-objective optimization; constraint handling; evolutionary algorithms; VaEA

1 Introduction

Recently, many-objective optimization problems (MaOPs) have caught much attention from the evolutionary computation community. Such problems refer to multi-objective optimization problems (MOPs) with more than three objectives. MaOPs can cause great difficulties for traditional multi-objective evolutionary algorithms (MOEAs) that work quite well for problems with two or three objectives. The performance degradation of Pareto-based MOEAs, such as NSGA-II [8], SPEA2 [37] and others [, 7], is mainly due to the insufficient selection pressure toward the Pareto front as the number of objectives increases []. MaOPs arise widely in water distribution system design [], automotive engine calibration [], airfoil design [9], and other applications [8, ]. Therefore, it is worth making great efforts to solve them efficiently.

Generally, MaOPs can be classified into two groups: constrained and unconstrained (with box constraints only) problems. In this paper, we consider the following constrained many-objective optimization problem with J inequality and K equality constraints:

   Minimize  F(x) = (f_1(x), f_2(x), ..., f_M(x))^T,
   s.t.      g_j(x) >= 0,  j = 1, 2, ..., J,
             h_k(x) = 0,   k = 1, 2, ..., K,
             x ∈ Ω,                                                     (1)

where M is the number of objectives (M >= 4), and x = (x_1, x_2, ..., x_n)^T is the decision vector, with n the number of decision variables. In (1), Ω = \prod_{i=1}^{n} [x_i^{(L)}, x_i^{(U)}] ⊂ R^n is called the decision space, where x_i^{(L)} and x_i^{(U)} are the lower and upper bounds of the decision variable x_i, respectively. If we omit the inequality and equality constraints in MaOP (1), then we obtain unconstrained (with box constraints only) problems, which can be stated as follows:

   Minimize  F(x) = (f_1(x), f_2(x), ..., f_M(x))^T,
   s.t.      x ∈ Ω.                                                     (2)

For MaOP (2), some specific many-objective evolutionary algorithms (MaOEAs) that deal with the challenges of solving MaOPs to some extent have been proposed in recent years. Yang et al. [3] proposed a grid-based evolutionary algorithm (GrEA) for many-objective optimization problems. In GrEA, the grid dominance and grid difference are used to increase the selection pressure toward the optimal fronts. In addition, three grid-based criteria, i.e., grid ranking, grid crowding distance and grid coordinate point distance, are incorporated to help maintain an extensive and uniform distribution among solutions. Li et al. [3] suggested using a shift-based density estimation (SDE) strategy to measure the density of the population so as

to make Pareto-based algorithms suitable for MaOPs. In SDE, the density of a solution's neighborhood is measured by shifting the positions of other solutions according to their convergence comparison with the current solution [8]. Thus, SDE covers both the distribution and convergence information of individuals. SDE was applied to three popular Pareto-based algorithms (i.e., NSGA-II, SPEA2, and PESA-II), and the experimental results demonstrated its usefulness in handling many-objective problems. Praditwong and Yao [4] and Wang et al. [7] proposed the two-archive algorithm (Two_Arch) and its improved version Two_Arch2, respectively. In these algorithms, two archives are used, focusing on convergence and diversity, respectively. In Two_Arch2, different selection principles (i.e., indicator-based and Pareto-based) are assigned to the two archives. In addition, a new L_p-norm-based (p < 1) diversity maintenance scheme is designed. The experimental results showed that Two_Arch2 performed well on a set of MaOPs with up to 20 objectives. Recently, Li et al. [] have proposed a Bi-Goal Evolution (BiGE) to optimize problems with many objectives. In BiGE, a given MaOP is converted into a bi-goal (objective) optimization problem regarding proximity and diversity. Then, in this bi-goal domain, the Pareto dominance relation can be applied to handle the new problem well. BiGE has been found to be very competitive against other state-of-the-art algorithms, and it suggested a completely new way of addressing many-objective problems.

With the help of a set of supplied reference points or weight vectors, some reference set based or decomposition based evolutionary algorithms have been suggested for MaOPs. Deb and Jain [6] proposed a reference-point-based many-objective evolutionary algorithm (called NSGA-III) by following the NSGA-II framework. In NSGA-III, more emphasis is put on non-dominated population members, and the diversity within the population is maintained by using a number of well-distributed reference points. NSGA-III was found to produce satisfactory results on a set of test problems with three to fifteen objectives. Li et al. [] suggested a unified paradigm named MOEA/DD that combines dominance- and decomposition-based approaches for dealing with MaOPs. In MOEA/DD, a set of weight vectors is used to specify subregions in the objective space, aiming at maintaining a good distribution among solutions. In addition, an efficient procedure [6] is employed to update the non-domination level structure of the population after an offspring solution is introduced. The empirical results have demonstrated the effectiveness of MOEA/DD in finding a set of well-converged and well-distributed solutions for the DTLZ and WFG test suites. Some other evolutionary algorithms concentrating on MaOP (2) can be found in [,, 4, 6, 33, 3].

From the literature review, it seems that most attention has been restricted to handling unconstrained MaOPs only, and there is not enough literature on dealing with constraints in MaOPs. Representative methods for the constrained MaOP (1) can be reviewed as follows. Jan and Zhang [4] proposed a constrained MOEA/D-DE by modifying the replacement and update schemes in the original approach [9]. In the constrained algorithm, a penalty function, which requires two control parameters, is adopted to penalize infeasible solutions. In addition, to make the approach applicable, four other parameters are also required.
Having realized the difficulty in the parameter configuration, Jain and Deb proposed the C-MOEA/D algorithm by making the following two major modifications to the original MOEA/D-

DE approach [3]: (1) instead of replacing a member just based on a performance metric (PBI or Tchebycheff), the constraint violation (CV), if any, of a child solution y and its randomly picked neighbor x is checked, and adequate emphasis is put on feasible and small-CV solutions; (2) when creating the offspring population, the DE operator is replaced by the SBX and polynomial mutation operators. The experimental results showed that C-MOEA/D is a competitive alternative for MaOPs with constraints. Also in [3], the authors proposed another constrained MaOEA based on the framework of the NSGAIII procedure, namely CNSGAIII. The differences between CNSGAIII and the original unconstrained algorithm can be summarized as follows. First, in the presence of constraints, the constraint-dominance principle adopted in NSGA-II [8] is used to divide the population into different non-dominated levels. Second, instead of randomly selecting pairs of members to create offspring, CNSGAIII uses a modified tournament selection operation for choosing parents, which emphasizes feasible solutions over infeasible ones, and smaller-CV solutions over larger-CV ones. Li et al. [] introduced a constraint handling method into MOEA/DD, and proposed CMOEA/DD to solve constrained MaOPs. In this algorithm, the survival of infeasible solutions depends on both CVs and niching scenarios. Infeasible solutions associated with an isolated subregion are given a second chance to survive. In addition, a binary tournament selection procedure similar to that in CNSGAIII is used to choose mating parents.

In the precursor study [3], we suggested a many-objective evolutionary algorithm based on vector angles (denoted by VaEA). In this algorithm, the general framework is the same as in NSGA-II or NSGA-III. However, VaEA uses the maximum-vector-angle-first principle to keep a good distribution among solutions. Besides, another principle, named worse-elimination, is adopted to conditionally replace worse solutions in terms of convergence. Based on the above two principles, VaEA achieves a good balance between convergence and diversity.

In this paper, we extend VaEA to solve constrained MaOPs by making the following main modifications to the original VaEA: (1) a modified tournament selection operation as in CNSGAIII is used to choose mating parents; (2) a portion of infeasible solutions is added into the new population before the inclusion of feasible ones. Infeasible solutions with smaller constraint violations are preferred, and this process aims at sufficiently utilizing the information provided by infeasible solutions. In particular, when infeasible solutions are located in an isolated region, their inclusion is good for diversity promotion; (3) in the presence of infeasible solutions, the modified worse-elimination principle emphasizes feasible solutions over infeasible ones, and solutions with smaller fitness values (i.e., the sum of all normalized objectives) over those with larger values. We denote the new algorithm by CVaEA hereafter.

The main contributions of this work can be summarized as follows. First, an efficient and effective many-objective EA for constrained optimization problems is suggested.
The proposed CVaEA inherits some good properties from the original VaEA; for example, it is free from a set of supplied reference points or weight vectors, and has the time complexity max{O(N log^{M-2} N), O(MN^2)} [3], where M is the number of objectives and N is the population size.

The O(N log^{M-2} N) term is the time needed for the fast non-dominated sorting [], while O(MN^2) is the time required for the association and niching operations [3]. Second, a set of constrained scaled test problems is proposed. In this new test suite, each test problem has a different range of values for each objective. It is therefore more suitable to use these problems to test an algorithm's performance, because practical problems are far from normalized ones (i.e., ones with an identical range for each objective). The proposed test problems can be scaled to any number of objectives, and so can the number of decision variables.

The rest of this paper is organized as follows. We first describe our proposed CVaEA in detail in Section 2. Then, in Section 3, we verify the performance of CVaEA through an experimental study, including the description of the scaled test problems and the performance comparison against CNSGAIII and CMOEA/DD. After that, CVaEA is applied to a practical problem in Section 4. Finally, Section 5 concludes the paper.

2 Proposed VaEA with Constraint-Handling Approach

This section first gives a brief review of VaEA, followed by a description of the general framework of CVaEA. Finally, we present details of the modifications made to the mating and environmental selections in the presence of constraints.

2.1 A Brief Review of VaEA

Before we describe the procedure for handling constrained optimization problems, we first give a brief review of the recently proposed VaEA, described in the original paper [3]. VaEA uses the same general framework as the NSGA-II [8] and NSGA-III [6] procedures. By applying mating selection and genetic (crossover and mutation) operators, the current population P is used to create an offspring population Q. The union population S = P ∪ Q is then adaptively normalized. By means of the non-dominated sorting procedure, the solutions in S are divided into different layers, i.e., F_1, F_2, ... All members in layer 1 to layer l are first included in a temporary set S_t. If |S_t| = N (N is the population size), then the next generation is started with P = S_t. If |S_t| > N, then all members up to the (l-1)th layer are already included for the next generation, i.e., P = ∪_{i=1}^{l-1} F_i, and the remaining K = N - |P| solutions are selected one by one from the layer F_l (called the critical layer). We start by defining the angle from a member x_j ∈ F_l to the set P as the vector angle between x_j and its target solution, i.e., the solution in P to which x_j has the minimum vector angle. Then, the maximum-vector-angle-first principle is used to select candidates from F_l one by one to fill the population P. Specifically, priority is given to the member in F_l that has the maximum vector angle to P. After a member is added, we may need to update the target solutions of the remaining solutions in F_l. According to the above procedure, VaEA selects solutions dynamically and is expected to keep a well distributed population.
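To make the maximum-vector-angle-first principle concrete, the following minimal Python sketch (ours, not code from the original VaEA paper) computes the vector angle between two objective vectors and picks, from the critical layer F_l, the member whose minimum angle to the already selected set P is largest; it assumes the objective vectors have already been translated by the ideal point and normalized.

import numpy as np

def vector_angle(u, v):
    # Angle (in radians) between two normalized objective vectors.
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    if denom == 0.0:
        return 0.0
    cos = np.clip(np.dot(u, v) / denom, -1.0, 1.0)
    return np.arccos(cos)

def max_vector_angle_first(P, F_l):
    # P, F_l: lists of objective vectors (numpy arrays), P non-empty.
    # Returns the index in F_l of the member with the largest minimum
    # angle to P, i.e., the one lying in the least-covered direction.
    best_idx, best_angle = -1, -1.0
    for j, x in enumerate(F_l):
        # angle from x to P = angle to its closest (target) member of P
        angle_to_P = min(vector_angle(x, y) for y in P)
        if angle_to_P > best_angle:
            best_idx, best_angle = j, angle_to_P
    return best_idx, best_angle

In VaEA the target solution of each remaining member is cached and updated incrementally after every insertion; the sketch recomputes the minimum from scratch purely for clarity.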

Another principle, named worse-elimination, is adopted to allow worse solutions in terms of convergence to be conditionally replaced by other individuals, so as to keep a balance between convergence and diversity. In VaEA, the convergence of a solution is measured by the sum of all normalized objectives. When a solution in P is replaced by a member of F_l, the target solutions of the remaining members are also updated if necessary. The above procedure repeats until the population is full. The worst-case time complexity of VaEA is max{O(N log^{M-2} N), O(MN^2)}, which is equivalent to that of NSGA-III. However, VaEA has the following good properties: (1) it is free from a set of supplied reference points or weight vectors; (2) it introduces no additional algorithmic parameters; (3) it was found to be efficient and effective in solving problems with a large number of objectives.

2.2 General Framework of CVaEA

The pseudo code of the proposed constrained approach is shown in Algorithm 1. CVaEA shares a common framework that is employed by many evolutionary algorithms. First, a population with N solutions is randomly initialized in the whole decision space Ω. Then promising solutions are selected into the mating pool P' according to the fitness value of each individual. After that, a set of offspring solutions Q is generated by applying crossover and mutation operations to the mating pool P'. Finally, by applying the environmental selection, N solutions in the union of P and Q survive into the next generation. The above steps repeat until the number of generations G reaches its maximum value, i.e., G_max.

Algorithm 1 Framework of the proposed CVaEA
1: Initialization(P)
2: G = 0
3: while G < G_max do
4:    P' = Mating_selection(P)
5:    Q = Variation(P')
6:    S = P ∪ Q
7:    P = Environmental_selection(S)
8:    G++
9: end while

The main differences between CVaEA and the original VaEA lie in the mating selection (line 4 in Algorithm 1) and the environmental selection (line 7). We describe these modifications in the following subsections.

2.3 Modifications in the Mating Selection

In CVaEA, the mating pool P' is constructed as follows (line 4 in Algorithm 1): (1) select two members p_1 and p_2 from the current population P at random; (2) apply the modified binary tournament selection operation [3] to p_1 and p_2, and select the better one; (3) repeat the above steps until N parents are selected. Note that some better solutions may be selected more than once.

Before applying the binary tournament selection in step (2), we use the CV value introduced in [3] to assess the quality of infeasible solutions. To calculate the CV value of an infeasible solution x [denoted by CV(x)], we first normalize all constraints by the method suggested in [3]. The normalized inequality and equality constraints are denoted by \bar{g}_j(x) and \bar{h}_k(x), respectively. Then CV(x) is given by the following equation:

   CV(x) = \sum_{j=1}^{J} <\bar{g}_j(x)> + \sum_{k=1}^{K} |\bar{h}_k(x)|,          (3)

where the bracket operator <β> returns the negative of β if β < 0, and returns 0 if β >= 0 [3]. The smaller the CV value is, the better the solution. In step (2), the conditions for choosing between p_1 and p_2 by the binary tournament selection are listed as follows.

- If p_1 is feasible and p_2 is infeasible, select p_1;
- If p_2 is feasible and p_1 is infeasible, select p_2;
- If both p_1 and p_2 are infeasible and p_1 has the smaller CV, select p_1;
- If both p_1 and p_2 are infeasible and p_2 has the smaller CV, select p_2;
- If both p_1 and p_2 are feasible, then p_1 or p_2 is selected randomly.

After the mating pool is formed, the normal crossover and mutation operators are used to generate the offspring population Q (line 5 in Algorithm 1). Next, the population for the next generation is created by applying the environmental selection to the union of P and Q (lines 6-7 in Algorithm 1). Details of the environmental selection are given in the next subsection.
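For illustration only (again, not the authors' code), the sketch below implements the overall constraint violation of Eq. (3) and the five tournament rules listed above; g_bar and h_bar stand for the normalized inequality and equality constraint values, and solutions are assumed to carry a precomputed cv attribute.

import random

def constraint_violation(g_bar, h_bar):
    # Eq. (3): CV(x) = sum_j <g_j(x)> + sum_k |h_k(x)|,
    # where <b> = -b if b < 0 and 0 otherwise (g_j(x) >= 0 means feasible).
    cv = sum(-g for g in g_bar if g < 0.0)
    cv += sum(abs(h) for h in h_bar)
    return cv

def binary_tournament(p1, p2):
    # p1, p2: solution objects with a precomputed attribute .cv;
    # a solution is feasible iff its CV value equals zero.
    f1, f2 = (p1.cv == 0.0), (p2.cv == 0.0)
    if f1 and not f2:
        return p1
    if f2 and not f1:
        return p2
    if not f1 and not f2:
        return p1 if p1.cv < p2.cv else p2
    return random.choice([p1, p2])   # both feasible: pick one at random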

2.4 Modifications in the Environmental Selection

In the presence of constraints, the environmental selection procedure is different from that in VaEA. First, we divide the union set S of size 2N into two sets: the feasible solutions (set F) and the infeasible solutions (set I). If the number of feasible solutions is no larger than N, i.e., |F| <= N, then we add all feasible solutions into the new population P, and the remaining members are selected from the set I. This is simply realized by sorting the infeasible solutions in ascending order of their CV values; the first N - |F| solutions in I are then selected to fill the population P. However, if |F| > N, meaning that there are more feasible solutions than required, we first normalize the set S as in the original NSGAIII paper, using only feasible solutions. Then, some infeasible solutions, if any, are added into the new population P first. The number of selected infeasible solutions N_ifs is controlled by a parameter α, i.e., N_ifs = αN. The parameter α takes a small value, and its effect on the algorithm's performance is investigated in Section 3.5. If |I| <= N_ifs, then we add all infeasible solutions. Otherwise, the N_ifs infeasible solutions with the smallest CV values are preferred.

Next, the solutions in F are divided into different layers and the critical layer F_l is identified. Note that, when applying the non-dominated sorting procedure, the number of already included infeasible solutions should be taken into account. Finally, N - |P| solutions are selected from the front F_l based on the two principles of the original VaEA: the maximum-vector-angle-first principle and the worse-elimination principle. As stated in Section 2.1, the target solution of each member in F_l is defined as the closest solution in P in terms of the angle. These target solutions are pre-calculated before applying the two principles, and may change during the selection process. Specifically, the maximum-vector-angle-first principle is the same as in VaEA: each time, the solution in F_l that has the maximum angle to the population P is chosen and included. However, there are some differences in the worse-elimination principle in the presence of infeasible solutions.

The pseudo-code of the modified worse-elimination principle is shown in Algorithm 2. Suppose x_j ∈ F_l has the minimum vector angle to P, and assume that x_j is associated with y_r ∈ P. If the angle between them is smaller than the threshold π/2/(N+1) [3], then we exchange x_j and y_r if one of the following conditions is true: (1) y_r is infeasible, or (2) both are feasible, but x_j has a smaller fitness value, which is defined as the sum of the normalized objective values [3] (lines 3-4 in Algorithm 2). After exchanging x_j and y_r (y_r is now in F_l and may be reconsidered in the next generation), two tasks remain: (1) finding the target solution of y_r (line 5 in Algorithm 2). This can be achieved by a normal routine: first, calculate the vector angles between y_r and each member in P; second, find the minimum among these angles; finally, the corresponding solution is the target we are looking for; and (2) updating the target solutions of the remaining members in F_l (line 6 in Algorithm 2). To this end, the same procedure as in VaEA is employed: we calculate the vector angle between each remaining member in F_l and the newly added x_j, and if this angle is smaller than the original one, then the target solution is updated accordingly.

Algorithm 2 The modified worse-elimination principle
1: Find x_j ∈ F_l that has the minimum angle to P, and assume that x_j is associated with y_r
2: if the angle between x_j and y_r is smaller than π/2/(N+1) then
3:    if {feasible(y_r) = FALSE} or {feasible(y_r) = TRUE and Fitness(x_j) < Fitness(y_r)} then
4:       Exchange x_j and y_r
5:       Determine the target solution of y_r
6:       Update target solutions of the remaining members in F_l
7:    end if
8: end if

According to Algorithm 2, it is possible that some of the added infeasible solutions will be replaced by feasible ones while others will not. For example, infeasible solutions located in an isolated region [] will not be substituted, and preserving them helps the algorithm to search around this poorly explored region. Hence, this may be good for diversity promotion [].
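The following Python sketch (ours; the attribute name cv and the default value of alpha are illustrative assumptions, not settings from the paper) summarizes the first stage of the environmental selection described above, i.e., how feasible and infeasible members are treated before the VaEA principles fill the remaining slots.

def environmental_selection_stage1(S, N, alpha=0.1):
    # S: list of solutions, each with an attribute .cv (constraint violation).
    # Returns (P, candidates): P is the partially filled next population;
    # candidates are the feasible solutions from which the remaining
    # N - len(P) members are chosen with the VaEA principles (not shown).
    F = [s for s in S if s.cv == 0.0]
    I = sorted((s for s in S if s.cv > 0.0), key=lambda s: s.cv)
    if len(F) <= N:
        # All feasible members survive; top up with the least-violating
        # infeasible ones, and nothing is left to select afterwards.
        return F + I[:N - len(F)], []
    # More feasible solutions than slots: reserve roughly alpha*N places
    # for the least-violating infeasible solutions first.
    n_ifs = min(int(alpha * N), len(I))
    return I[:n_ifs], F

The normalization step and the non-dominated sorting of the feasible candidates, both described in the text, would follow before the maximum-vector-angle-first and worse-elimination principles are applied.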

3 Experimental Study

This section aims at verifying the performance of CVaEA through an experimental study. We compare our proposed algorithm with CNSGAIII and CMOEA/DD on a set of constrained scaled test problems derived from the CDTLZ benchmarks that were first introduced in [3].

3.1 Test Problems

In [3], three types of constrained problems were developed on the basis of the DTLZ test suite [9]. In Type-1 problems, the original DTLZ problems were modified by adding a constraint which forms an infeasible barrier in approaching the Pareto-optimal front. In this type of problem, the original Pareto-optimal fronts are still optimal. However, an algorithm may find it difficult to solve these problems because it is not easy to overcome the infeasible regions in the objective space. By applying this principle, the authors obtained C1-DTLZ1 and C1-DTLZ3, which share the same objective functions with the original DTLZ1 and DTLZ3 problems. In C1-DTLZ1, a linear constraint is added, due to which feasible solutions appear only in a region of the objective space close to the true front. Meanwhile, a non-linear constraint is added in C1-DTLZ3, which creates a band of infeasible space adjacent to the Pareto-optimal front. Mathematical formulations of the constraints can be found elsewhere [3].

For Type-2 constrained problems, some parts of the Pareto-optimal front are made infeasible by adding a non-linear constraint. This kind of problem is used to test an algorithm's ability to deal with discontinuous Pareto-optimal fronts. C2-DTLZ2 and convex C2-DTLZ2 were designed by applying the above principle to the original DTLZ2 [9] and convex DTLZ2 problems [6].

Unlike Type-1 and Type-2 problems, which have only one constraint, Type-3 problems involve multiple constraints, and the entire Pareto-optimal front of the unconstrained problem is no longer optimal. Instead, the front is made up of parts of the added constraint surfaces. For this purpose, DTLZ1 and DTLZ4 were modified accordingly by adding M different constraints. Hence, two new problems, C3-DTLZ1 and C3-DTLZ4, were obtained. For details of all types of constrained problems, please refer to [3].

The objectives of the constrained DTLZ family (or CDTLZ) share the same property: all of them are in the same range. For example, in C1-DTLZ3, C2-DTLZ2 and convex C2-DTLZ2, all the objectives f_i, i = 1, 2, ..., M, are in the region [0, 1]. Since these problems have an identical range of values for each objective, they are called normalized test problems [6]. However, as stated in [6], practical problems are far from being normalized, and the objectives are usually scaled differently. Therefore, there is a great need to test algorithms on problems with differently scaled objectives. Hence, following the practice in [6], this paper suggests a set of constrained scaled test problems obtained by multiplying objective f_i in the CDTLZ problems by a factor r^i, where r is the base of the factor. The scaled objectives are denoted by \tilde{f}_i, i = 1, 2, ..., M. Therefore, we obtain a set of new constrained scaled test problems.

Table 1 The settings of r for constrained scaled test problems. Problem C1-SDTLZ1 C3-SDTLZ1 M r 3 8 3. Problem M r C1-SDTLZ3 C2-SDTLZ2 C2-SDTLZ2Convex C3-SDTLZ4 3 8 3 3

The new problems are C1-SDTLZ1, C1-SDTLZ3, C2-SDTLZ2, C2-SDTLZ2Convex, C3-SDTLZ1 and C3-SDTLZ4. The settings of r for C1-SDTLZ1 and C3-SDTLZ1 are listed on the left side of Table 1, while those for the remaining problems are presented on the right side of that table. For example, for the three-objective C1-SDTLZ1, the objectives f_1, f_2 and f_3 are multiplied by r, r^2 and r^3, respectively. Since we have the relation \tilde{f}_i = f_i r^i, we can easily obtain f_i = \tilde{f}_i / r^i. By plugging this into the expressions of the original constraints in the CDTLZ problems, the new constraints can be worked out. For example, in C1-DTLZ1, the original constraint is given by the following expression [3]:

   c(x) = 1 - f_M(x)/0.6 - \sum_{i=1}^{M-1} f_i(x)/0.5 >= 0.          (4)

In C1-SDTLZ1, the constraint is then evaluated as follows:

   c(x) = 1 - \tilde{f}_M(x)/(0.6 r^M) - \sum_{i=1}^{M-1} \tilde{f}_i(x)/(0.5 r^i) >= 0.          (5)

Analogously, the constraints in the other test problems can be calculated. Fig. 1 shows the difference between C1-DTLZ1 and C1-SDTLZ1 through a two-dimensional case. In C1-DTLZ1, the feasible region is the black part between the lines f_1 + f_2 = 0.5 and f_1/0.5 + f_2/0.6 = 1 [Fig. 1(a)], while that in C1-SDTLZ1 is marked by the black background in Fig. 1(b), where the boundary lines are the correspondingly scaled ones, i.e., \tilde{f}_1/r + \tilde{f}_2/r^2 = 0.5 and \tilde{f}_1/(0.5 r) + \tilde{f}_2/(0.6 r^2) = 1, respectively.

Fig. 1 Two-objective version of the C1-DTLZ1 and C1-SDTLZ1 problems: (a) C1-DTLZ1; (b) C1-SDTLZ1.
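As a small worked example (ours, not from the paper), the sketch below evaluates the scaled constraint (5) directly from the scaled objective values, assuming the scaling convention \tilde{f}_i = r^i f_i used above; the value of r is left as a free parameter.

def c1_sdtlz1_constraint(f_scaled, r):
    # f_scaled: list of scaled objective values [f~_1, ..., f~_M].
    # Implements Eq. (5):
    #   c(x) = 1 - f~_M/(0.6 r^M) - sum_{i=1}^{M-1} f~_i/(0.5 r^i) >= 0
    M = len(f_scaled)
    c = 1.0 - f_scaled[-1] / (0.6 * r**M)
    c -= sum(f_scaled[i] / (0.5 * r**(i + 1)) for i in range(M - 1))
    return c   # the solution is feasible iff c >= 0

Equivalently, one may recover f_i = \tilde{f}_i / r^i and plug it into the original constraint (4) of C1-DTLZ1; both routes give the same value.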

3.2 Performance Metric

The well-known IGD metric is used here to assess the performance of all the algorithms. This metric is a combined measurement of both the convergence and the diversity of the obtained set. Let P be an approximation set and P* be a set of non-dominated points uniformly distributed along the true Pareto front; the IGD metric is then defined as follows [3, 34]:

   IGD(P) = (1/|P*|) \sum_{z ∈ P*} dist(z, P),          (6)

where dist(z, P) is the Euclidean distance between z and its nearest neighbor in P, and |P*| is the cardinality of P*. If P* is large enough to cover the true Pareto front very well, then both the convergence and the diversity of the approximation set P can be measured by IGD(P) [3]. For an EMO algorithm, a small IGD value is desirable.

Table 2 lists the number of points used for the calculation of IGD with respect to different numbers of objectives on each test problem. These points are generated according to the method in [6]. As the number of objectives increases, more points are needed to cover the true Pareto-optimal fronts as well as possible.

Table 2 The number of points in P*
Problem M Number of points Problem M Number of points C-SDTLZ C-SDTLZ C3-SDTLZ 3 3 3 3 49 49 8 9 C-SDTLZ3 8 9,, 3,74 3,74 3 3 74 46 9 8 696 C-SDTLZConvex 8,94,74 8,88, 3,974 3 7 3 37 7 9 8,464 C3-SDTLZ4 8,48 3,6 6,77,46,4
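For reference, here is a minimal NumPy sketch (ours, for illustration only) of the IGD computation in Eq. (6); A holds the approximation set in objective space and PF a uniform sampling of the true Pareto front.

import numpy as np

def igd(A, PF):
    # A:  (n, M) array, the approximation set in objective space.
    # PF: (m, M) array, points sampled uniformly on the true Pareto front.
    # Eq. (6): IGD = (1/|PF|) * sum over z in PF of dist(z, A),
    # where dist(z, A) is the distance from z to its nearest neighbor in A.
    dists = np.array([np.linalg.norm(A - z, axis=1).min() for z in PF])
    return dists.mean()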

3.3 General Experimental Settings

The parameter settings for this experiment are listed below unless otherwise mentioned.

Population size: According to [7], the population size in CNSGAIII is set to the smallest multiple of four larger than the number of reference points (H) produced by the so-called two-layer reference point (or weight vector) generation method. For CMOEA/DD, we use the population size N = H, as recommended by its developers []. CVaEA keeps the same population size as CNSGAIII. The population size N for different numbers of objectives is summarized in Table 3.

Number of independent runs and the termination condition: All algorithms are run independently on each test instance multiple times and are terminated when a predefined maximum number of function evaluations (MFE) is reached. The settings of MFE for different numbers of objectives are listed in Table 4.

Parameter settings for operators: In all algorithms, the simulated binary crossover (SBX) and polynomial mutation operators are used to generate offspring solutions. The crossover probability p_c and the mutation probability p_m are set to 1.0 and 1/n, respectively. The distribution index of the SBX operator is η_c = 30, and the distribution index of the mutation operator is η_m = 20 [7, ].

Parameter settings for algorithms: Following the practice in [], the penalty-based boundary intersection (PBI) approach is used in CMOEA/DD with a penalty parameter θ = 5. The neighborhood size T is set to 20, and the probability used to select within the neighborhood is δ = 0.9. In CVaEA, the infeasible portion α is set to a small value; the effect of this parameter is investigated in Section 3.5.

Table 3 The population size for different numbers of objectives
M     H     CNSGAIII & CVaEA     CMOEA/DD
3     91    92                   91
5     210   212                  210
8     156   156                  156
10    275   276                  275
15    135   136                  135

Table 4 The settings of MFE for different numbers of objectives
M MFE = population size × G_max 3 9 8 6 76 3 3
G_max denotes the maximum number of generations, which is used in CVaEA and CNSGAIII, while MFE is used in CMOEA/DD.

3.4 Simulation Results and Analyses

Table 5 gives the median and Inter-Quartile Range (IQR) results on the three-objective test problems in terms of the IGD metric. The significance of the difference between

CVaEA 3 CVaEA and the peer algorithms is determined by using the well known Wilcoxon s rank sum test []. As shown, CVaEA performs significantly better than CNSGAI- II and CMOEAD/D on both C-SDTLZ and C3-SDTLZ. For C-SDTLZ3, all the three algorithms obtain similar performance, meaning that no significance differences are detected by the Wilcoxon s rank sum test. The CVaEA shows a significant improvement over CMOEAD/D on the C-SDTLZ problem, but its performance deteriorates when compared against the CNSGAIII. For problems C-SDTLZConvex and C4-SDTLZ4, the proposed algorithm is comparable to the well-known CNSGAI- II, however, it shows a clear superiority over CMOEAD/D on these two problems. Table Median and IQR (in brackets) of IGD metric for the three-objective test problems. The best and the second best results are shown with dark and light gray background, respectively. CVaEA CNSGAIII CMOEAD/D C-SDTLZ 7.8E (.E ).E + (.E ) 7.799E + (8.E + ) C-SDTLZ3.7E + (.8E + ).4E + (4.4E + ).67E + (3.3E ) C-SDTLZ.39E + (.9E ).89E + (4.8E ).983E + (.3E ) C-SDTLZConvex.E + (8.9E ).68E + (.9E ).88E + (7.6E ) C3-SDTLZ.436E + (.E ).E + (3.4E ).887E + (8.4E ) C3-SDTLZ4 3.E + (.8E ) 3.3E + (.E ) 4.34E + (.6E + ) indicates that the peer algorithm is significantly worse than CVaEA with a level of significance. by the Wilcoxon s rank sum test, while indicates the opposite. They are the same in Tables 6-9. Table 6 shows the results on the five-objective test problems. It can be seen that CVaEA performs best, presenting a clear advantage over other two algorithms on the majority of the test problems. The CVaEA significantly outperforms CNSGAIII and CMOEAD/D on almost all the test problems. As a special case, the proposed algorithm is defeated by CNSGAIII only on the C-DTLZConvex problem. Table 6 Median and IQR (in brackets) of IGD metric for the five-objective test problems. The best and the second best results are shown with dark and light gray background, respectively. CVaEA CNSGAIII CMOEAD/D C-SDTLZ.349E + (.E + ).7E + (7.8E + ).64E + 4 (4.E + 4) C-SDTLZ3.4E + 3 (4.3E + ).6E + 3 (8.4E + ) 3.E + 3 (7.4E + ) C-SDTLZ.E + (.E + ).77E + (.4E + ) 3.34E + 3 (.E 3) C-SDTLZConvex.67E + (.4E + ).849E + (3.7E + ).98E + 3 (3.E ) C3-SDTLZ.73E + (3.6E + ) 3.3E + (4.7E + ).E + 3 (.7E ) C3-SDTLZ4.8E + (4.E + ) 4.73E + (3.E + ) 4.894E + 3 (6.3E + ) Results on the eight- and ten-objective problems are presented in Tables 7 and 8, respectively. The CVaEA significantly outperforms CMOEAD/D on all the test problems for both M = 8 and M =. From the Wilcoxon test results, we can see that the proposed algorithm shows an obvious improvement over CNSGAIII on the majority of the test instances, except for the eight- and ten-objective C-SDTLZConvex, and the ten-objective C-SDTLZ3 problem. For C-SDTLZ3 with eight objectives, the difference between CVaEA and CNSGAIII is negligible. Table 9 lists the IGD results on the fifteen-objective test problems. It can be found that CVaEA performs relatively worse than CNSGAIII and CMOEAD/D on the C- SDTLZ3 and C3-SDTLZ problems. For the C-SDTLZConvex problem, CVaEA and CNSGAIII achieve similar performance. For all the other pairwise comparisons

4 Y. Xiang et al. Table 7 Median and IQR (in brackets) of IGD metric for the eight-objective test problems. The best and the second best results are shown with dark and light gray background, respectively. CVaEA CNSGAIII CMOEAD/D C-SDTLZ 3.378E + (.E + ).737E + (3.8E + ).9E + 4 (.E + 4) C-SDTLZ3 4.74E + (.E + ) 3.897E + (.6E + ).89E + (.7E + ) C-SDTLZ.63E + (6.E + ) 8.63E + (.E + ).7E + (.E + ) C-SDTLZConvex.6E + (8.3E + ) 6.67E + (7.7E + ) 3.9E + (7.8E ) C3-SDTLZ 4.96E + (7.E + ) 7.796E + (3.4E + ) 3.37E + (4.8E ) C3-SDTLZ4.E + (.3E + ).698E + (.E + ) 8.397E + (.7E + ) Table 8 Median and IQR (in brackets) of IGD metric for the ten-objective test problems. The best and the second best results are shown with dark and light gray background, respectively. CVaEA CNSGAIII CMOEAD/D C-SDTLZ 8.33E + (.7E + ).48E + (4.3E ) 8.39E + 3 (3.E + 3) C-SDTLZ3.68E + 3 (7.4E + ).96E + 3 (3.E + ) 4.63E + 3 (6.3E + ) C-SDTLZ 3.63E + (4.9E + ) 4.9E + (6.6E + ) 4.38E + 3 (6.6E ) C-SDTLZConvex 6.66E + (3.8E + ) 4.78E + (.E + ).996E + 3 (4.6E ) C3-SDTLZ.446E + (.8E + ).7E + (6.E + ) 7.34E + (4.E ) C3-SDTLZ4 6.8E + (.4E + ).387E + 3 (.6E + ) 6.33E + 3 (.8E + ) between CVaEA and CNSGAIII (or CMOEAD/D), our proposed algorithm presents significant better results than its competitors. Table 9 Median and IQR (in brackets) of IGD metric for the fifteen-objective test problems. The best and the second best results are shown with dark and light gray background, respectively. CVaEA CNSGAIII CMOEAD/D C-SDTLZ 7.78E (6.3E ) 9.7E (3.4E 3) 4.46E + (.3E + ) C-SDTLZ3 4.636E + 4 (4.E + 4).43E + 3 (4.8E + ) 3.49E + 3 (.E + ) C-SDTLZ 7.E + (.E + ) 7.8E + (.9E + ).84E + 3 (.6E + ) C-SDTLZConvex 8.94E + (7.3E + ) 6.66E + (.E + ).E + 3 (.E ) C3-SDTLZ.4E + (.E + ).64E + (.6E ).93E + (.7E ) C3-SDTLZ4 9.33E + (.E + ).873E + 3 (.6E + ) 4.636E + 3 (4.E ) We summarize the results of the pairwise comparisons in Table, where nb, ne and nw denote the number of test instances in which CVaEA shows better, equal and worse performance than the peer algorithms, respectively. Specifically, the proportion of the test instances where CVaEA performs better than CNSGAIII and CMOEAD/D is 8/3 and 7/3, respectively. Conversely, the proportion that CVaEA is defeated by the peer algorithms is 7/3 and /3, respectively. Table Summary of the pairwise comparison CVaEA v.s. CNSGAIII CMOEAD/D nb 8/3 7/3 ne /3 /3 nw 7/3 /3 In order to present the distribution of solutions visually, Fig. plots the final solutions of one run with respect to three-objective problems. This run is associated with the particular run that obtains the closest result to the median value of IGD. For C-SDTLZ, as shown in Fig. (a), (b) and (c), the solutions obtained by CVaEA and CNSGAIII cover the whole Pareto front well, while those found by CMOEAD/D

seem to concentrate in only a small part of the optimal front. Hence, for CMOEA/DD, the IGD value naturally increases. It can be seen from Fig. 2(d), (e) and (f) that all three algorithms have difficulty in solving the C-SDTLZ3 problem. The quality of the approximation sets is not satisfactory in terms of either convergence or diversity. Relatively speaking, the front found by CMOEA/DD has better convergence than those obtained by CVaEA and CNSGAIII. The solutions of CVaEA and CNSGAIII are distributed similarly. C-SDTLZ3 is a very hard problem for many evolutionary algorithms [3], because it introduces a band of infeasible region around the Pareto-optimal front that is difficult for an algorithm to overcome.

Fig. 2 The final solution set of the three algorithms on the three-objective problems (C-SDTLZ and C-SDTLZ3): (a)-(c) C-SDTLZ obtained by CVaEA, CNSGAIII and CMOEA/DD; (d)-(f) C-SDTLZ3 obtained by CVaEA, CNSGAIII and CMOEA/DD.

Fig. 3 plots, by parallel coordinates, the final solutions for the eight-objective C-SDTLZ and the fifteen-objective C3-SDTLZ4. In these figures, the objectives are divided by the scaling factor r^i to get identical ranges for each objective, which is helpful for a better presentation of the distribution of solutions. For the eight-objective C-SDTLZ, as shown in Fig. 3(c), the solutions obtained by CMOEA/DD have the worst convergence, while those found by CVaEA and CNSGAIII converge similarly but differ in terms of distribution [see Fig. 3(a) and (b)]. It seems that the solutions of CNSGAIII are distributed more uniformly than those of CVaEA; however, CNSGAIII finds repeated values for each objective, which causes information redundancy [8]. Similar observations are made on the fifteen-objective C3-SDTLZ4 problem. It can be seen that our proposed algorithm is able to obtain a set of well converged and appropriately distributed solutions [Fig. 3(d)], while CNSGAIII still struggles to find enough different values for each objective to cover the optimal front well [Fig. 3(e)]. For CMOEA/DD, some objectives, e.g., from the 9th objective onward, are not well covered by the solutions [Fig. 3(f)].
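For readers who wish to reproduce this kind of plot, the short sketch below (ours, not from the paper) divides each objective by its scaling factor r^i and draws a basic parallel-coordinates view with matplotlib; the array F_scaled and the value of r are placeholders.

import numpy as np
import matplotlib.pyplot as plt

def parallel_coordinates(F_scaled, r):
    # F_scaled: (n_solutions, M) array of scaled objective values.
    M = F_scaled.shape[1]
    factors = np.array([r**(i + 1) for i in range(M)])
    F = F_scaled / factors            # back to an identical range per objective
    for row in F:
        plt.plot(range(1, M + 1), row, color="gray", alpha=0.5)
    plt.xlabel("Objective No.")
    plt.ylabel("Objective value (rescaled)")
    plt.xticks(range(1, M + 1))
    plt.show()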

Fig. 3 The final solution set of the three algorithms on the eight-objective C-SDTLZ and the fifteen-objective C3-SDTLZ4, shown by parallel coordinates: (a)-(c) C-SDTLZ obtained by CVaEA, CNSGAIII and CMOEA/DD; (d)-(f) C3-SDTLZ4 obtained by CVaEA, CNSGAIII and CMOEA/DD.

3.5 The Effect of the Parameter α

In CVaEA, a parameter, the infeasible portion α, is used to control the ratio of infeasible solutions that are preferentially added into the population during the environmental selection. This section investigates the effect of α. Here, we show the results for the ten- and fifteen-objective problems. Similar results can be obtained for test problems with other numbers of objectives.

Fig. 4 The curve of the average rankings (IGD) for M = 10 and M = 15 when α varies from 0.1 to 1.0 with a step size of 0.1.

To study the sensitivity of our algorithm to α, we repeat the experiments conducted in the previous section for α ∈ [0.1, 1.0] with a step size of 0.1.

All the other parameters are kept the same as described in the previous section. For each value of α, the average ranking obtained by applying the Friedman test [] is used to measure the performance of the algorithm. Fig. 4 shows the resulting average rankings for the ten- and fifteen-objective problems. It is clear from the figure that the average rankings, in general, tend to increase as α becomes larger, meaning that CVaEA seems to prefer smaller values of α. Compared with larger settings, the algorithm achieves better average rankings when α takes a small value, indicating that the inclusion of a certain small portion of infeasible solutions really contributes to the performance improvement of the algorithm. It seems that better settings of α can be found at the lower end of the tested interval. To carry out a fine tuning of the parameter, the above experiment is repeated by changing the value of α from 0.01 to 0.1 with a step size of 0.01. The curve of average rankings is shown in Fig. 5, from which the best values of α for the ten- and fifteen-objective problems can be read off. Considering the overall performance, a small value of α within this fine-tuned range is suggested for an unknown optimization problem.

Fig. 5 The curve of the average rankings (IGD) for M = 10 and M = 15 when α varies from 0.01 to 0.1 with a step size of 0.01.

4 Practical Application of CVaEA

Having shown the ability of CVaEA to solve various kinds of constrained test problems, we now apply CVaEA to an engineering constrained optimization problem, the Water problem, which has five objectives and seven constraints.

4.1 A Brief Introduction to the Problem

This problem is taken from [], [3]. It has three decision variables x_1, x_2 and x_3.

All the variables have a lower bound of 0.01; the upper bound is 0.45 for x_1, and 0.10 for x_2 and x_3. The five objective functions, all to be minimized, are given as follows:

   f_1(x) = 106780.37 (x_2 + x_3) + 61704.67,
   f_2(x) = 3000 x_1,
   f_3(x) = 305700 * 2289 x_2 / (0.06 * 2289)^0.65,
   f_4(x) = 250 * 2289 exp(-39.75 x_2 + 9.9 x_3 + 2.74),
   f_5(x) = 25 (1.39/(x_1 x_2) + 4940 x_3 - 80).          (7)

The seven constraints are formulated as below:

   g_1(x) = 0.00139/(x_1 x_2) + 4.94 x_3 - 0.08 <= 1,
   g_2(x) = 0.000306/(x_1 x_2) + 1.082 x_3 - 0.00986 <= 1,
   g_3(x) = 12.307/(x_1 x_2) + 49408.24 x_3 + 4051.02 <= 50000,
   g_4(x) = 2.098/(x_1 x_2) + 8046.33 x_3 - 696.71 <= 16000,
   g_5(x) = 2.138/(x_1 x_2) + 7883.39 x_3 - 705.04 <= 10000,
   g_6(x) = 0.417 (x_1 x_2) + 1721.26 x_3 - 136.54 <= 2000,
   g_7(x) = 0.164/(x_1 x_2) + 631.13 x_3 - 54.48 <= 550.          (8)

4.2 Results on the Water Problem

To measure the distribution of the obtained solutions, we here introduce a new metric named the generalized spread (denoted as SPREAD), which is defined as follows [36]:

   SPREAD(P) = ( \sum_{i=1}^{M} dist(e_i, P) + \sum_{z ∈ P} |dist(z, P) - \bar{d}| ) / ( \sum_{i=1}^{M} dist(e_i, P) + |P| \bar{d} ),          (9)

where P is a set of solutions, P* is the set of Pareto-optimal solutions, e_1, e_2, ..., e_M are the M extreme solutions in P*, and

   dist(z, P) = min_{y ∈ P, y ≠ z} ||F(z) - F(y)||,          (10)

   \bar{d} = (1/|P|) \sum_{x ∈ P} dist(x, P).          (11)

Another metric, the generational distance (GD), is introduced to measure how far the points in the approximation set are from those in the optimal Pareto front []. It is a measurement of the convergence of the obtained solutions. The experimental results for all the metrics (SPREAD, GD and IGD) are tabulated in Table 11. Note that when calculating the above metrics, a set of reference points approximating the true Pareto front is needed. For the Water problem, the reference points are generated by combining all non-dominated solutions found by all the algorithms over all the runs.
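To make the metric definitions concrete, here is a small NumPy sketch (ours, for illustration) of the generalized spread in Eqs. (9)-(11); A is the approximation set, PF a sampling of the true front, and taking the extreme solutions as the objective-wise maxima of PF is one common convention and an assumption on our part.

import numpy as np

def nearest_dist(z, S, exclude_self=False):
    # dist(z, S): Euclidean distance from z to its nearest (other) member of S.
    d = np.linalg.norm(S - z, axis=1)
    if exclude_self and np.any(d > 0.0):
        d = d[d > 0.0]
    return d.min()

def generalized_spread(A, PF):
    # A: (n, M) approximation set; PF: (m, M) reference front.
    M = PF.shape[1]
    extremes = np.array([PF[np.argmax(PF[:, i])] for i in range(M)])  # assumed convention
    d_ext = sum(nearest_dist(e, A) for e in extremes)
    d = np.array([nearest_dist(a, A, exclude_self=True) for a in A])
    d_bar = d.mean()
    num = d_ext + np.abs(d - d_bar).sum()
    den = d_ext + len(A) * d_bar
    return num / den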

Fig. 6 Scatter matrix plot showing the results of CVaEA and CNSGAIII (top-right plots) and the optimal front (bottom-left plots) on the Water problem.

Table 11 Results (median and IQR) of CVaEA, CNSGAIII and CMOEA/DD on the Water problem.
         CVaEA                 CNSGAIII              CMOEA/DD
SPREAD   3.4E (.8E )           6.49E (.E )           4.889E (6.E )
GD       4.94E 3 (6.3E 4)      3.746E 3 (6.7E 4)     4.894E (8.E 3)
IGD      3.6E + 4 (7.6E + 3)   3.4E + 4 (6.E + 3)    7.73E + 4 (6.9E + 3)

As shown in Table 11, CVaEA performs best in terms of the SPREAD metric, followed by CMOEA/DD. By contrast, CNSGAIII gives relatively poor performance with respect to the distribution of the solutions. However, as demonstrated by the GD indicator, CNSGAIII converges to the optimal Pareto front very well, greatly outperforming the CMOEA/DD algorithm. Compared with CNSGAIII, our proposed CVaEA obtains similar convergence. Finally, as shown by the IGD results, CVaEA is the most competitive algorithm, showing its ability to keep a good balance between convergence and diversity.

Figs. 6 and 7 show the solutions of all three algorithms in scatter matrix plots. In these figures, the lower-left plots present the optimal Pareto front, while the upper-right plots are for the constrained algorithms. For the convenience of comparison, the (i, j)th (i > j) plot should be compared with the one in position (j, i).

It is shown in Fig. 6 that the solutions obtained by CVaEA are widely distributed over the entire Pareto-optimal front, achieving a better distribution than those found by CNSGAIII. It is observed in some plots, e.g., those involving f_4, that the solutions of CNSGAIII do not necessarily cover the entire optimal front. Fig. 7 presents the comparison of the solutions obtained by CVaEA and CMOEA/DD. Clearly, the solutions of CVaEA converge towards the Pareto-optimal front significantly better than those of CMOEA/DD. In some plots, extremely poorly converged solutions are found by CMOEA/DD. This phenomenon may be attributed to the mechanism adopted in CMOEA/DD whereby isolated solutions are always kept for the promotion of diversity; however, these solutions may be far away from the optimal front. Hence, CMOEA/DD improves the solution distribution at the risk of harming convergence.

Fig. 7 Scatter matrix plot showing the results of CVaEA and CMOEA/DD (top-right plots) and the optimal front (bottom-left plots) on the Water problem.

To compare the running speed of the algorithms, we record the actual running time of each run of each algorithm in milliseconds on the same platform (Intel(R) Core(TM) CPU, 8 GB RAM). The median runtime over the runs is 6.3E+4, 7.397E+4 and .88E+4 for CVaEA, CNSGAIII and CMOEA/DD, respectively. Fig. 8 shows an intuitive comparison of the runtime, where the time of each algorithm is normalized by dividing by the time of CMOEA/DD. Obviously, CMOEA/DD is the fastest algorithm, followed by CVaEA, and finally CNSGAIII.

The runtime of CVaEA is clearly smaller than that of CNSGAIII. CMOEA/DD is faster than both CVaEA and CNSGAIII, which may be due to the fact that CMOEA/DD does not normalize the population and also uses an efficient method to update the non-domination level structure []. Although CMOEA/DD is efficient, it is not as effective in terms of the quality of the obtained solutions, especially their convergence.

Fig. 8 The comparison of runtime on the Water problem (runtime normalized by that of CMOEA/DD).

Finally, we conclude that our proposed CVaEA would be the best choice when handling engineering problems such as the Water problem, mainly because of its ability to efficiently find a set of well converged and properly distributed solutions.

5 Conclusion

In our previous study, by using the concept of vector angles, VaEA was proposed for dealing with unconstrained many-objective optimization problems. In this paper, we extend VaEA to CVaEA by modifying the mating and environmental selection processes so as to handle constraints. In CVaEA, the information provided by infeasible solutions is sufficiently utilized, and the algorithm puts more emphasis on feasible solutions over infeasible ones, and on smaller-CV solutions over larger-CV ones. The balance between convergence and diversity is achieved by two principles: the maximum-vector-angle-first principle and the modified worse-elimination principle.

To test the performance of our proposed CVaEA, a set of new constrained many-objective test problems is designed by multiplying each objective of the constrained DTLZ problems by a different factor. Thus, the ranges of values of the objectives differ, which reflects real situations more closely because the objectives of a practical engineering problem are usually distributed in different ranges. The simulation results of CVaEA, together with those of CNSGAIII and CMOEA/DD, on six test problems with up to 15 objectives have shown the superiority of our proposed method in finding a set of well converged solutions while maintaining an extensive distribution among them.