MULTIOBJECTIVE OPTIMIZATION ALGORITHM BENCHMARKING AND DESIGN UNDER PARAMETER UNCERTAINTY


MULTIOBJECTIVE OPTIMIZATION ALGORITHM BENCHMARKING AND DESIGN UNDER PARAMETER UNCERTAINTY

by

NICOLAS LALONDE

A thesis submitted to the Department of Mechanical and Materials Engineering in conformity with the requirements for the degree of Master of Science (Engineering)

Queen's University
Kingston, Ontario, Canada
August, 2009

Copyright Nicolas Lalonde, 2009

Abstract

This research aims to improve our understanding of multiobjective optimization by comparing the performance of five multiobjective optimization algorithms and by proposing a new formulation to consider input uncertainty in multiobjective optimization problems. Four deterministic multiobjective optimization algorithms and one probabilistic algorithm were compared: the Weighted Sum, Adaptive Weighted Sum, Normal Constraint, and Normal Boundary Intersection methods, and the Nondominated Sorting Genetic Algorithm-II (NSGA-II). The algorithms were compared using six test problems, which covered a wide range of optimization problem types (bounded vs. unbounded, constrained vs. unconstrained). The performance metrics used for quantitative comparison were the total run (CPU) time, number of function evaluations, variance in solution distribution, and numbers of dominated and non-optimal solutions. Graphical representations of the resulting Pareto fronts were also presented. No single method outperformed the others for all performance metrics, and the two classes of algorithms were effective for different types of problems. NSGA-II did not effectively solve problems involving unbounded design variables or equality constraints. On the other hand, the deterministic algorithms could not solve a problem with a non-continuous objective function. In the second phase of this research, design under uncertainty was considered in multiobjective optimization. The effects of input uncertainty on a Pareto front were quantitatively investigated by developing a multiobjective robust optimization framework. Two possible effects on a Pareto front were identified: a shift away from the Utopia point, and a shrinking of the Pareto curve. A set of Pareto fronts was obtained in which the optimum solutions have different levels of insensitivity or robustness.

Four test problems were used to examine the Pareto front changes. Increasing the insensitivity requirement of the objective functions with regard to input variations moved the Pareto front away from the Utopia point or reduced the length of the Pareto front. These changes were quantified, and the effects of changing robustness requirements were discussed. The approach would provide designers not only with the choice of optimal solutions on a Pareto front, as in traditional multiobjective optimization, but also with an additional choice of a suitable Pareto front according to the acceptable level of performance variation.

Acknowledgments

I would like to profoundly thank my supervisor, Dr. Il Yong Kim, for his continuous support and commitment during my research. His knowledge of optimization was greatly relied upon, and his patience with my many questions was greatly appreciated. Thank you also to the members of the SMSD group at Queen's University for their support. I would like to thank Ryan Willing for his help during nearly all phases of my project, and for helping me prepare my conference and journal papers. I would also like to thank my parents, brothers, and friends for their support and encouragement, which made my many weekend trips to Montreal a relaxing endeavor. Lastly, this research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC), a Senator Frank Carrel Fellowship, research and teaching assistantships from the Queen's University Department of Mechanical and Materials Engineering, and a graduate entrance tuition award.

Table of Contents

Abstract
Acknowledgments
Table of Contents
List of Tables
List of Figures and Illustrations
Nomenclature
Chapter 1 Introduction
  1.1 Motivation
  1.2 Objectives
Chapter 2 Background and Literature Survey
  2.1 Single-Objective Optimization
  2.2 Multiobjective Optimization
  2.3 Robust Design
    2.3.1 Taguchi Method
    2.3.2 Single-Objective Optimization Techniques
    2.3.3 Multiobjective Optimization Techniques
    2.3.4 Reliability-Based Design Optimization
    2.3.5 Evaluating Objective Functions
Chapter 3 MOO Algorithms
  3.1 Weighted Sum (WS) Method
  3.2 Adaptive Weighted Sum (AWS) Method
  3.3 Normal Constraint (NC) Method
  3.4 Normal Boundary Intersection (NBI) Method
  3.5 Nondominated Sorting Genetic Algorithm (NSGA)-II
Chapter 4 MOO Comparison
  4.1 Test Problems
    4.1.1 Problem 1
    4.1.2 Problem 2
    4.1.3 Problem 3
    4.1.4 Problem 4
    4.1.5 Problem 5
    4.1.6 Problem 6
  4.2 Numerical Results
    4.2.1 Problem 1
    4.2.2 Problem 2
    4.2.3 Problem 3
    4.2.4 Problem 4
    4.2.5 Problem 5
    4.2.6 Problem 6
  4.3 Discussion
    4.3.1 WS
    4.3.2 AWS
    4.3.3 NC
    4.3.4 NBI
    4.3.5 NSGA-II
Chapter 5 Multiobjective Robust Optimization Considering Pareto Front Changes due to Uncertainty
  5.1 Fundamental Concepts
  5.2 Method Overview
  5.3 Step-by-Step Procedure
  5.4 Numerical Examples
    5.4.1 Example 1
    5.4.2 Example 2
    5.4.3 Example 3
    5.4.4 Example 4
  5.5 Discussion
Chapter 6 Conclusions and Recommendations
  6.1 Conclusions
  6.2 Recommendations
References
Appendix A Handling Large Number of Function Evaluations
  A.1 Design of Experiments

List of Tables

Table 4-1: Numerical results for Problem 1. The five methods are compared in terms of their run times, numbers of function evaluations, variances of solution distribution, and numbers of dominated and non-optimal solutions.
Table 4-2: Numerical results for Problem 2. The initial condition grid setup is not applicable to the genetic algorithm. For NSGA-II, the first value represents the mean value obtained from multiple runs, and the standard deviation is shown in parentheses.
Table 4-3: Numerical results for Problem 3. For NSGA-II, the first value represents the mean value obtained from multiple runs, and the standard deviation is shown in parentheses. The NBI set contains no dominated solutions with respect to its own solutions, but one of the solutions is dominated if compared to the true Pareto front.
Table 4-4: Numerical results for Problem 4. For NSGA-II, the first value represents the mean value obtained from multiple runs, and the standard deviation is shown in parentheses.
Table 4-5: Numerical results for Problem 5. The first value represents the mean value obtained from multiple runs, and the standard deviation is shown in parentheses. None of the gradient-based algorithms could adequately solve the problem; their results are not shown here.
Table 4-6: Numerical results for Problem 6. Due to the long computing time, only 6 NSGA-II analyses were conducted. For NSGA-II, the first value represents the mean value, and the standard deviation is shown in parentheses.
Table A-1: Number of function evaluations for each method depending on the number of design variables.
Table A-2: One set of results from the comparison of the four methods proposed to deal with the large number of function evaluations against a full factorial. The values shown represent the % difference from the value found by full (1) factorial analysis, using [-3, -3] as the design vector in the 2nd problem evaluated in [6], with ±0.1 for both upper and lower variation. The problem contains no constraints.

Table A-3: The L9(3^4) array as commonly used in robust design, where Exp. is the experiment number; A, B, C, D represent variables; W is the experiment value; and 1, 2, 3 represent the different levels. Adapted and modified from [72].

List of Figures and Illustrations

Figure 2-1: Example of the sensitivity of gradient-based methods to the initial conditions used. Starting at (1), the local optimum (3) is obtained. Otherwise, starting at (4), the global optimum is obtained (5).
Figure 2-2: Solutions 1 through 4 are Pareto optimal solutions, since they satisfy two requirements: nondominated and feasible.
Figure 2-3: Anchor points, Utopia point, Utopia line, and Pareto front in the bi-objective space.
Figure 2-4: In deterministic optimization, we seek x_optimum. In robust design, x_robust is sought.
Figure 3-1: Trigonometric Linear Combination method. (1) Coordinate rotation of 0°, (2, 3) rotation of θ, (4) rotation of 90°. The dotted line represents a translation of the J_2 axis, minimizing the J_1 value.
Figure 3-2: AWS method in the bi-objective case. Optimization is performed only in the regions where refinement is needed. Inequality constraints, parallel to the objective axes, are used.
Figure 3-3: NC method, shown in the normalized bi-objective space. An inequality constraint restricts the feasible region, and the second objective function is minimized.
Figure 3-4: Concept behind the NBI method, shown in the normalized bi-objective space. An equality constraint restricts the feasible region to lie on a line perpendicular to the Utopia line.
Figure 3-5: In NSGA-II, the individuals are sorted into nondominated fronts to determine the fitness value, instead of using the objective function value. All individuals in the same front have the same fitness value. This eliminates the need for fitness evaluation when the problem contains more than one objective function.
Figure 3-6: Calculating the crowding distance of the i-th point within the same front.
Figure 4-1: Koski's three-bar truss problem used as Problem 3. The tip deflection and volume of the trusses were minimized.

Figure 4-2: (Left) First phase, showing the punch and die setup. The energy required to bend the sheet metal was extracted from the model and used as the first objective function. (Right) Second phase, for the tip deflection calculation. The stress distribution from the first-phase model was re-applied to the model. The tip deflection was used as the second objective function.
Figure 4-3: Pareto front representations for Problem 1. All solutions shown lie on the Pareto front, except those by NSGA-II. The problem type is minimize-minimize.
Figure 4-4: Poor representation of the Pareto front of Problem 2 when an arbitrary design vector is used.
Figure 4-5: Grid size of 2 (left) and 1 (right) for initial design vectors in Problem 2. The lower and upper bounds are x_L = -3 and x_U = 3 for x_1 and x_2.
Figure 4-6: Pareto front representations for Problem 2. The WS, AWS, and NC used a grid size of 2. NBI (1) used a grid size of 2, and NBI (2) used a grid size of 1.5. Only one set of NSGA-II solutions is shown; other NSGA-II results show a similar trend. The problem type is maximize-maximize. The set of light-grey points represents the entire set of feasible solutions in the objective space.
Figure 4-7: Pareto front representations for Problem 3. Fifteen points were obtained by the AWS, WS, NBI, and NC methods, and 1 by the NSGA-II method. The set of light-grey points represents the entire feasible domain in the objective space. The problem type is minimize-minimize.
Figure 4-8: Pareto front representations for Problem 4. The true anchor points in all plots are (2, 0) and (0, 1).
Figure 4-9: Sample results for Problem 5 with the gradient-based algorithms.
Figure 4-10: Pareto front representation for Problem 5. Only NSGA-II solved the problem properly. The problem type is minimize-minimize.
Figure 4-11: Pareto front representations for Problem 6. Each algorithm was tested with multiple initial design vectors, but only one of the converged solutions is presented here.
Figure 4-12: Performance results comparison for all methods in all problems. The height represents the relative performance of the method compared to the other methods for all performance metrics, where the highest bar represents the worst performance.

The numbers of dominated solutions and non-optimal solutions were combined into one.
Figure 5-1: Three possible modes of Pareto front and feasible domain changes in a bi-objective case.
Figure 5-2: Quantification of the two modes of Pareto front changes: Pareto front shift and Pareto front shortening.
Figure 5-3: Feasible domain for Example 1 with ±1 as design variable uncertainty. Blue color represents solutions with low variance, and red denotes solutions with high variance.
Figure 5-4: Pareto fronts and feasible domains with four different robustness levels for Example 1. The set of light-grey solutions represents the entire feasible domain.
Figure 5-5: Superimposed Pareto fronts and Pareto front length ratio (R_X%) according to the robustness level for Example 1.
Figure 5-6: Feasible domain for Example 2 with ±0.5 as design variable uncertainty. Blue color represents solutions with low variance, and red denotes solutions with high variance.
Figure 5-7: Pareto fronts and feasible domains with four different robustness levels for Example 2. The set of light-grey solutions represents the entire feasible domain.
Figure 5-8: Superimposed Pareto fronts and Pareto front shift distance (S_X%) according to the robustness level for Example 2.
Figure 5-9: Three-bar truss problem [77]. The cross-sectional area of each truss member is optimized to minimize tip deflection and total volume.
Figure 5-10: Feasible domain for Example 3 with +0.1 and -0.5 as design variable uncertainty. Blue color represents solutions with low variance, and red denotes solutions with high variance.
Figure 5-11: Pareto fronts and feasible domains with four different robustness levels for Example 3. The set of light-grey solutions represents the entire feasible domain.
Figure 5-12: Superimposed Pareto fronts and Pareto front length ratio (R_X%) according to the robustness level for Example 3.

Figure 5-13: The finite element analysis model and the schematic of a cross section for Example 4. The small bearing end is fixed in all degrees of freedom, while axial and transverse loads were applied at the larger bearing.
Figure 5-14: Feasible domain for Example 4 with design variable uncertainty. Blue color represents solutions with low variance, and red denotes solutions with high variance.
Figure 5-15: Pareto fronts and feasible domains with four different robustness levels for Example 4. The set of light-grey solutions represents the entire feasible domain.
Figure 5-16: (a) Superimposed Pareto fronts and Pareto front length ratio (R_X%) according to the robustness level for Example 4.
Figure A-1: Effects of variations in design variables on the objective space. (Left) Design variable space. (Right) Bi-objective space. The maximum objective function values may not necessarily be found at the upper and lower bounds, as shown in (b).

Nomenclature

δJ: Offset distance (AWS method)
ε: Distance metric (AWS method)
η_c: Crossover probability (NSGA-II)
η_m: Mutation probability (NSGA-II)
μ_j(x, p), μ_j: Mean value of the j-th objective function
σ_j(x, p), σ_j: Standard deviation of the j-th objective function
σ(x): Summed (normalized) standard deviation used as constraint
σ_min, σ_max: Minimum and maximum summed (normalized) standard deviation
Δθ_allowed: Allowed spring-back angle deviation from 90° (Problem 6)
Δx_i: Deviation on design variable x_i
C: Refinement parameter (AWS method)
g(x), g(x, p): Inequality constraints
h(x), h(x, p): Equality constraints
J(x), J(x, p): Objective function
J(x), J(x, p): Objective function vector
J̄(x): Normalized objective function vector
L_X%: Length of reduced Pareto front, normalized space
L_100%: Length of optimal Pareto front, normalized space
n_i: Number of initial WS solutions (AWS method)
p: Vector of additional parameters
R_X%: Quantitative metric, Pareto front length ratio

S_X%: Quantitative metric, Pareto front shift distance
x: Design vector
x_i,L: i-th design variable lower bound
x_i,H: i-th design variable upper bound

AWS: Adaptive Weighted Sum
DoE: Design of Experiments
CHIM: Convex Hull of Individual Minima
MINSOOP: Minimization of No. of Single Objective Optimization Problems algorithm
NBI: Normal Boundary Intersection
NC: Normal Constraint
NSGA-II: Nondominated Sorting Genetic Algorithm II
RBDO: Reliability-Based Design Optimization
RDO: Robust Design Optimization
SQP: Sequential Quadratic Programming
WS: Weighted Sum

Chapter 1 Introduction

Optimization seeks to maximize the performance of a system, part, or component while satisfying design constraints. One common form of optimization is trial and error, and it is used every day: we make decisions, observe the results, and change future actions depending on the success of those decisions. When performing optimization, we wish to minimize (or maximize) the system response while considering both design variables and design constraints. Design variables are variables the designer or engineer can freely choose, for example the thickness of a wall, the material chosen, and the width of a part. The resulting stress, deflection, volume, natural frequency, and other typical performance measures are often considered either as objective functions or as constraints. Objective functions are the system responses that we wish to minimize, while constraints are limits that we impose on the system. If a set of design variables produces a solution that violates any of the constraints, that solution is considered infeasible. The focus of this research is multiobjective optimization. In single-objective optimization, we seek a set of design variables that yields the best-performing system. Multiobjective optimization, on the other hand, considers problems that have more than one objective function; a typical example is a bridge design, where we wish to minimize mass and maximize stiffness. Because there often does not exist a single set of design variables that minimizes all objective functions, a set of optimum solutions is sought. The solution set is often represented as a curve (surface or hyper-surface) in the objective space, and is often called a Pareto front. The concept of a Pareto front was initially proposed by the Italian economist Vilfredo Pareto. Because a designer or

decision-maker usually works with a number of objective functions simultaneously, it is essential to have an effective tool that can determine the complete trade-off curve for a given problem.

1.1 Motivation

This research focused on two main areas of multiobjective optimization that were found to be lacking in the literature. First, no clear comparison had been made between different algorithms and different classes of algorithms using quantitative performance measures. New designers wishing to use multiobjective optimization techniques are flooded with available solving methods, each claiming to be the superior algorithm. New algorithms, when proposed, were compared against other algorithms using problems that could be solved easily by their own methods, and superiority was subsequently claimed. There is a wide variety of multiobjective optimization algorithms published in the literature. Many methods produce a set of Pareto solutions that demonstrate the trade-off between the objective functions. The Pareto curve, formed from the Pareto solutions, cannot be properly represented unless well-spread solutions are obtained. Not all methods can effectively and efficiently obtain well-spread Pareto solutions. Knowing the available trade-off, a designer can make a decision on the best design available. The goal of multiobjective optimization is hence to give designers a choice between designs, where none is mathematically dominant over the others. It can also be used after the component is produced if the design variables can be altered, such as changing the voltage in an electrical component. Other methods produce a single Pareto solution, such as the goal programming method, where the user's preferences are considered beforehand. These methods do not produce a Pareto front, and

were not considered in this research; only autonomous Pareto front generating methods were considered. The methods compared all have known strengths and drawbacks. The Weighted Sum method, for example, is one of the most widely used methods, and is a linear combination of the objective functions. However, it fundamentally cannot represent non-convex regions, and it has been found unable to produce well-distributed solutions. More advanced methods, such as the Adaptive Weighted Sum method and multiobjective genetic algorithms, were created by examining the weaknesses of previous methods and proposing improvements. The first part of this work was motivated by the lack of a clear comparison between the algorithms. As will be shown, some methods are useful only if certain conditions are met. For example, genetic algorithms are considered unconstrained random algorithms, and cannot handle equality constraints or unbounded design variables. Some methods are available that can circumvent both of these restrictions, but they are often problem-specific. Other methods, such as those using the gradient in the local space, require the objective functions to be continuous and continuously differentiable. Second, the effects of parameter variation on the solution set of multiobjective problems were considered. These uncertainties are commonly caused by manufacturing tolerances, changes in environmental settings, material properties, and wear. Robust design aims to yield a solution at which the system is the most insensitive to input variations. However, a clear distinction must be made between a robust solution and a performing solution. A robust solution is insensitive to noise, whereas the typical result of multiobjective optimization is a performing solution, ignoring uncertainty. Modern applications and methods of robust design attempt to consider this problem, but most can only deal with problems that have a single objective function. A common robust single-objective optimization method doubles the number of objective functions, and is not easily

applicable to multiobjective problems. Very few methods exist that can deal with multiobjective optimization problems efficiently. Reliability-based design optimization (RBDO) is a well-researched method that considers uncertainty. The goal of RBDO is to obtain a reliable solution: a reliable design is one where the chance of any constraint being violated is low. We assign a probability of failure to the system, and the final solutions must satisfy that reliability. However, the method considers the effects of uncertainty on the constraints, whereas robust design considers their effects on the objective functions. Some research has been done on the effects of uncertainty on the Pareto curve. The effect of uncertainty on the Pareto front from the viewpoint of the constraints, using RBDO, has been well researched in the past. However, from the viewpoint of robust design, it has not; very little research has been done in this field, and an efficient and general approach to consider uncertainty is not available. The second part of this work was motivated by the lack of an appropriate formulation to consider the effects of uncertainty on the Pareto front in multiobjective optimization problems from the viewpoint of robust design.

1.2 Objectives

The first objective of this research was to quantitatively compare five popular multiobjective optimization methods (four deterministic and one probabilistic) in terms of various performance metrics that represent solution quality and numerical efficiency. Six test problems were selected that cover a wide range of optimization problem types (bounded vs. unbounded, constrained vs. unconstrained, and equality vs. inequality constraints). Five mathematical problems and one

practical problem were chosen. The performance metrics chosen were the total run (CPU) time, total number of function evaluations, variance in solution distribution, and number of dominated and non-optimal solutions. Pareto fronts were graphically presented and discussed. It should be noted that only bi-objective optimization was performed in this research, for simplicity. The objectives of the second part of this research were (1) to propose a multiobjective optimization approach to investigate the change in the Pareto front when uncertainty is considered, and (2) to quantitatively study each of the Pareto front change modes (shift and shrink) due to uncertainty. A method was developed that considers uncertainty in the objective functions without the need for additional objective functions. The approach first computes two extreme design solutions for the given problem: the most robust design, with the best possible (or smallest) variance, and the least robust design, with the largest possible variance. Several additional Pareto fronts that have intermediate levels of robustness were then computed to show how the trade-off curves change. The method was then applied to four numerical examples, and the changes were quantitatively discussed. This research is presented in the following fashion: Chapter 2 covers the background and existing literature on single-objective optimization, multiobjective optimization, and robust design methods in optimization. The optimization methods that were used throughout this research are discussed in Chapter 3. The multiobjective optimization algorithm comparison research is then discussed in Chapter 4, and a new formulation to consider uncertainty in multiobjective problems is discussed in Chapter 5. Conclusions and recommended future directions are given in Chapter 6.

Chapter 2 Background and Literature Survey

2.1 Single-Objective Optimization

In single-objective optimization, a single objective function is optimized (minimized) while satisfying a set of equality and inequality constraints. The mathematical single-objective optimization statement can be defined as follows:

Minimize    J(x)
Subject to  g(x) ≤ 0
            h(x) = 0
            x_i,L ≤ x_i ≤ x_i,H,   i = 1, 2, ..., n        (2-1)

where J is the objective function we wish to minimize, while g and h are vectors of inequality and equality constraints, respectively. x is the design vector with n design variables, with lower and upper bounds x_i,L and x_i,H, respectively. Two primary classes of optimization algorithms are deterministic (or gradient-based) methods and probabilistic (statistical, random-search, or heuristic) methods. Deterministic methods are mathematically based algorithms that use information concerning the current and previous steps to determine the future optimization direction. One of the most popular optimization methods is the Sequential Quadratic Programming (SQP) algorithm. SQP is an iterative method that solves a quadratic sub-problem during each iteration. The deterministic methods that use the SQP algorithm are gradient-based optimization methods: they use information about the gradient at a

set of design variables to deterministically choose how the design variables should be changed to obtain a better design. The SQP algorithm comprises three operations: updating the Hessian matrix, solving the quadratic sub-problem, and obtaining the step change. For unconstrained problems, the SQP method acts similarly to the Newton-Lagrange approach. For simple functions, the SQP method can quickly converge towards the optimum solution, since it always moves in a direction of descending objective value. However, one of the main drawbacks of the method is that it is highly sensitive to the initial conditions used. An illustrative example using a gradient-based method can be found in the following figure (Figure 2-1). Given an initial condition (point 1), the method evaluates the gradient around that point (each design variable is incrementally changed to determine the direction in which the objective function decreases), and chooses an appropriate step size to take. The algorithm will obtain the second solution (2), and eventually the third solution (3), based on a convergence criterion (such as a finite step size). This solution is a local optimum. Using a different initial condition (4), the global optimum is obtained (5).

Figure 2-1: Example of the sensitivity of gradient-based methods to the initial conditions used. Starting at (1), the local optimum (3) is obtained. Otherwise, starting at (4), the global optimum is obtained (5).
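This behaviour is easy to reproduce. The short Python sketch below is an added illustration, not part of the original thesis: the one-dimensional objective and the two starting points are arbitrary assumptions, and SciPy's SLSQP routine stands in for a generic SQP solver.

import numpy as np
from scipy.optimize import minimize

def J(x):
    # A multimodal objective with one local and one global minimum.
    return 0.1 * x[0]**2 + np.sin(3.0 * x[0])

local_opt  = minimize(J, x0=[2.0],  method="SLSQP")   # trapped near x ~ 1.5
global_opt = minimize(J, x0=[-1.0], method="SLSQP")   # reaches x ~ -0.5

print(local_opt.x,  local_opt.fun)    # higher objective value (local optimum)
print(global_opt.x, global_opt.fun)   # lower objective value (global optimum)

Started from x0 = 2.0 the solver stops in the local minimum near x ≈ 1.5; started from x0 = -1.0 it reaches the global minimum near x ≈ -0.5, exactly the situation sketched in Figure 2-1.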

Two types of optimum solutions are first defined: local optima and global optima. In nonlinear functions, there are often many local optima: locations where a small change in any of the design variables will yield a worse objective value and the gradient at that point is zero. However, in a given domain, these are not the best-performing solutions. Gradient-based methods, like the SQP method, often find themselves trapped in these regions, and will report these points as optimum values. The global optimum refers to the lowest value of the objective function in the entire domain; simply put, it is the best attainable, feasible solution. The nonlinear solving toolbox from MATLAB [1], a very popular and powerful solver, uses the SQP method as the primary solver. It was chosen among other gradient-based nonlinear constrained optimization algorithms [2]. Gradient-based algorithms are highly sensitive to initial conditions, and there is no guarantee that any of the solutions obtained are global optima unless the value of the objective function over the entire domain is known. Deterministic methods have further restrictions. The objective functions must be continuous and continuously differentiable for these methods to perform adequately. Similarly, the design variables must be continuous; deterministic methods cannot handle discrete design variables. On the other hand, evolutionary algorithms are classified as probabilistic, global optimization methods, which involve random search techniques. Genetic algorithms are the most common form of evolutionary algorithms; they use the concept of survival of the fittest and usually require a large number of function evaluations. Genetic algorithms consist of three main operators: selection, crossover, and mutation. All chromosomes (design variables) are often represented in binary form (1s and 0s), allowing them to represent non-continuous design variables (and non-numeric design variables). A decoding scheme is used to evaluate the fitness of the chromosome. A commonly known drawback is that more bits are needed for better representation and accuracy of the design variables.
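As a minimal sketch of this representation and of the operators named above (an added example; the bit count, variable bounds, and mutation probability are arbitrary assumptions, not taken from the thesis):

import random

def decode(bits, x_low, x_high):
    # Map a binary chromosome to a real design variable in [x_low, x_high];
    # more bits give a finer resolution for the same bounds.
    value = int("".join(str(b) for b in bits), 2)
    return x_low + (x_high - x_low) * value / (2**len(bits) - 1)

def crossover(parent1, parent2, point):
    # Single-point crossover: exchange the strings from `point` onward.
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

def mutate(chromosome, p_mutation=0.01):
    # Flip each bit independently with a small probability.
    return [1 - bit if random.random() < p_mutation else bit
            for bit in chromosome]

print(decode([1, 0, 1, 1], -3.0, 3.0))                  # 4 bits: 1.4
c1, c2 = crossover([1, 1, 1, 1], [0, 0, 0, 0], point=2)
print(c1, c2)                                           # [1, 1, 0, 0] [0, 0, 1, 1]
print(mutate(c1, p_mutation=0.5))                       # random bit flips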

A basic review of genetic algorithms is given here [3]. After the fitness of the initial population is evaluated, a selection operator is used to determine which individuals will be chosen for reproduction. A combination of operators is often used. A common operator is tournament selection, where two individuals are randomly selected and the one with the higher fitness is chosen for reproduction. Most methods also use elitism, ensuring that the best-performing individuals in each generation are either selected for reproduction, carried over to the next generation, or both. Once the individuals are chosen for reproduction, the crossover operator determines how the individuals will form offspring. Single-point (binary) crossover is the most common operator, where a location is randomly chosen at which to exchange the binary strings. For example, [1111] and [0000] crossed at the third bit would yield [1100] and [0011]. Lastly, the mutation operator determines whether any random variation is introduced. Mutation is simulated by randomly changing the value of a single bit in a design variable. Both high-probability and low-probability mutation are used throughout the literature. Mutation is used to prevent the premature loss of important values and to prevent early convergence. For single-objective optimization, as the optimization progresses, all solutions will converge towards the same solution. The program terminates once a maximum number of generations has expired. In order to promote well-spread-out solutions in multiobjective optimization, different schemes are used, such as a penalty function [4] or fitness sharing [5].

2.2 Multiobjective Optimization

In multiobjective optimization, there generally does not exist a single solution that simultaneously minimizes all objective functions, and a set of optimum solutions that shows the best trade-off is

found. These optimum solutions form a Pareto front. The multiobjective optimization problem formulation with nonlinear constraints can be expressed as follows:

Minimize    J(x) = [J_1(x), J_2(x), ..., J_N(x)]^T
Subject to  g(x) ≤ 0
            h(x) = 0
            x_i,L ≤ x_i ≤ x_i,H,   i = 1, 2, ..., n        (2-2)

where J is the objective function vector, N the total number of objective functions, x the set of design variables, and g and h are vectors of inequality and equality constraints, respectively. A Pareto solution must satisfy two conditions: the solution must be feasible and nondominated. A feasible solution must satisfy all design constraints. Let J(x) be the set of objective functions, with components J_i(x). A point x* in the feasible design space is a strong Pareto solution if and only if, for all other x in the feasible design space, J_i(x) ≥ J_i(x*) for all objective functions, with J_i(x) > J_i(x*) for at least one i. Given a set of Pareto optimal solutions, one objective value cannot be decreased without sacrificing another. Consider the following figure. Figure 2-2 represents the results of a multiobjective optimization problem. Solutions 1 through 4 are nondominated solutions, since no other solution in the set is better or equal for both objective functions. Assuming these solutions are feasible, they would be part of the Pareto set. The Pareto front (curve, surface, or hyper-surface depending on the number of objective functions) is defined as the curve linking all of the Pareto solutions. It may be defined by more than one front if there are dominated regions within the Pareto set.

Figure 2-2: Solutions 1 through 4 are Pareto optimal solutions, since they satisfy two requirements: nondominated and feasible.

There are three main classes of multiobjective optimization algorithms: a priori, interactive, and a posteriori methods. In a priori methods, the relative importance of each objective is chosen before the optimization is performed, and the objective function values are combined into a single objective value. These methods require significant information concerning the importance of each criterion. A single optimal solution is obtained with a priori methods. The second class, interactive methods, requires continuous user interaction during the optimization process, and the optimal solution relies on the experience of the user. Lastly, a posteriori methods comprise two stages. First, a Pareto front is obtained, illustrating the available trade-off between the objective functions. Then, an optimum solution is chosen by the user from the previously obtained Pareto solutions. The concept of a Pareto filter was proposed to eliminate dominated and unwanted solutions. Every pair of optimum solutions is compared, and if any solution is dominated, it is removed. In other words, the Pareto filter eliminates all dominated solutions. This is necessary with some methods, such as the Normal Boundary Intersection and the Normal Constraint methods, since they are prone to finding dominated solutions if the Pareto front is not continuous over the entire normalized space.
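A Pareto filter reduces to a pairwise dominance check. The sketch below is an added illustration for a minimize-minimize problem, not the implementation used in this thesis:

def dominates(a, b):
    # a dominates b if a is no worse in every objective and strictly
    # better in at least one (both objectives are minimized).
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def pareto_filter(solutions):
    # Keep only the solutions that no other solution dominates.
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions)]

points = [(1.0, 5.0), (2.0, 3.0), (4.0, 4.0), (5.0, 1.0)]
print(pareto_filter(points))   # (4.0, 4.0) is dominated by (2.0, 3.0) and removed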

Most deterministic algorithms require normalization; a common approach is to use the Utopia point, which is determined from the anchor points. Anchor points are defined as the best solutions that are obtained by performing a single-objective optimization on each individual objective function. In bi-objective problems, we obtain two anchor points, and each anchor point often contains both the best value of one objective and the worst of the other. The Utopia point is then defined as the point with the best objective function values, and it is usually unattainable (Figure 2-3).

Figure 2-3: Anchor points, Utopia point, Utopia line, and Pareto front in the bi-objective space.

A review of popular multiobjective optimization algorithms reveals that many methods can easily handle bi-objective problems [6-9], but there are only a few algorithms that can handle more than two objective functions [5, 7, 9-12]. It should be noted that visualization of a Pareto front in a four-dimensional or higher-dimensional problem is very difficult, and the primary merit of multiobjective optimization, choosing a best optimal solution after examining the entire set of Pareto optimal solutions, significantly diminishes. Transforming a multiobjective optimization problem into a set of single-objective optimization problems, often with additional constraints, is a common technique in multiobjective optimization. The Weighted Sum method combines all objective functions into one and solves a

set of sub-optimization problems. The Normal Constraint, ε-constraint, Equality Constraint, Normal Boundary Intersection, and Adaptive Weighted Sum methods also solve a number of single-objective sub-optimization problems with additional constraints [6-8, 10, 11]. Goal Programming optimizes the distance between a target value and the objective functions [13]. In these algorithms, a single Pareto-optimal solution is obtained during each sub-optimization, and the optimization is repeated until the complete Pareto front is built. Additional constraints often help to obtain solutions in non-convex regions. One of the major drawbacks of the Weighted Sum method is that it cannot find solutions in non-convex regions [6, 9]. The Pareto-Archived Evolution Strategy (PAES) [14], Strength-Pareto Evolutionary Algorithm (SPEA) [15], Nondominated Sorting Genetic Algorithm (NSGA) [13], and other multiobjective genetic algorithms are evolutionary algorithms involving selection, crossover, and mutation. The Variable Chromosome Length (VCL) GA was developed to deal with structural topology optimization problems that have a large number of design variables and singular designs [16]. These methods can effectively solve problems that have many local optima, functional discontinuities, or discrete design variables. Most previous optimization algorithm comparison studies considered similar algorithm types. Hock and Schittkowski [2] compared 27 single-objective optimization algorithms and found Sequential Quadratic Programming (SQP) the best; this method was subsequently implemented in MATLAB in the fmincon function [1]. Kim and de Weck [6] compared the Adaptive Weighted Sum (AWS) method with the Weighted Sum and Normal Boundary Intersection methods when proposing their original algorithm. Zitzler et al. compared seven different evolutionary algorithms with the Strength-Pareto Evolutionary Algorithm (SPEA) [17], and Knowles and Corne compared their new Pareto-Archived Evolution Strategy (PAES) with the SPEA [14]. Deb et al. similarly compared the PAES and SPEA when proposing the newer version of the Nondominated Sorting Genetic Algorithm, NSGA-II [5]. Hassan et al. and Eberhart and Shi performed a comparison

between the newer Particle Swarm Optimization (PSO) and genetic algorithms [18, 19], examining the accuracy of results and computational time. The limitations of the previous benchmarking studies are identified as follows: (1) most studies evaluated algorithms of the same class, looking at either deterministic or probabilistic algorithms; (2) the studies examined the ability of each method to obtain effective solutions, but they did not evaluate their numerical efficiency; and (3) it is interesting to note that most studies chose test problems that could be easily solved by their own proposed methods. For example, studies on evolutionary algorithms typically solved bounded, unconstrained problems (sometimes with only simple inequality constraints), and no evolutionary algorithm was tested against unbounded problems.

2.3 Robust Design

Engineering design is plagued with uncertainties, such as manufacturing tolerances, variations in material properties, non-fixed operating conditions, and wear [20-22]. These uncertainties can have a profound effect on the performance of the system to be designed, and it is important to make the system function as intended under the influence of the uncontrollable variations. In most industries, a small performance deviation does not affect the overall system, due to over-designing. In others, such as the aerospace industry, precise behavioural information is required to achieve the best efficiency. For example, Duvigneau [23] observed large variations in the drag coefficient of an aircraft wing due to small changes in Mach number. To illustrate the effect of variation on the performance of a system, consider the following figure (Figure 2-4), in which the function shown is minimized.

Figure 2-4: In deterministic optimization, we seek x_optimum. In robust design, x_robust is sought.

With deterministic optimization (considering no variations), the optimum solution is x_optimum. This point yields the lowest value of the objective function over the given range, and would be the result of traditional optimization. However, in terms of robustness, this point is not robust: a slight deviation in the design variable (Δx) causes a much greater change in objective value than a similar deviation at x_robust. The worst-case value is higher, and the overall performance change is greater. The second solution (x_robust) is therefore more robust. Since actual systems are subject to variation, engineering design optimization should be performed using a stochastic framework instead of a deterministic framework. Examples illustrating the need for robust design are numerous. Tay et al. [24] investigated the effect of uncertainty in Young's modulus, beam width, and beam mass on the natural frequency of a microresonator, while Liu et al. [25] mathematically derived the relationship between parameters to make a design less sensitive to manufacturing errors. Robust design has been applied to the collaborative design of two aspects of a passenger aircraft [26], to an internal combustion engine [21], to aircraft wings [23, 27], and to compressor fan blades [28]. Shavezipur et al. [29] investigated the effects of uncertainty in fabrication processes on the capacitance-versus-voltage curve of MEMS devices, and demonstrated that a 3% variation in the product's output was achieved. These few examples illustrate the growing need to consider uncertainty in design.
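The difference between x_optimum and x_robust can be made concrete numerically. In the added Python sketch below, the objective function, the two candidate designs, and the variation Δx = 0.1 are all invented for illustration; the objective is sampled over a small interval around each design and the worst-case spread is compared.

import numpy as np

def J(x):
    # Hypothetical objective: a sharp global minimum near x = 1 and a
    # shallow, flat local minimum near x = pi.
    return np.where(np.abs(x - 1.0) < 0.2,
                    -2.0 + 10.0 * np.abs(x - 1.0),
                    np.cos(x))

x_optimum, x_robust, dx = 1.0, np.pi, 0.1
for x in (x_optimum, x_robust):
    samples = J(np.linspace(x - dx, x + dx, 201))
    print(x, samples.min(), samples.max() - samples.min())

x_optimum attains the lower objective value, but its performance spread under ±Δx is far larger than the spread at x_robust, which is the trade-off Figure 2-4 depicts.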

2.3.1 Taguchi Method

Robust design initially surfaced in the 1980s, when Taguchi proposed a methodology that maximizes the signal (performance) to noise (variation) ratio; the goal is to yield a set of design variables at which the system is the most insensitive to input variations [30, 31]. In the approach, every continuous variable is discretized into a few discrete levels, and an optimum solution is sought which is the most insensitive to noise; this solution is not necessarily a true optimum solution that satisfies the Kuhn-Tucker conditions. The source of noise is not eliminated, but its effects are reduced. A two-step approach has been proposed, in which the optimum solution is obtained first and the design is subsequently optimized for robustness around that point. Grujicic and Chittajallu [32] initially performed a constrained optimization on a polymer electrolyte membrane fuel cell, and then used the Taguchi method to obtain a robust design. However, they gave no guarantee that the final design was the most robust, only that it was the most robust point near the previously obtained optimum design. Arvidsson and Gremyr [33] gave a detailed literature review of currently existing methods surrounding Taguchi's method, while Shin and Cho [34] discussed some well-known drawbacks, limitations, and controversy. Bhamare et al. [35] proposed a hybrid Taguchi method for optimal parameter configuration that considers degradation over the service time of products.

2.3.2 Single-Objective Optimization Techniques

Traditional multiobjective optimization is deterministic, and each objective function value is constant for a fixed design vector. If there are uncertainties, however, each objective function value varies even for a fixed design vector; the usual multiobjective optimization approach cannot handle this situation. Yoon et al. [36], Lee and Park [37], and several other researchers [23, 28,

38-41] addressed this limitation by dividing each (variable) objective function into two deterministic objective functions: one is the mean value, and the second represents the standard deviation. Yoon et al. [36] optimized an electromechanical device considering performance robustness, while Lee and Park [37] used an additional penalty term and linearization in the constraints. Chen et al. [38] performed bi-objective optimization using physical programming, an optimization algorithm that uses the designer's preferences when choosing the appropriate weights for the objective functions; a weighted sum approach is repeated for each different set of weights chosen by the user. Steenackers et al. [41] and Duvigneau [23] both optimized an airplane component; Steenackers et al. [41] and Shin and Cho [40] optimized the distance from a target value. Duvigneau [23] mixed a single-objective Particle Swarm Optimization with five different weighted sums between the two objectives. Doltsinis and Kang [39] used a perturbation-based stochastic finite element method (SFEM) as well as the Weighted Sum method, but used the nominal value (as opposed to the mean value) together with the standard deviation. Considering the mean value and the standard deviation as two separate objectives allows engineers to see the trade-off between the mean and the standard deviation for each performance metric; depending on the preference, either a better-performing solution or a more-robust solution may be selected. One drawback is that it is difficult to see how trade-offs amongst the objective functions change with the inclusion of uncertainties; another is that the dimension of the problem doubles, so the computing cost increases and visualization becomes more difficult. Various other sensitivity indices have been developed for single-objective problems. Balling et al. [42] first deterministically found optimum solutions and then considered constraint violation, because the constraints would no longer be satisfied. The final solution was near the optimum solution, and the method only found a solution near the optimum solution that satisfied variations

in the constraints. Belegundu and Zhang [43] minimized sensitivity, applying an upper limit on the objective value. From the viewpoint of a designer, this only gives assurance that the solution has the lowest variance for a given objective range. Sundaresan et al. [44] considered both the problem on the objective function and on the constraints, and developed a sensitivity index to look at variations. They then performed bi-objective optimization on the mean value and the sensitivity index. The authors also looked at several methods to consider the effect on the constraints: looking at only the worst design, assuming a linear relationship, or using the KKT conditions. Ok et al. [45] used a worst-case analysis as a robust performance index and discussed the change of a Pareto front according to the damper-to-structure mass ratio. Gobbi et al. [46] minimized the weighted sum of the mean values and standard deviations of three performance measures on a simplified vehicle suspension model; the main limitation lies in the difficulty of choosing meaningful coefficients for the standard deviation terms. Some researchers deviated from the above and proposed alternative methods. Sandgren et al. [47] opted to minimize the linearized objective values over a range, and used a weighted sum approach. Parkinson [48, 49] considered only the extreme values of the design variables when proposing a new minimization function. His method is only applicable to mathematical formulations where an incremental increase in one design variable has a known effect on the objective function throughout the entire feasible domain. In the research by Shin and Cho [40], the customer specifies an allowed deviation from the target performance, and the variation is then minimized. The method used is the ε-constraint method, and the authors provide a solution procedure based on the Lagrangian method, with slack variables used to change the inequality constraints to equality constraints.
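Most of the mean/standard-deviation formulations above share one building block: estimating μ_j and σ_j of an objective under input variation. A minimal sampling-based sketch follows; the objective function, the uniform ±Δx variation model, and the sample count are assumptions made for illustration, not taken from any of the cited works.

import numpy as np

rng = np.random.default_rng(0)

def J(x):
    # Hypothetical objective function of two design variables.
    return x[0]**2 + 2.0 * np.sin(5.0 * x[1])

def mean_and_std(J, x, dx=0.05, n_samples=1000):
    # Sample design-variable variations (uniform within +/- dx) and
    # estimate the mean and standard deviation of the objective.
    X = x + rng.uniform(-dx, dx, size=(n_samples, len(x)))
    values = np.array([J(xi) for xi in X])
    return values.mean(), values.std()

mu, sigma = mean_and_std(J, np.array([1.0, 0.5]))
print(mu, sigma)   # a robust bi-objective problem would trade off mu vs. sigma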

Otto and Antonsson [50] looked at the problem from a post-manufacturing perspective. Once the part is manufactured (variations have already occurred), some of the parameters are tuned to obtain the desired performance. These tuning parameters were introduced to overcome the effects of noise and variation, to give more freedom to the designer. This concept is also applicable to post-delivery, where the customer may tune some parameters as the performance changes over time (wear).

2.3.3 Multiobjective Optimization Techniques

As most problems are multiobjective in nature, it is not surprising that research has been done on problems with multiple objectives. Shimoyama et al. [51] and Zhong et al. [52] used existing multiobjective evolutionary algorithms, optimizing for n objective function mean values and n standard deviations, and Li et al. [53], Li and Azarm [54], and Nishida et al. [55] considered different robustness measures. In [53], a range of acceptable objective function deviation is given and is translated to the design variable space, forming a contour (sensitivity region) within the design variable space. A single-objective optimization is then performed, finding the minimum distance from the nominal design vector to the contour. The same is performed on the constraints, and the lower of the two is used as the robustness measure. This method is unfortunately computationally expensive, as the robustness measure requires an inner optimization loop. Li and Azarm [54] later proposed giving variation on the design parameters, and proposed a sensitivity index relating the size of the new feasible domain and the worst-case region in the objective space. In contrast, Nishida et al. [55] proposed an MOO robustness measure combining sensitivity analysis and sigma level estimation. Teich [56], Bird et al. [57], and Bui et al. [58] proposed new evolutionary algorithms that incorporate the noise effects within the solving algorithm. Teich [56] proposed a Strength Pareto

Evolutionary Algorithm (SPEA) that allows the objective values to be uncertain and to vary within intervals, while Bird et al. [57] proposed the Stochastic Pareto Genetic Algorithm (SPGA). Bui et al. [58] proposed a version of NSGA-II that uses Hughes' [59] new ranking schemes to consider noise in the objective function, and compared the performance of SPEA2 and NSGA-II with noise factors introduced. Bui et al. simulated noise with a Gaussian distribution function, and simply added it to the objective functions. Bui et al. then compared the performance of noise-negating models using probabilistic and re-sampling methods [60]. Not all methods for multiobjective optimization use evolutionary algorithms. Messac and Ismail-Yahaya [61] used a physical programming approach, but dealt with MOO by combining normalized values of each objective function (similar to the Weighted Sum approach), and linearized the constraints. Given a bi-objective problem, Sun and Lou [62] minimized two weighted sum formulations: the objective functions were multiplied by a factor, and the second term was simply a constant. The four factors were all based on previous optimization results, and were problem-specific. Their variation procedure is very different from any of the above formulations, but required prior knowledge of the behaviour of the system. Salazar and Rocco [3] considered a bound on the upper tolerance on both objective functions by finding the maximum-volume inner box (MIB), or the maximum range of variations that satisfied a pre-defined performance criterion. This was based on the maximum deviation from the target that a design can tolerate. Subramanyan et al. [63] optimized a (future) hybrid power plant using power cells, using probability density functions to represent the uncertainty. The authors used the MINSOOP multiobjective solving algorithm to deal with more than three objectives. This algorithm involves a one-objective-at-a-time optimization scheme (one objective is optimized

while all others are turned into inequality constraints). Shin and Cho [34] solved bi-objective problems by using both robust design and tolerance design in sequential steps.

2.3.4 Reliability-Based Design Optimization

Another design optimization methodology that considers uncertainty, along with robust design, is reliability-based design optimization (RBDO). The goal of RBDO is to obtain an optimal design whose chance of any constraint being violated is smaller than a prescribed value. A probability of failure is assigned to the system prior to optimization, and the final solutions must satisfy that reliability. RBDO focuses on the constraints and does not consider insensitivity or robustness of the objective function(s), in contrast to robust design, where we aim at performance insensitivity. Several studies applied RBDO to multiobjective optimization. Li et al. [64] used multiobjective RBDO to minimize emissions (nitric oxide and soot) while keeping the probability of a diesel engine meeting its performance and fuel efficiency targets above a certain level. This work and Deb et al. [65] investigated the shift of the Pareto front caused by different probability levels of RBDO. Sinha [66] used RBDO and a meta-model on the crashworthiness and occupant safety of an automobile, while Deb et al. [67] proposed a new reliability-based multiobjective evolutionary procedure that mixes RBDO and NSGA-II. Zou and Mahadevan [68] proposed a multiobjective method that minimizes the failure probabilities of both objective functions.
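For reference, the one-objective-at-a-time scheme mentioned above for MINSOOP is essentially the ε-constraint method: one objective is minimized while each remaining objective is bounded by an inequality constraint whose limit is swept. The sketch below is an added illustration with an arbitrary bi-objective test problem, not code from any cited study.

import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def J1(x): return x[0]**2 + x[1]**2
def J2(x): return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

pareto = []
for eps in np.linspace(0.5, 4.5, 9):        # sweep the bound on J2
    con = NonlinearConstraint(J2, -np.inf, eps)
    res = minimize(J1, x0=np.array([1.0, 0.5]), method="SLSQP",
                   constraints=[con])
    if res.success:
        pareto.append((J1(res.x), J2(res.x)))
print(pareto)   # one Pareto point per bound on the constrained objective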

2.3.5 Evaluating Objective Functions

One major challenge in robust design is the need to evaluate the objective functions more often than in deterministic optimization, in order to evaluate the effect of variation [36]. Some common methods found in the literature to tackle this problem are discussed here. Monte Carlo Simulations (MCS) are popular for estimating the uncertainty when data is not available [23, 41, 69]. Yadav et al. [69] proposed using MCS as an estimation methodology when failure data is not available; Papadrakakis and Lagaros [70] used neural networks and MCS to deal with reliability-based optimization. Liu et al. [71] used Monte Carlo Simulations coupled with Genetic Algorithms to optimize the off-target frequency caused by uncertainty as one of the objectives. In [41], Steenackers et al. used MCS to reduce the number of finite element simulation evaluations, as well as different response surface methods. Design of Experiments is a popular method [31, 44, 72-74], offering the choice between two-level or higher arrays. Design of Experiments (DoE) is a statistical method used to determine which variables have an important effect on a process, and which variables to use to optimize the process. In [73], the authors tabulate a total of 77 different engineering examples where DoE was used. For a review of Design of Experiments, see [31] and [72]. Another proposed method, mostly applied to constraints, is linearization [42, 44]. The gradient of each constraint with respect to the variations is considered, and the worst case is summed over every variation. In practice, this method involves taking the forward difference at a point with respect to the variations. Then, if the objective functions are also considered, the negative impact of each variation is added to the objective function. From the nominal value, every variation needs to be evaluated only once, and all required information is extracted. Response surface approximation, mixed with data mining, has been proposed and shown to give good approximations [51, 75]. Response surface approximation works first by evaluating the

objective functions at a reasonably large sample of points; the unknown points are then approximated from the given solutions, based on some derived algebraic functions. The Kriging model used in [51] also gives approximate errors, and additional points are obtained in regions with high error. The advantage of such a method is that a set number of solutions are evaluated, and for the remainder of the problem, the solutions are approximated. For practical use of multiobjective optimization, it is important to obtain robust optimal solutions whose performance measures are insensitive to uncertainties, and it would also be useful if design engineers could see how a Pareto front changes according to the robustness or insensitivity requirement level, and even quantify the Pareto front changes. None of the aforementioned multiobjective optimization approaches discussed or quantitatively investigated the Pareto front change due to the effect of uncertainty on the objective functions.

Chapter 3
MOO Algorithms

This chapter describes the multiobjective optimization algorithms that were compared. Five algorithms are presented: the Weighted Sum method, the Adaptive Weighted Sum method, the Normal Constraint method, the Normal Boundary Intersection method, as well as the Nondominated Sorting Genetic Algorithm-II. For the four deterministic methods, the anchor points were automatically calculated before the multiobjective optimization portion of the algorithms was run. This eliminated the need to define the normalizing factors and removed two user-defined parameters.

3.1 Weighted Sum (WS) Method

The Weighted Sum (WS) method, also referred to as the Trigonometric Linear Combination method, is one of the first multiobjective optimization methods and one of the most widely used. It is the simplest to implement: by using a linear combination of the objective functions, the multiobjective problem is transformed into a series of single-objective optimization problems. See [9] for more details. The mathematical problem formulation for a sub-problem of the WS algorithm is as follows:

    Minimize    \sum_{i=1}^{N} w_i J_i(x)
    Subject to  \sum_{i=1}^{N} w_i = 1,  w_i \ge 0
                g(x) \le 0
                h(x) = 0
                x_{i,L} \le x_i \le x_{i,H},  i = 1, 2, ..., n        (3-1)

where J_i is the normalized value of the i-th objective function, and N is the number of objective functions. The sub-problem is repeated for a number of different weights, and only one solution is obtained for each weight vector w. Of all multiobjective optimization methods, the WS method is often the least computationally expensive. However, equally spaced weights do not guarantee equally spaced solutions in the objective space, and the method can only represent well-defined convex regions of a Pareto front. In two dimensions, the WS method represents a rotation of the J_1-J_2 coordinate axes. The first objective function is then minimized (represented by the dotted line). Figure 3-1 represents the WS method graphically for four different angles θ for a bi-objective problem.
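To make the procedure concrete, a minimal MATLAB sketch of the WS loop is given below. The objective handles J1 and J2, the starting point, and the bounds are hypothetical placeholders, and the objectives are assumed to have been normalized by the anchor points beforehand; this is a sketch, not the exact implementation used in this thesis.

    % Weighted Sum: one fmincon run per weight (bi-objective sketch).
    J1 = @(x) x(1)^2 + x(2)^2;            % hypothetical objectives
    J2 = @(x) (x(1) - 2)^2 + x(2)^2;
    x0 = [1; 1];  lb = [-5; -5];  ub = [5; 5];

    weights = linspace(0, 1, 17);         % w and (1 - w) sum to 1
    sols = zeros(numel(weights), 2);
    opts = optimset('Display', 'off');
    for k = 1:numel(weights)
        w = weights(k);
        f = @(x) w*J1(x) + (1 - w)*J2(x); % scalarized sub-problem
        x = fmincon(f, x0, [], [], [], [], lb, ub, [], opts);
        sols(k, :) = [J1(x), J2(x)];      % one solution per weight
    end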

[Figure 3-1: four panels of the J_1-J_2 plane showing the rotated axes J'_1 and J'_2 at increasing angles θ.]
Figure 3-1: Trigonometric Linear Combination method. (1) Coordinate rotation of 0°, (2, 3) rotation of θ, (4) rotation of 90°. The dotted line represents a translation of the J_2 axis, minimizing the J_1 value.

3.2 Adaptive Weighted Sum (AWS) Method

The Adaptive Weighted Sum (AWS) method uses a mathematical formulation similar to that of the WS method and performs a WS optimization with fewer weights in its first step (or iteration). The most important feature is that the AWS method adaptively focuses on unexplored regions of the Pareto front by specifying additional constraints for the regions where further refinement is needed (Figure 3-2). Based on a distance criterion, the AWS sub-problem is repeated until a desired solution distribution density is achieved. It was developed by Kim and de Weck [6, 10] as

a method that can find only non-dominated solutions on convex and non-convex regions. Figure 3-2 shows a single AWS iteration with two sets of AWS sub-problems.

[Figure 3-2: four panels of the J_1-J_2 plane, titled "AWS: Weighted Sum Step", "AWS: Additional Constraints", "AWS: Solving", and "AWS: New Iteration", showing the feasible region, the regions for refinement, and the additional constraints.]
Figure 3-2: AWS method in the bi-objective case. Optimization is performed only in the regions where refinement is needed. Inequality constraints, parallel to the objective axes, are used.

In the initial iteration, a simple WS optimization is performed; from the second iteration on, an AWS sub-problem with additional constraints is conducted, as follows:

    Minimize    \sum_{i=1}^{N} w_i J_i(x)
    Subject to  \sum_{i=1}^{N} w_i = 1,  w_i \ge 0
                J_1(x) \le P_1^x - \delta_J \cos(\theta)
                J_2(x) \le P_2^y - \delta_J \sin(\theta)
                g(x) \le 0
                h(x) = 0
                x_{i,L} \le x_i \le x_{i,H},  i = 1, 2, ..., n        (3-2)

where θ is the angle between subsequent Pareto solutions in the previous iteration, and P_1 and P_2 are the objective values of those solutions. Instead of setting a fixed number of solutions, an offset distance δ_J between Pareto optimal points is set by the user. The author recommends setting δ_J between 0.05 and 0.2 for an adequate number of solutions, where a smaller setting will give a denser solution set. The refinement parameter C, the distance metric ε, and the number of initial Weighted Sum solutions n_i must be carefully chosen by the user. The parameter C is used to determine the number of refinements (weights) during the AWS iterations. If C is too small, the algorithm may converge prematurely; if it is too large, the computation cost may increase. The author recommends setting C between 1 and 2. During solving, if the distance between two optimum solutions is less than ε·δ_J, then one of the solutions is removed. The value of ε is used to remove nearly-overlapping solutions, and a value of 0.05 is recommended. Lastly, the number of initial Weighted Sum solutions (n_i) must be selected such that the important solutions are located without causing unnecessary computation cost. A value between three and ten is recommended.
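The following MATLAB sketch shows how the two additional inequality constraints of Equation 3-2 can be imposed on one AWS refinement sub-problem; the segment endpoints P1 and P2, the offset deltaJ, the objective handle, and the bounds are all hypothetical placeholders rather than values from this thesis.

    % One AWS refinement sub-problem with the constraints of Eq. (3-2).
    J1 = @(x) x;                            % hypothetical normalized objectives
    J2 = @(x) (1 - x)^2;
    P1 = [0.8, 0.3];  P2 = [0.5, 0.6];      % neighbouring Pareto points
    deltaJ = 0.1;                           % user-defined offset distance
    theta  = atan2(abs(P2(2) - P1(2)), abs(P2(1) - P1(1)));  % segment inclination

    % Inequality constraints of Eq. (3-2), written as c(x) <= 0
    nonlcon = @(x) deal([J1(x) - (P1(1) - deltaJ*cos(theta));
                         J2(x) - (P2(2) - deltaJ*sin(theta))], []);
    w = 0.5;                                % one weight of the WS step
    f = @(x) w*J1(x) + (1 - w)*J2(x);
    x = fmincon(f, 0.6, [], [], [], [], 0, 1, nonlcon, ...
                optimset('Display', 'off'));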

The method creates evenly distributed solutions along the Pareto front over multiple iterations. Most importantly, it finds only Pareto-optimal solutions in non-convex regions, and those solutions are always non-dominated. For our work, the refinement in each region for all iterations was set at 1. In order to prevent premature convergence, an additional constraint on the size of each solution interval was imposed in addition to the initial solution spacing constraint. One of the drawbacks of the AWS method is its reliance on the solutions obtained from the initial WS step. If the WS method cannot find any effective Pareto solutions in the first iteration, the AWS method may not determine the Pareto front in its entirety.

3.3 Normal Constraint (NC) Method

The Normal Constraint (NC) method was proposed by Messac et al. [8] in 2003 and refined by Messac and Mattson [11] in 2004 as an improvement over the Normal Boundary Intersection method; as such, the two methods are very similar in practice. An inequality constraint is applied perpendicular to the Utopia line in the normalized space. One of the two objective functions is then optimized, depending on the direction of the inequality constraint; this limits the feasible domain. As the problem progresses, the feasible domain is gradually reduced. Figure 3-3 illustrates the inequality constraint and the new feasible region. See the reference articles for a step-by-step approach to the NC method.

[Figure 3-3: the normalized J_1-J_2 plane with a user-defined step size along the Utopia line and the reduced feasible region.]
Figure 3-3: NC method, shown on the normalized bi-objective space. An inequality constraint restricts the feasible region, and the second objective function is minimized.

The NC method works best in the normalized objective space. The anchor points must either be given or determined, and the problem normalized. The NC sub-problem formulation can then be defined as follows:

    Minimize    \bar{J}_2(x)
    Subject to  N_1^T ( \bar{J}(x) - \bar{X}_{pj} ) \le 0
                g(x) \le 0
                h(x) = 0
                x_{i,L} \le x_i \le x_{i,H},  i = 1, 2, ..., n        (3-3)

where N_1 is the Utopia line vector, \bar{X}_{pj} are points on this line, and \bar{J}(x) is the vector of objective values at design vector x. With the exception of the WS method, the NC method was found to be one of the simplest and fastest to implement and understand. However, the method has the potential to find dominated solutions, and thus a Pareto filter is necessary to remove them.
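A Pareto filter of the kind required here can be written in a few lines. The following MATLAB sketch removes dominated points from a list of bi-objective (minimization) solutions; it is a generic utility written for illustration, not code taken from the NC reference articles.

    % Pareto filter: keep only the non-dominated rows of J, an n-by-2
    % matrix of objective values with both objectives minimized.
    function P = paretoFilter(J)
        n = size(J, 1);
        keep = true(n, 1);
        for i = 1:n
            for j = 1:n
                % j dominates i: no worse in every objective, better in one
                if j ~= i && all(J(j,:) <= J(i,:)) && any(J(j,:) < J(i,:))
                    keep(i) = false;
                    break;
                end
            end
        end
        P = J(keep, :);
    end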

3.4 Normal Boundary Intersection (NBI) Method

The Normal Boundary Intersection (NBI) method, by Das and Dennis [7], imposes additional constraints in the objective space. By adding equality constraints, the solution is forced to lie on a line normal to the Utopia line. The distance away from the Utopia line is then maximized (towards the Utopia point). The NBI sub-problem is expressed as follows:

    Maximize    t
    Subject to  \Phi \beta + t \hat{n} = \bar{J}(x)
                g(x) \le 0
                h(x) = 0
                x_{i,L} \le x_i \le x_{i,H},  i = 1, 2, ..., n        (3-4)

where t is a dummy parameter, and Φβ is a point on the Utopia line representing the convex hull of individual minima (CHIM). Φ is the identity matrix in a normalized space, and the sum of all β is 1, i.e. \sum_{i=1}^{N} \beta_i = 1. A set number of required solutions determines the equidistant values of Φβ. \hat{n} is the unit normal, and \bar{J}(x) the vector of objective function values. The space should be normalized. Figure 3-4 illustrates the NBI method for a bi-objective problem.

[Figure 3-4: the normalized J_1-J_2 plane with a user-defined step size along the Utopia line and the feasible line normal to it.]
Figure 3-4: Concept behind the NBI method, shown on the normalized bi-objective space. An equality constraint restricts the feasible region to lie on a line perpendicular to the Utopia line.
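A minimal MATLAB sketch of one NBI sub-problem is given below; the design vector is augmented with the dummy parameter t, and the normalized objective vector Jbar, the CHIM point, the unit normal, and the bounds are hypothetical placeholders chosen only for illustration.

    % One NBI sub-problem: maximize t subject to Phi*beta + t*nhat = Jbar(x).
    Jbar = @(x) [x(1); 1 - sqrt(max(x(1), 0))];  % hypothetical normalized objectives
    PhiB = [0.5; 0.5];                           % point on the Utopia line
    nhat = [-1; -1]/sqrt(2);                     % unit normal towards Utopia

    % Augmented design vector z = [x; t]
    f = @(z) -z(end);                            % maximize t
    nonlcon = @(z) deal([], PhiB + z(end)*nhat - Jbar(z(1:end-1)));
    z = fmincon(f, [0.25; 0], [], [], [], [], [0; -10], [1; 10], ...
                nonlcon, optimset('Display', 'off'));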

One of the known drawbacks of the NBI method is that dominated solutions can be obtained as optimal solutions, because the algorithm finds a solution regardless of whether the point is dominated or not. A Pareto filter is subsequently needed to eliminate dominated solutions. Another drawback is that the method introduces equality constraints, which are generally more difficult to handle numerically.

3.5 Nondominated Sorting Genetic Algorithm (NSGA)-II

Unlike traditional genetic algorithms, real-coded genetic algorithms directly use continuous design variables and do not require discretization into binary form. The Nondominated Sorting Genetic Algorithm (NSGA) was originally proposed by Deb and Srinivas [13]; Deb et al. [5] subsequently developed a newer version, NSGA-II. This evolutionary algorithm can use either binary or continuous design variables; Deb and Agrawal developed Simulated Binary Crossover [76] to simulate the crossover behaviour of simple binary coding for use with continuous variables (Equation 3-5), since binary crossover is not applicable to continuous variables. In Equation 3-5, p_1 and p_2 are the parent values, while c_1 and c_2 are the offspring values. β is calculated using Equation 3-6, based on a probability distribution similar to single-point binary crossover. In Equation 3-6, u is a randomly generated number between 0 and 1, and η_c is a crossover parameter chosen by the user. These two equations are used during crossover with a user-defined probability.

    c_1 = \frac{1}{2} [ (p_1 + p_2) - \beta (p_2 - p_1) ]
    c_2 = \frac{1}{2} [ (p_1 + p_2) + \beta (p_2 - p_1) ]             (3-5)

    \beta = (2u)^{1/(\eta_c + 1)}                   if u \le 0.5
    \beta = [ 1 / (2(1 - u)) ]^{1/(\eta_c + 1)}     if u > 0.5        (3-6)

Polynomial mutation was also developed for continuous variables and replaces binary mutation [76], as shown in Equation 3-7 and Equation 3-8. In Equation 3-7, c_i is the offspring value and p_i is the original (pre-mutation) value, while p_i^{(U)} and p_i^{(L)} are the design variable upper and lower bounds, respectively. Similar to β, γ is based on the probability distribution of mutation; in Equation 3-8, η_m is the mutation parameter chosen by the user, and r_i is a randomly chosen number between 0 and 1.

    c_i = p_i + ( p_i^{(U)} - p_i^{(L)} ) \gamma                      (3-7)

    \gamma = (2 r_i)^{1/(\eta_m + 1)} - 1               if r_i < 0.5
    \gamma = 1 - [ 2 (1 - r_i) ]^{1/(\eta_m + 1)}       if r_i \ge 0.5    (3-8)

Tournament selection and ranking based on non-dominated fronts are used, where both parent and offspring solutions are classified based on the number of solutions that dominate them. Figure 3-5 illustrates the ranking scheme for a small number of solutions. All solutions in the first rank are not dominated by any other solutions. Then, those solutions are removed, and the new nondominated solutions are determined and assigned the second rank. The process is continued until all solutions are ranked.
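The following MATLAB sketch implements Equations 3-5 to 3-8 for a single design variable; the parent values, bounds, and distribution indices below are illustrative placeholders.

    % Simulated Binary Crossover (Eqs. 3-5, 3-6) and polynomial mutation
    % (Eqs. 3-7, 3-8) for one continuous variable.
    p1 = 1.0;  p2 = 3.0;          % parent values
    pL = -5;   pU = 5;            % variable bounds
    etaC = 20; etaM = 20;         % distribution indices

    u = rand;                                        % Eq. 3-6
    if u <= 0.5
        beta = (2*u)^(1/(etaC + 1));
    else
        beta = (1/(2*(1 - u)))^(1/(etaC + 1));
    end
    c1 = 0.5*((p1 + p2) - beta*(p2 - p1));           % Eq. 3-5
    c2 = 0.5*((p1 + p2) + beta*(p2 - p1));

    r = rand;                                        % Eq. 3-8
    if r < 0.5
        gamma = (2*r)^(1/(etaM + 1)) - 1;
    else
        gamma = 1 - (2*(1 - r))^(1/(etaM + 1));
    end
    c1 = c1 + (pU - pL)*gamma;                       % Eq. 3-7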

[Figure 3-5: the J_1-J_2 plane with solutions grouped into Front 1, Front 2, and Front 3.]
Figure 3-5: In NSGA-II, the individuals are sorted into nondominated fronts to determine the fitness value, instead of using the objective function value directly. All individuals in the same front have the same fitness value. This eliminates the need for fitness evaluation when the problem contains more than one objective function.

A specified number (normally half) of the best solutions forms the new generation, and spread-out solutions are assured by taking only the least crowded individuals of the last set of ranks that are retained. The crowding distance of the i-th individual is calculated using Equation 3-9, where J_j is the fitness value of the j-th objective function, and J_j^max and J_j^min represent the maximum and minimum values of that objective function in the entire domain. Figure 3-6 illustrates the crowding distance graphically.

    CD(i) = \sum_{j=1}^{N} \frac{ J_j(i+1) - J_j(i-1) }{ J_j^{max} - J_j^{min} }      (3-9)
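A compact MATLAB sketch of Equation 3-9 follows; it sorts the members of one front per objective inside the function, and the domain-wide extremes are passed in as arguments.

    % Crowding distance (Eq. 3-9) for the members of one front.
    % J: n-by-N objective values; Jmin, Jmax: 1-by-N domain extremes.
    function cd = crowdingDistance(J, Jmin, Jmax)
        [n, N] = size(J);
        cd = zeros(n, 1);
        for j = 1:N
            [Js, idx] = sort(J(:, j));              % order front by objective j
            cd(idx(1)) = Inf;  cd(idx(end)) = Inf;  % boundary points kept
            for i = 2:n-1
                cd(idx(i)) = cd(idx(i)) + ...
                    (Js(i+1) - Js(i-1)) / (Jmax(j) - Jmin(j));
            end
        end
    end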

[Figure 3-6: the J_1-J_2 plane highlighting the i-th point within its front.]
Figure 3-6: Calculating the crowding distance of the i-th point, within the same front.

Constraints are handled by putting all infeasible solutions in the lowest fronts, prioritizing feasible solutions. No penalty parameter is used or required, eliminating the need for penalty functions or multipliers. Two termination criteria were implemented: the algorithm stops (1) when the number of generations reaches the maximum allowable number, or (2) when all members of the population have remained non-dominated for a prescribed number of generations. NSGA-II frequently produces more effective results than other genetic algorithms. However, the method still has the well-known drawbacks of GAs: the difficulty of handling equality constraints and unbounded design variables, and the solution dependency on the mutation and crossover distribution parameters.
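Returning to the constraint-handling scheme described above, one common way to express it during ranking is a pairwise comparison in which feasibility takes priority over domination. The MATLAB sketch below follows that idea; it is an illustration of the general scheme, not necessarily the exact rule implemented in this thesis.

    % Pairwise comparison with constraint handling (minimization).
    % a, b: objective vectors; violA, violB: total constraint violations.
    function better = isBetter(a, b, violA, violB)
        if violA == 0 && violB > 0
            better = true;                       % feasible beats infeasible
        elseif violA > 0 && violB == 0
            better = false;
        elseif violA > 0 && violB > 0
            better = violA < violB;              % smaller violation wins
        else
            better = all(a <= b) && any(a < b);  % usual Pareto domination
        end
    end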

Chapter 4
MOO Comparison

The first objective of this research was to perform a comprehensive comparison between multiobjective optimization methods on both mathematical and practical problems, to determine the efficiency of each method and which method is more appropriate for which type of problem. In order to assess the efficiency and effectiveness of each method, five problems were chosen from the literature, while a sixth problem was developed by the author. The test problem set represents various types of optimization (equality and inequality constraints, bounded and unbounded). Four numerical performance metrics and one visual criterion were chosen for quantitative and qualitative comparisons: (1) the total number of function evaluations; (2) the total run time (excluding non-program-specific calculations such as variance calculation, plotting, and mapping); (3) the variance of the solution distribution in the non-dominated regions; (4) the total numbers of dominated solutions and non-optimal solutions; and lastly (5) graphical representation of the Pareto fronts for discussion. These metrics were chosen to represent the speed of the algorithms, as well as the solution quality in terms of well-spread solutions. We define the variance as the sum of the squared differences between each distance between neighbouring Pareto solutions and the average distance between Pareto solutions, divided by the total number of Pareto solutions less one. NSGA-II uses random population generation; the results could be made repeatable by resetting the random number generator in MATLAB, but this was not done, in order to preserve the randomness. The mean value and standard deviation were instead calculated after multiple runs.
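The distribution variance defined above can be computed directly from an ordered list of Pareto points; the following MATLAB sketch does so for a bi-objective front, with the input matrix assumed ordered along the front.

    % Variance of the solution distribution: J is an n-by-2 matrix of
    % Pareto points, ordered along the front; there are n-1 distances.
    function v = spacingVariance(J)
        d = sqrt(sum(diff(J).^2, 2));           % distances between neighbours
        v = sum((d - mean(d)).^2) / numel(d);   % numel(d) = n - 1
    end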

4.1 Test Problems

4.1.1 Problem 1

The first problem was proposed by [7]. The problem contains two equality constraints with five unbounded design variables, as follows:

    Minimize    J_1 = x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2
                J_2 = 3 x_1 + 2 x_2 - x_3 / 3 + 0.01 (x_4 - x_5)^3
    Subject to  x_1 + 2 x_2 - x_3 - 0.5 x_4 + x_5 = 2
                4 x_1 - 2 x_2 + 0.8 x_3 + 0.6 x_4 + 0.5 x_5^2 = 0
                x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 \le 10
                i = 1, 2, ..., 5                                      (4-1)

4.1.2 Problem 2

The second problem was taken from the AWS method paper [6]. The problem contains no constraints but has multiple local optima. This is a maximization problem for both objective functions; maximization in this thesis was realized by minimizing the negative of the objective functions.

    Maximize    J_1 = 3 (1 - x_1)^2 exp( -x_1^2 - (x_2 + 1)^2 )
                      - 10 ( x_1/5 - x_1^3 - x_2^5 ) exp( -x_1^2 - x_2^2 )
                      - 3 exp( -(x_1 + 2)^2 - x_2^2 ) + 0.5 (2 x_1 + x_2)
                J_2 = 3 (1 + x_2)^2 exp( -x_2^2 - (1 - x_1)^2 )
                      - 10 ( -x_2/5 + x_2^3 + x_1^5 ) exp( -x_2^2 - x_1^2 )
                      - 3 exp( -(2 - x_2)^2 - x_1^2 )
    Subject to  -3 \le x_i \le 3,  i = 1, 2                           (4-2)
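As an illustration of how these test problems were posed for the gradient-based solvers, the following MATLAB sketch codes Problem 1 (as reconstructed in Equation 4-1) in the form expected by fmincon; the starting point is an arbitrary choice, and objective normalization by the anchor points is omitted for brevity.

    % Problem 1 in fmincon form (sketch based on Eq. 4-1).
    J1 = @(x) sum(x.^2);
    J2 = @(x) 3*x(1) + 2*x(2) - x(3)/3 + 0.01*(x(4) - x(5))^3;
    p1con = @(x) deal(sum(x.^2) - 10, ...                    % c(x)  <= 0
        [x(1) + 2*x(2) - x(3) - 0.5*x(4) + x(5) - 2;         % ceq(x) = 0
         4*x(1) - 2*x(2) + 0.8*x(3) + 0.6*x(4) + 0.5*x(5)^2]);

    w  = 0.5;                               % one Weighted Sum weight
    x0 = zeros(5, 1);                       % arbitrary starting point
    x  = fmincon(@(x) w*J1(x) + (1 - w)*J2(x), x0, [], [], [], [], ...
                 [], [], p1con, optimset('Display', 'off'));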

4.1.3 Problem 3

The third problem is a three-bar truss optimization problem (Figure 4-1), which was first presented by Koski [77] and used in other studies [6]. A concentrated load was applied at the tip of the structure, and the maximum allowable stress within the domain was set at 200 MPa, as shown in Figure 4-1. The total volume and the tip deflection were used as objective functions, and the cross-sectional areas of the truss members were used as design variables.

[Figure 4-1: three-bar truss with cross-sectional areas A_1, A_2 and A_3, characteristic length L, applied tip load F, and tip deflections δ_1 and δ_2; L = 1 m, E = 200 GPa, F = 20 kN.]
Figure 4-1: Koski's three-bar truss problem used as Problem 3. The tip deflection and volume of the trusses were minimized.

The mathematical optimization statement is defined as follows:

    Minimize    J_1 = Volume(A)
                J_2 = 0.25 \delta_1(A) + 0.75 \delta_2(A)
    Subject to  -200 MPa \le \sigma_i \le 200 MPa
                A = [ A_1  A_2  A_3 ]
                0.1 cm^2 \le A_i \le 2 cm^2,  i = 1, 2, 3             (4-3)

4.1.4 Problem 4

Messac et al. used this problem as an example for the original NC method [8]. The problem is an unbounded, constrained problem, with a single inequality constraint and two simple objective functions. The problem formulation is as follows:

    Minimize    J_1 = x_1
                J_2 = x_2
    Subject to  x_1^2 / 4 + x_2^2 \ge 1
                x_i \ge 0,  i = 1, 2                                  (4-4)

4.1.5 Problem 5

The fifth test problem is a bounded, unconstrained optimization problem and was proposed in the NSGA-II research article [5]. This problem was originally developed to test different evolutionary algorithms, but in this research it was used to evaluate the four gradient-based algorithms as well as NSGA-II.

    Minimize    J_1 = \sum_{i=1}^{n-1} [ -10 exp( -0.2 \sqrt{ x_i^2 + x_{i+1}^2 } ) ]
                J_2 = \sum_{i=1}^{n} [ |x_i|^{0.8} + 5 sin( x_i^3 ) ]
    Subject to  -5 \le x_i \le 5,  i = 1, 2, 3                        (4-5)

4.1.6 Problem 6

The last test problem is a manufacturing process simulation. A sheet metal part was bent using a punch and die setup, forming a bracket. The bracket was then subjected to a constant load, as

shown in Figure 4-2. The two objective functions for minimization were the amount of energy required to bend the part (in Phase 1) and the tip deflection under the applied load (in Phase 2). The geometry was modified from the work by Wisselink and Huétink [78].

[Figure 4-2: punch and die setup annotated with the punch stroke (S), punch radius (R), sheet thickness (T), die separation (DS), sheet width (W), and the applied force F.]
Figure 4-2: (Left) First phase, showing the punch and die setup. The energy required to bend the sheet metal was extracted from the model and used as the first objective function. (Right) Second phase, for the tip deflection calculation. The stress distribution from the first-phase model was reapplied on the model. The tip deflection was used as the second objective function.

The mathematical problem statement including all constraints is the following:

    Minimize    J_1 = Energy(x)
                J_2 = Tip Deflection \delta(x)
    Subject to  90° - \Delta\theta_{allowed} \le \theta(x) \le 90° + \Delta\theta_{allowed}
                \delta(x) \le \delta_i
                Mass(x) \le M_i
                x = [ T, R, S, DS, W ]^T
                0.005 m \le Sheet Thickness (T) \le 0.02 m
                0.01 m \le Punch Radius (R) \le 0.05 m
                0.06 m \le Punch Stroke (S) \le 0.1 m
                0.075 m \le Die Separation (DS) \le 0.09 m
                0.025 m \le Sheet Width (W) \le 0.2 m                 (4-6)

The material chosen was steel, with a Young's modulus of 210 GPa, a Poisson's ratio of 0.3, and a friction coefficient of 0.1. A multi-linear kinematic model in ANSYS was used to simulate the nonlinear material properties. A force of 327 N was used in the second phase. Δθ_allowed was set to 1°, δ_i to 1 cm, and M_i to a fixed mass limit. This problem was chosen for two main reasons. First, the problem represents a real-life engineering application. Second, although the problem is two-dimensional and geometrically simple, it is nonlinear in nature: it involves large displacement and large deformation and requires contact analysis. A finite element analysis was therefore required, and the total run time for each function evaluation was around 5 minutes. The number of function evaluations dominated the run time, with algorithm-specific calculations representing only a minor portion.

4.2 Numerical Results

This section shows the numerical results of all algorithms for all test problems. It should be noted that the plotted solutions do not necessarily represent the true Pareto fronts; all Pareto optimal solutions, along with dominated or non-optimal solutions, are plotted. For Problems 1 to 5, the simulations were run on a 32-bit Windows-based operating system, with a Centrino dual-core 2.53 GHz processor and 3.5 GB of RAM. The sixth problem was run on a Sun Blade server running Linux, with an Opteron dual-core 3.0 GHz processor and 4 GB of RAM. Where possible, the entire feasible domain was mapped: we performed a factorial analysis on the design variables, and the solutions that satisfied the constraints were plotted in the objective space. Function mapping is not done in optimization due to the high computation cost, and was done here simply to ensure that the results obtained were optimal. The mapping is shown as grey solutions in the solution plots when the problem has no unbounded design variables or equality constraints. The algorithms tested do not obtain these solutions; they were obtained separately.

4.2.1 Problem 1

The WS, NC and NBI methods produced 17 solution points. For fair comparison, the parameters in the AWS method were selected such that the same number of solutions is obtained: δ_J and ε were set at 0.095 and 0.05, respectively. The refine multiplier parameter (C) for AWS was set at 1 for all optimizations in this thesis. NSGA-II had a population of 100 and was assumed to have converged if only dominated solutions were obtained over eight consecutive generations. The probability of crossover was 0.9, and the Simulated Binary Crossover (η_c) and polynomial mutation (η_m) distribution indices were both set at 20, as suggested by the authors [5]. For NSGA-II, a tolerance of 0.5 was implemented for the two equality constraints, and large enough bounds were specified for the design variables. For consistency, the same parameter values of the MATLAB optimization function fmincon were used in all gradient-based optimization algorithms. The maximum number of function evaluations for every sub-optimization was 1,000, and the default values were used for all other optimization parameters. Table 4-1 shows the numerical results.

Table 4-1: Numerical results for Problem 1. The five methods are compared in terms of their run times, numbers of function evaluations, variances of solution distribution, and numbers of dominated and non-optimal solutions.

                                   WS       AWS      NC       NBI      NSGA-II
    No. of function evaluations   1,214    1,480    1,075    2,116    25,000
    Run time (s)                  -        -        -        -        [2]
    Variance (10^-4)              -        -        -        -        [2]
    No. of dominated solutions    0 [1]    0        0        0        N/A
    No. of non-optimal solutions  0        0        0        0        N/A

    [1] WS has 6 repeated solutions.
    [2] After 250 generations, the problem was not solved properly. Run time and variance values were omitted to prevent confusion. See the NSGA-II panel of Figure 4-3.

In order to compare the results graphically, all solution points were plotted in the objective space (Figure 4-3). The design variable bounds were set to ±10 for NSGA-II; this selection required

some a priori knowledge of the Pareto front, but even so, the results after 250 generations poorly represented the Pareto front.

[Figure 4-3: five panels of the J_1-J_2 plane, one per method (WS, AWS, NC, NBI, NSGA-II).]
Figure 4-3: Pareto front representations for Problem 1. All solutions shown lie on the Pareto front, except those by NSGA-II. The problem type is minimize-minimize.

4.2.2 Problem 2

The previous problem showed a smooth convex front. Problem 2, on the other hand, contains non-convex regions and dominated regions as well as convex and non-dominated regions. The problem is nonlinear in nature and has many local optima. The AWS offset distance δ_J was 0.08, and ε was 0.05. Because the AWS method determined 17 solution points, the same number of solutions was prescribed for the WS, NC and NBI methods. The NSGA-II parameters were set at the same values as in Problem 1 and were kept constant throughout this research.

As discussed in the literature [6], the solution of this problem is highly dependent on the choice of the initial design vector, because there exist many local optimal solutions. Figure 4-4 shows a solution when an arbitrary initial design vector was used for the WS method; the Pareto front was poorly represented (note that this is a maximize-maximize problem). Each sub-optimization was performed using several different initial design vectors, and the best solutions among these trials were chosen. The choice of the initial design vectors is represented by a grid in Figure 4-5. In the case of a grid size of 2, 16 initial design vectors for each sub-optimization were used, and with a grid size of 1, 49 optimization trials were conducted. Table 4-2 shows the results with respect to four grids of the initial design vectors.

[Figure 4-4: the J_1-J_2 plane with a sparse WS solution set.]
Figure 4-4: Poor representation of the Pareto front of Problem 2 when an arbitrary design vector is used.

[Figure 4-5: two panels of the x_1-x_2 plane showing the coarse and fine grids of initial design vectors.]
Figure 4-5: Grid size of 2 (left) and 1 (right) for initial design vectors in Problem 2. The lower and upper bounds are x_L = -3 and x_U = 3 for x_1 and x_2.

Table 4-2: Numerical results for Problem 2. The initial condition grid setup is not applicable to the genetic algorithm. For NSGA-II, the first value represents the mean value obtained from multiple runs, and the standard deviation is shown in parentheses.

                                   WS                                     AWS
    Grid size                     2       1.5     1       0.5            2       1.5     1       0.5
    No. of function evaluations   5,853   9,220   19,838  70,892         24,246  38,530  75,740  263,930
    Run time (s)                  -       -       -       -              -       -       -       -
    Variance (10^-4)              -       -       -       -              -       -       -       -
    No. of dominated solutions    -       -       -       -              0       0       0       0
    No. of non-optimal solutions  -       -       -       -              0       0       0       0

                                   NC                                     NBI
    Grid size                     2       1.5     1       0.5            2       1.5     1       0.5
    No. of function evaluations   6,836   12,370  25,340  95,930         11,828  16,733  -       -
    Run time (s)                  -       -       -       -              -       -       -       -
    Variance (10^-4)              -       -       -       -              -       -       -       -
    No. of dominated solutions    1       -       -       -              1       1       -       -
    No. of non-optimal solutions  0       -       -       -              3       0       -       -

                                   NSGA-II
    No. of function evaluations   1,330 (65)
    Run time (s)                  4.3 (0.717)
    Variance (10^-4)              0.42 (0.71)
    No. of dominated solutions    0
    No. of non-optimal solutions  0

Figure 4-6 shows the results of all methods with the grid size of 2. With the exception of the NBI method, the results were the same regardless of the grid size used. NBI (1) and NBI (2) show the results with grid sizes of 2 and 1.5, respectively; the other two finer grid sizes gave the same solutions as the grid size of 1.5.

[Figure 4-6: six panels of the J_1-J_2 plane (WS, AWS, NC, NBI (1), NBI (2), NSGA-II), with the dominated and non-optimal solutions annotated in the NC and NBI panels.]
Figure 4-6: Pareto front representations for Problem 2. The WS, AWS, and NC methods used a grid size of 2. NBI (1) used a grid size of 2, and NBI (2) used a grid size of 1.5. Only one set of NSGA-II solutions is shown; other NSGA-II results show a similar trend. The problem type is maximize-maximize. The set of light-grey points represents the entire feasible domain in the objective space.

4.2.3 Problem 3

The Pareto front in Problem 3 contains one convex and one concave region. A dominated region exists in the middle of the Pareto front (on the lower parts of the concave region). The offset distance δ_J in the AWS method was 0.1, and ε was 0.05. A total of 15 solution points was determined by all deterministic methods, and the numerical results are shown in Table 4-3.

Table 4-3: Numerical results for Problem 3. For NSGA-II, the first value represents the mean value obtained from multiple runs, and the standard deviation is shown in parentheses. The NBI solution set contains no dominated solutions with respect to its own solutions, but one of the solutions is dominated if compared to the true Pareto front.

                                   WS      AWS     NC      NBI       NSGA-II
    No. of function evaluations   -       -       -       -         - (521)
    Run time (s)                  -       -       -       -         - (1.696)
    Variance (10^-4)              -       -       -       -         - (0.2)
    No. of dominated solutions    8       0       0       1 [1]     0
    No. of non-optimal solutions  0       0       0       0         0

    [1] If only the NBI solutions are compared with each other, there is no dominated solution. However, compared to the true Pareto front, there is one dominated solution.

Figure 4-7 shows the results obtained by all algorithms. The NC method initially gave one non-optimal solution, but the parameters were perturbed to remove it. The AWS solution set included the tip solution, which is located at the left end of the lower portion of the Pareto front. The WS, AWS and NSGA-II methods found the tip solution, also defined as a knee point. These types of solutions require a significant sacrifice in at least one objective function to slightly improve another. In this problem, a better tip deflection is only obtained by increasing the volume by 40%. Due to their mathematical formulation, the NC and NBI algorithms overlooked this solution.

[Figure 4-7: five panels plotting volume [10^-5 m^3] against tip deflection [m] for the WS, AWS, NBI, NC, and NSGA-II methods, with the tip solution marked in the AWS panel.]
Figure 4-7: Pareto front representations for Problem 3. Fifteen points were obtained by the AWS, WS, NBI and NC methods, and 100 by the NSGA-II method. The set of light-grey points represents the entire feasible domain in the objective space. The problem type is minimize-minimize.

4.2.4 Problem 4

The fourth test problem contains unbounded design variables. Because NSGA-II cannot handle this problem in the original form, reasonably large bounds (±10) were specified. The AWS parameters δ_J and ε were set at 0.11 and 0.05, respectively, and the maximum number of iterations was 20. The AWS method created 17 solutions, and the same number of solutions was set for the NBI, WS and NC methods. The results can be found in Table 4-4. The MATLAB objective function tolerance was changed slightly to obtain better anchor points. Figure 4-8 shows the Pareto front representations.

Table 4-4: Numerical results for Problem 4. For NSGA-II, the first value represents the mean value obtained from multiple runs, and the standard deviation is shown in parentheses.

                                   WS      AWS     NC      NBI     NSGA-II
    No. of function evaluations   637     -       -       -       - (2,589)
    Run time (s)                  -       -       -       -       - (7.4)
    Variance (10^-4)              -       -       -       -       - (1.9)
    No. of dominated solutions    0       0       1       1       0
    No. of non-optimal solutions  0       0       0       0       0

[Figure 4-8: five panels of the J_1-J_2 plane (WS, AWS, NC, NBI, NSGA-II).]
Figure 4-8: Pareto front representations for Problem 4. The true anchor points in all plots are (2, 0) and (0, 1).

4.2.5 Problem 5

This test problem revealed a fundamental weakness that all gradient-based methods share. The problem formulation contains an absolute value, as shown in Equation 4-5, and is therefore highly nonlinear and non-smooth. None of the gradient-based methods could solve this problem. Sample results by the AWS and NBI methods are shown in Figure 4-9.

Probabilistic methods can handle non-continuously differentiable functions, and the NSGA-II method solved this problem properly; Figure 4-10 and Table 4-5 show the results.

[Figure 4-9: two panels of the J_1-J_2 plane (AWS, NBI).]
Figure 4-9: Sample results for Problem 5 with the gradient-based algorithms.

Table 4-5: Numerical results for Problem 5. The first value represents the mean value obtained from multiple runs, and the standard deviation is shown in parentheses. None of the gradient-based algorithms could adequately solve the problem; their results are not shown here.

                                   NSGA-II
    No. of function evaluations   3,388 (517)
    Run time (s)                  - (1.77)
    Variance (10^-4)              0.49 (0.1)
    No. of dominated solutions    0
    No. of non-optimal solutions  0

[Figure 4-10: the J_1-J_2 plane with the NSGA-II solution set.]
Figure 4-10: Pareto front representation for Problem 5. Only NSGA-II solved the problem properly. The problem type is minimize-minimize.

4.2.6 Problem 6

Problem 6 represents an engineering manufacturing application, and it requires the use of a finite element analysis solver. ANSYS was used in conjunction with MATLAB, running under a Linux operating system. For the AWS method, δ_J and ε were 0.15 and 0.05, respectively. The number of initial points to be determined by the WS portion of the AWS algorithm was set at 2. The required number of solutions for the other methods was 14. Each of the four deterministic methods was run with several different initial conditions. NSGA-II was run with 6 trials; the optimization was terminated after subsequent Pareto fronts looked similar and all solutions belonged to the first front. The maximum number of function evaluations in MATLAB was changed to 400. Since the range of the objective values varied greatly before normalization, different tolerances were used on the objective functions. The maximum deflection (δ_i) was 1 cm, and the maximum mass (M_i) was fixed. These constraint values were determined using average values of the upper and lower bounds of the design variables. The algorithm results are summarized in Table 4-6, and the graphical results are shown in Figure 4-11. Depending on the initial design vector, the results changed slightly. Only one set of converged results is presented for each method.

Table 4-6: Numerical results for Problem 6. Due to the long computing time, only 6 NSGA-II analyses were conducted. For NSGA-II, the first value represents the mean value, and the standard deviation is shown in parentheses.

                                   WS        AWS     NC      NBI     NSGA-II
    No. of function evaluations   950       2,000   -       -       1,800 (300)
    Run time (s)                  266,000   -       -       -       - (93,814)
    Variance (10^-4)              -         -       -       -       - (0.157)
    No. of dominated solutions    0         0       -       -       -
    No. of non-optimal solutions  0         0       -       -       -

[Figure 4-11: five panels plotting energy [J] against tip deflection δ [m] for the WS, AWS, NC, NBI, and NSGA-II methods.]
Figure 4-11: Pareto front representations for Problem 6. Each algorithm was tested with multiple initial design vectors, but only one of the converged solutions is presented here.

The results of Problems 1-6 are summarized in Figure 4-12. Only the relative performance (shown as the height of the bars) is given. For all performance metrics, the lower the bar, the better: a low bar indicates a lower number of function evaluations, a lower CPU time, a lower variance, and fewer non-Pareto solutions. No numerical values are shown, for clarity.

[Figure 4-12: a grid of bar charts, one row per performance metric (number of function evaluations; run (CPU) time; variance; number of dominated/non-optimal solutions) and one column per problem (Problem 1, unbounded, equality-constrained; Problem 2, bounded, unconstrained; Problem 3, bounded, constrained; Problem 4, unbounded, constrained; Problem 5, bounded, unconstrained; Problem 6, bounded, constrained), with one bar per method (WS, AWS, NC, NBI, NSGA-II) and "N/A" where a method did not solve the problem.]
Figure 4-12: Performance comparison for all methods on all problems. The height represents the relative performance of a method compared to the other methods for each performance metric, where the highest bar represents the worst performance. The numbers of dominated solutions and non-optimal solutions were combined into one metric.

4.3 Discussion

4.3.1 WS

The Weighted Sum (WS) method quickly provided a rough idea of the Pareto front. The MATLAB implementation was very simple; the average run time for the mathematical problems was very short; and it generally required fewer function evaluations than the other multiobjective optimization methods. The solutions, however, were not as effective as those of the other methods. Problem 1 has a simple Pareto front, but the WS method performed poorly, exhibiting a large variance and more function evaluations than the NC method. It determined five identical solutions, and increasing the number of weights did not resolve the problem of uneven solution distribution. As already discussed in the literature [9], the WS method could not find solutions on non-convex regions. In Problems 2 and 3, the WS method represented only the convex regions, with very high variances in solution distribution. In Problem 4, the variance was high because the anchor points were far from the other Pareto solutions. From the perspective of a trade-off scenario, however, there is very little trade-off near the anchor points. As with the other gradient-based algorithms, the WS method did not solve Problem 5. Lastly, the sixth problem proved to be challenging for the WS method, but the overall run time was lower than those of the other methods. The solution distribution was poor, but no dominated or non-optimal solutions were obtained with the initial condition set shown. Changing the initial conditions did not give a better solution distribution.

4.3.2 AWS

The Adaptive Weighted Sum (AWS) method did not determine any dominated or non-optimal solutions, and the algorithm produced well-distributed solutions in all test problems (except Problem 5). Indeed, the method could effectively handle Pareto fronts with non-convex regions. In Problem 2, AWS found the true Pareto front using the initial design vector grid with the lowest resolution (largest grid size). The AWS method required greater numbers of function evaluations and longer computing times than the other deterministic methods in Problems 2 and 3, but no dominated solutions were obtained. Starting from a rough representation by the WS method run in the first iteration, the AWS method adaptively adds solutions to the Pareto front representation. This characteristic is generally advantageous, but it could be problematic in some unusual situations. In Problem 3, the AWS method found the tip Pareto solution (see Figure 4-7), because the WS method could easily find this point even with a small number of weights; the NBI and NC methods could not determine this solution. On the other hand, the solution dependency on the initial WS step can pose a concern as well: if the WS step does not find important regions of the Pareto front in the first iteration, the AWS method cannot correct this later, and no solutions will be sought in those regions. It should be noted, however, that our experience shows such a case occurs rarely. As the AWS authors already discussed [6], the AWS method has difficulty dealing with nearly vertical or horizontal parts of a Pareto front. The Pareto fronts in Problems 4 and 6 have this type of shape; the AWS method constructed less evenly distributed solutions in Problem 4, but it performed well in Problem 6. If the Pareto front is perfectly horizontal or vertical, however, the AWS method cannot be used. The WS algorithm is very useful in quickly finding solutions of interest, such as the tip solution in Problem 3, but it cannot deal with non-convex regions. This research has shown that the AWS

method, while maybe not as fast as the WS method, can potentially deal with more classes of problems than the NC or NBI methods. The ability to obtain only non-dominated solutions is an asset that cannot be overlooked. Lastly, the computer implementation of the AWS algorithm was more difficult than that of the other methods.

4.3.3 NC

The Normal Constraint (NC) method showed relatively high numerical efficiency in terms of the number of function evaluations and run time in all test problems. In Problem 6, the run time was only slightly longer than that of the WS algorithm and considerably shorter than those of the NBI, AWS or NSGA-II methods. The results of Problem 3 illustrated one of the drawbacks of the NC method: the tip-point solution was not found. In Problems 2 and 4, a dominated solution was obtained; for Problem 4, however, this was likely caused by the tolerance on the optimization function in MATLAB. The overall performance was good, but a Pareto filter is necessary to remove dominated solutions. The NC method allows a certain level of robustness which is not available in the NBI method. In highly nonlinear problems, satisfying an equality constraint can be problematic, whereas it is easier to satisfy inequality constraints. The NC method introduces inequality constraints instead of equality constraints, and this allows the algorithm to deal with nonlinear problems more effectively. In addition, the implementation of the NC algorithm was much simpler than that of any other method, except the WS method. The NC authors provided a detailed and well-laid-out explanation of the method [8]. The method is considered easy to understand for beginners in multiobjective optimization, arguably even easier than the WS method, in which the weights geometrically represent a coordinate rotation.

4.3.4 NBI

The last gradient-based method examined was the Normal Boundary Intersection (NBI) method. While similar to the NC method, it applies an equality constraint instead of an inequality constraint. From the viewpoint of overall performance, the NBI method was comparable to the NC method: it required more function evaluations in Problems 1, 2 and 6, but fewer in Problems 3 and 4. The NBI method, like all other methods tested here, gave a better solution distribution and variance than the WS method. The method was prone to finding dominated solutions, which means it is necessary to use a Pareto filter afterwards. In Problem 3, the method found a solution in the dominated region and failed to determine the tip solution. Another problem is a numerical difficulty, or lower computing efficiency, caused by the introduction of an equality constraint. In Problem 2, as an example, a finer grid of initial design vectors was required in order to eliminate non-optimal solutions, and this increased the computing cost. Lastly, the NBI method produced the best solution distribution (or minimum variance) among all deterministic methods in Problem 6, but it took considerably more time than any other method. This problem was highly nonlinear and posed a challenge for the NBI method, which introduces an equality constraint. In addition, it was more difficult to implement the NBI algorithm than most other algorithms. The fifth problem could not be solved by any of the gradient-based methods. Sequential Quadratic Programming (SQP), which is used in the MATLAB optimization subroutine (fmincon), cannot effectively handle non-smooth functions such as the second objective function in Problem 5. The numerical results were obtained by trying multiple initial design vectors and by tweaking other parameters; however, the quality of the solutions was not acceptable (Figure 4-9). The heuristic method, on the other hand, solved this problem easily. The fundamental difference between the

requirements of gradient-based algorithms and random search algorithms was evident in this example.

4.3.5 NSGA-II

The performance of the Nondominated Sorting Genetic Algorithm-II (NSGA-II) varied according to the test problem. It could not solve Problem 1, which has two equality constraints and unbounded design variables. Reasonably large bounds were specified for the design variables, for which a priori knowledge of the Pareto front was required, and a tolerance on the constraints was allowed, which was not granted to the other methods; nevertheless, NSGA-II obtained solutions of very low quality. Problem 4 has two unbounded design variables, and the NSGA-II method produced unevenly distributed solutions. These drawbacks were investigated and some remedies were proposed [79]: an equality constraint can be eliminated by mathematical manipulation, and this procedure reduces the number of design variables by one (see Problem 1). However, this technique cannot be used if the mathematical expression is complicated or if the equality constraint is not expressed by an explicit mathematical equation, which is the case in most real-world applications. Another technique, which is useful in handling unbounded design variables, is to change the bounds of the design variables using a mapping function; for example, the arctangent function maps the domain of (-∞, ∞) to the range of (-π/2, π/2). However, this type of method is useful only in the range where the sensitivity of the function is sufficiently large, and without foreknowledge of the problem's solution, it is difficult to choose the right parameters for the mapping function. Neither of the techniques was implemented, for two main reasons. First, the objective of this research was to evaluate the performance of each multiobjective optimization algorithm against test problems whose solutions were not known; therefore, the comparison would not be fair if

human mathematical inspiration and skill were introduced only for NSGA-II. Second, the equality constraint (with a small tolerance) in Problem 6 could not be represented in a closed-form mathematical equation because it is a real-world application. It was necessary to implement some large bounds for the design variables and tolerances for the constraints because, without these modifications, NSGA-II could not have been executed. The NSGA-II algorithm performed very well for the problems that have only bounded design variables and no equality constraints. In Problem 2, the NSGA-II method demonstrated the smallest number of function evaluations and the best solution distribution among all tested algorithms. It also determined solutions of high quality in Problem 3, albeit at the expense of a higher computing cost. The NSGA-II method was the only algorithm that was able to solve the fifth test problem. Quantitative comparison with the other methods was not even possible because no other method found any meaningful solutions; Table 4-5 and Figure 4-10 clearly show that NSGA-II represented the Pareto front effectively. For all deterministic algorithms, a built-in MATLAB subroutine was used, and this made the computational implementation simpler. On the other hand, all subroutines for the NSGA-II method had to be programmed, but the implementation nevertheless was found to be simple.

Chapter 5
Multiobjective Robust Optimization Considering Pareto Front Changes due to Uncertainty

5.1 Fundamental Concepts

Deterministic optimization obtains optimum solutions that are not robust when uncertainty is considered. In robust design, we seek designs that are insensitive to input variations, and a possible approach is to limit the allowable variation on the performance measure, which is expressed in terms of the objective functions in the case of multiobjective optimization. The implementation of a robustness requirement would change the Pareto front and/or the feasible domain. Figure 5-1 illustrates three possible modes of change of a Pareto front and feasible domain. The far left graph in each row illustrates the entire feasible domain; the numerical value within each region indicates the maximum objective function value variation, in percentage, in that region. Variation, a term that is used routinely in this thesis, should be defined first. The term variation is used to collectively represent standard deviations of objective functions, and its mathematical definition is given in Equations (5-4) and (5-5). However, any other concept in previous studies that quantitatively represents the degree of diversity or variation, such as the sensitivity index [44],

standard deviation [36, 39-41], dimensionless (or normalized) standard deviation [51], or robustness index [53], can serve the same purpose. For illustrative purposes, the entire feasible domain was separated into three regions: the 0-33% variation, 33-66% variation, and 66-100% variation domains. Here, 0% represents the smallest possible variation for the given problem; the design vectors (in the design space) and feasible domain (in the objective space) with 0% variation are obtained by minimizing only the variation of the objective functions, without considering the mean values. 100% represents the largest possible variation in the problem, and the corresponding design vectors and feasible domain are determined by maximizing the variation of the objective functions. If the mean values of the objective functions are minimized, the resulting optimum solutions would probably have large variance; however, it is not necessarily the largest possible variance. Figure 5-1 (a) shows a shift of the Pareto front: the Pareto curve moves away from the Utopia point as the level of the robustness requirement increases. Figure 5-1 (b) and (c) show a Pareto front shortening and a case with no Pareto front change, respectively. Note that the feasible domain shrinks in all three cases.

[Figure 5-1: three rows of J_1-J_2 plots, (a) Pareto front shift, (b) Pareto front shortening, (c) no effect on the Pareto front; each row shows the feasible domain divided into 33%, 66%, and 100% variation regions and the Utopia point.]
Figure 5-1: Three possible modes of Pareto front and feasible domain changes in a bi-objective case.

Two metrics that quantitatively represent Pareto front changes were defined: S_X% and R_X%. As shown in Figure 5-2 (a), the Pareto front shift distance, S_X%, represents the distance in the normalized objective space between the Pareto front with X% allowable variation and the Pareto front with 100% variation (variation is still applied on the side constraints, but there is no requirement on variation). In this thesis, the distance is measured along the 45-degree line in the normalized objective space, since the curves are often not parallel. The distance between the two points connecting the

optimal curve and the curve with X% allowable variation along the 45-degree line is measured. For highly non-parallel lines, taking the average distance at more than one point (for example at 30, 45 and 60 degrees) is recommended. The Pareto front length ratio, R_X%, is defined as the ratio of the length of the Pareto front with X% variation to the length of the Pareto front with 100% variation (Figure 5-2 (b)):

    R_{X%} = L_{X%} / L_{100%}                                        (5-1)

Only the Pareto solutions that overlap with the 100% Pareto front are considered in the calculation of L_X% (the red line in Figure 5-2 (b)). Note that the Pareto front lengths (L_X% and L_100%) are measured in the normalized objective space.

[Figure 5-2: two panels in the normalized J_1-J_2 space, (a) the Pareto front shift distance S_X%, measured along the 45-degree line between the optimal Pareto curve and the reduced Pareto curve, and (b) the Pareto front length ratio R_X%, comparing the length L_X% of the portion that remains Pareto optimal with the length L_100% of the optimal Pareto curve.]
Figure 5-2: Quantification of the two modes of Pareto front changes: Pareto front shift and Pareto front shortening.
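Both metrics reduce to simple arc-length computations once the fronts are available as ordered point sets; the following MATLAB sketch computes R_X% from Equation 5-1 for illustrative point lists, with the restriction to the overlapping portion assumed to have been applied when assembling Jx.

    % Pareto front length ratio R_X% (Eq. 5-1) in the normalized
    % objective space; Jx and J100 are ordered n-by-2 point lists.
    frontLength = @(J) sum(sqrt(sum(diff(J).^2, 2)));
    Jx   = [0.2 0.8; 0.4 0.5; 0.6 0.3];                  % illustrative points
    J100 = [0.0 1.0; 0.2 0.8; 0.4 0.5; 0.6 0.3; 1.0 0.0];
    R = frontLength(Jx) / frontLength(J100);             % R_X% = L_X% / L_100%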

5.2 Method Overview

The traditional multiobjective optimization problem, considering both design variables and additional input parameters, without uncertainty consideration, is stated as follows:

    Minimize    [ J_1(x, p), J_2(x, p), ..., J_N(x, p) ]^T
    Subject to  g(x, p) \le 0
                h(x, p) = 0
                x_{i,L} \le x_i \le x_{i,U},  i = 1, 2, ..., n        (5-2)

where J is the objective function vector, N the total number of objective functions, x the design vector, p a vector of other input parameters (non-design variables), and g and h are vectors of inequality and equality constraints, respectively. The proposed method first determines the maximum and minimum possible variations of the objective functions for the problem, and this variation range is divided into several levels. For each variation level, a sub-optimization is conducted with an additional constraint on variation, and a Pareto front is generated. This procedure generates a set of Pareto fronts, and each Pareto front has a different level of insensitivity or robustness. This framework allows for a design choice in two stages. In the first stage, we can visually examine how the Pareto front shifts or shrinks according to the robustness level and choose a proper Pareto front; indeed, in this stage we can see the trade-off between robustness and the Pareto front quality (which is represented by its location, length, and shape). Once a Pareto front is chosen, we then examine the trade-off among the objective functions on the Pareto front and select a proper optimum solution; only this second stage is available in traditional multiobjective optimization.

5.3 Step-by-Step Procedure

In this section, a step-by-step procedure of the proposed method is described.

Step 1: Optimization problem setup

Firstly, representative metrics for varying objective function values must be selected. A widely-used method is to use the mean value and standard deviation; in this chapter, μ_j(x, p) and σ_j(x, p) denote the mean and standard deviation of the j-th objective function. Secondly, the user must select the total number of Pareto fronts to be generated (P), which is the same as the number of robustness levels of the objective functions. The Pareto front number, P, dictates the number of sub-optimizations, and in turn the computing cost, as discussed in Step 3. Hence a reasonable number should be selected considering the required resolution in robustness-level representation and the computing cost.

Step 2: Minimum and maximum variations calculation

For each objective function, the minimum and maximum possible variations are calculated while satisfying the inequality and equality constraints and the side constraints of the design variables. In order to satisfy the inequality constraints with input variations in this study, the maximum value of each constraint is taken and used as the active constraint. A more sophisticated approach, such as RBDO, may be used instead. The side constraints must also be changed: each of the upper and lower bounds shifts by Δx_i, where Δx_i is the one-sided uncertainty range on the i-th design variable. It is not possible to satisfy equality constraints exactly when there are variations, and thus only the nominal values of the design variables are considered for the equality constraints; in real-life

problems, however, a proper tolerance may be implemented. The two optimization problems (maximum and minimum variations) for the j-th objective function are then defined as follows:

    For j = 1, ..., N:
    Minimize    \pm \sigma_j(x, p)
    Subject to  Max[ g(x \pm \Delta x, p \pm \Delta p) ] \le 0
                h(x, p) = 0
                x_{i,L} + \Delta x_i \le x_i \le x_{i,U} - \Delta x_i,  i = 1, 2, ..., n    (5-3)

where N is the total number of objective functions. The minimum and maximum variations of all N objective functions are summed up, producing the normalized minimum and maximum variations:

    \bar{\sigma}_{min} = \sum_{j=1}^{N} \sigma_{j,min} / \mu_j ,
    \bar{\sigma}_{max} = \sum_{j=1}^{N} \sigma_{j,max} / \mu_j        (5-4)

where μ_j is the mean value of the j-th objective function, and σ_{j,min} and σ_{j,max} are its minimum and maximum possible variations, respectively. If any of the mean values is close to or equal to zero, a singularity problem arises, and the total minimum and maximum variations are used instead:

    \sigma_{min} = \sum_{j=1}^{N} \sigma_{j,min} ,
    \sigma_{max} = \sum_{j=1}^{N} \sigma_{j,max}                      (5-5)

The number of single-objective optimizations to be performed in this step is 2N.

Step 3: Sub-optimization problem

A sub-optimization with each of the robustness levels is performed P times, where P is the total number of Pareto fronts selected in Step 1.

    For k = 1, ..., P:
    Minimize    [ \mu_1( J_1(x \pm \Delta x, p \pm \Delta p) ), ..., \mu_N( J_N(x \pm \Delta x, p \pm \Delta p) ) ]^T
    Subject to  Max[ g(x \pm \Delta x, p \pm \Delta p) ] \le 0
                h(x, p) = 0
                \bar{\sigma}(x) \le \bar{\sigma}_{min} + (k/P) ( \bar{\sigma}_{max} - \bar{\sigma}_{min} )
                x_{i,L} + \Delta x_i \le x_i \le x_{i,U} - \Delta x_i,  i = 1, 2, ..., n    (5-6)

where P is the total number of Pareto fronts or the number of robustness-level divisions, N the number of objective functions, n the number of design variables, μ_j the mean of the j-th objective function, and \bar{\sigma}(x) is the variation, which is determined by \bar{\sigma} = \sum_{j=1}^{N} \sigma_j / \mu_j (if the normalized minimum and maximum variations are used, as in Equation 5-4) or \sigma = \sum_{j=1}^{N} \sigma_j (when the total minimum and maximum variations are used, as in Equation 5-5). If this problem is solved using the Weighted Sum method with α solutions on each Pareto front, the number of single-objective optimizations to be conducted in this step is α·P.
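The overall flow of Steps 2 and 3 can be outlined in a few lines of MATLAB. The toy bi-objective problem below, its three-level evaluation of the means, and the stand-in values for the Step 2 results are all hypothetical placeholders meant only to show the structure of the loop.

    % Outline of Steps 2-3 on a toy one-variable, bi-objective problem.
    dx  = 0.05;                                   % one-sided input variation
    lvl = [-dx 0 dx];                             % three evaluation levels
    J1m = @(x) mean((x(1) + lvl).^2);             % mean of J1 under variation
    J2m = @(x) mean((x(1) - 2 + lvl).^2);         % mean of J2 under variation
    sigmaOf = @(x) std((x(1) + lvl).^2);          % variation measure

    P = 4;  weights = linspace(0, 1, 15);
    sigMin = 0;  sigMax = sigmaOf(1);             % stand-ins for Step 2 results
    fronts = cell(P, 1);
    for k = 1:P
        sigAllow = sigMin + (k/P)*(sigMax - sigMin);   % level of Eq. (5-6)
        pts = zeros(numel(weights), 2);
        for m = 1:numel(weights)
            w = weights(m);
            nonlcon = @(x) deal(sigmaOf(x) - sigAllow, []);
            x = fmincon(@(x) w*J1m(x) + (1 - w)*J2m(x), 0.5, [], [], ...
                        [], [], -1 + dx, 1 - dx, nonlcon, ...
                        optimset('Display', 'off'));
            pts(m, :) = [J1m(x), J2m(x)];
        end
        fronts{k} = pts;                          % one Pareto front per level
    end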

5.4 Numerical Examples

The proposed method is applied to four numerical examples. The first three examples are purely mathematically-defined optimization problems, but the last one is a real-world application that requires a finite element analysis for each function evaluation and that has a realistic dimensional uncertainty specification. The number of Pareto fronts, P, was 4 in all cases, and two popular multiobjective algorithms were used: the Adaptive Weighted Sum (AWS) method [6, 10] and the Normal Constraint (NC) method [8, 11]. For all problems, Design of Experiments was used with three levels in order to reduce the total number of function evaluations; further details are presented in Appendix A. Design of Experiments has been used successfully in a variety of engineering design problems [73]. Examples 1, 2 and 3 were solved using a 32-bit Windows-based operating system, with a Centrino dual-core 2.53 GHz processor and 3.5 GB of RAM, and the fourth example was solved using a Sun Blade server running Linux with an Opteron dual-core 3.0 GHz processor and 4 GB of RAM.

5.4.1 Example 1

The first example is a bi-objective optimization problem with two design variables and inequality constraints, which was used by Deb et al. [5] to demonstrate the efficiency of the Nondominated Sorting Genetic Algorithm-II. The modified problem statement, considering uncertainty, is defined as follows:

    For k = 1, ..., 4:
    Minimize    [ \mu( J_1(x \pm \Delta x) ), \mu( J_2(x \pm \Delta x) ) ]^T,  where
                J_1 = (x_1 - 2)^2 + (x_2 - 1)^2 + 2
                J_2 = 9 x_1 - (x_2 - 1)^2
    Subject to  Max[ (x_1 \pm \Delta x_1)^2 + (x_2 \pm \Delta x_2)^2 ] \le 225
                Max[ (x_1 \pm \Delta x_1) - 3 (x_2 \pm \Delta x_2) ] \le -10
                \sigma(x) \le \sigma_{min} + (k/4) ( \sigma_{max} - \sigma_{min} )
                -20 + \Delta x_i \le x_i \le 20 - \Delta x_i
                \Delta x_i = 1,  for i = 1, 2                         (5-7)

The total standard deviation was used (Equation 5-5) because some of the μ_2 values approached zero. In order to visually examine the nature of the problem, the entire feasible domain was determined using a full-factorial analysis with sufficiently discretized design variables (Figure 5-3); this procedure requires a very large number of function evaluations and would not be feasible for complex problems. The color represents the degree of variance of each solution: blue solutions have a low standard deviation, and red ones have a high standard deviation.

[Figure 5-3: the J_1-J_2 plane showing the feasible domain coloured from 0% to 100% variation.]
Figure 5-3: Feasible domain for Example 1 with ±1 as the design variable uncertainty. Blue denotes solutions with low variance, and red denotes solutions with high variance.

Figure 5-4 shows the Pareto front and feasible domain for each of the four robustness levels: 100%, 75%, 50%, and 25%. In Figure 5-5 (a), all four Pareto fronts are displayed together. The figure reveals that the dominant mode of the Pareto front change in this problem was Pareto front shortening, and the Pareto curve with the lowest level of allowed variation (i.e. 25%) was much shorter than that with no variation limitation (i.e. 100%). As a secondary mode of the Pareto front change, a partial Pareto front shift was observed in all three cases (75%, 50%, and 25%). Both the AWS and NC methods produced well-spread solutions that clearly define the actual Pareto fronts.

Figure 5-4: Pareto fronts and feasible domains with four different robustness levels (NC and AWS; 100%, 75%, 50%, and 25%) for Example 1. The set of light grey solutions represents the entire feasible domain, and L100% marks the optimal Pareto front.

Equation 5-1 was used to quantify the Pareto front shortening, as shown in Figure 5-5 (b). As discussed in Section 5.1, only the Pareto solutions overlapping with the optimal Pareto solutions (L100%, obtained when the additional constraint on performance variation is not considered) were used in determining the length of a reduced Pareto front. The figure indicates that the length of the Pareto curve decreased almost linearly from 100% to 50%; at the next level, 25%, the rate of length shortening decreased.

Figure 5-5: (a) Superimposed Pareto fronts and (b) Pareto front length ratio (R_X%) versus the acceptable variance for Example 1.
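Equation 5-1 is defined earlier in the chapter and is not repeated here. As a rough illustration of the kind of computation involved, the sketch below measures how much of the optimal (L100%) front's arc length a reduced front still covers; the overlap tolerance tol and the polyline treatment are assumptions, not the thesis definition.

```python
import numpy as np

def polyline_length(F):
    """Arc length of a Pareto front given as an (n, 2) array sorted along J1."""
    return np.linalg.norm(np.diff(F, axis=0), axis=1).sum()

def length_ratio(F_reduced, F_opt, tol=1e-2):
    """R_X%: length of the reduced-front portion overlapping the optimal front,
    divided by the optimal front's length (one plausible reading of Eq. 5-1)."""
    # keep only reduced-front points lying within tol of some optimal point
    d = np.linalg.norm(F_reduced[:, None, :] - F_opt[None, :, :], axis=2)
    overlap = F_reduced[d.min(axis=1) < tol]
    if len(overlap) < 2:
        return 0.0
    return polyline_length(overlap) / polyline_length(F_opt)
```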

5.4.2 Example 2

The second example was developed by the author: it contains two design variables, each with a variation of 0.5 on either side of its nominal value, and a single inequality constraint. The problem statement with uncertainty is as follows:

For k = 1, …, 4:

Minimize [μ1(J1(x ± Δx)), μ2(J2(x ± Δx))]^T, where
    J1 = (x1 − 1)·x2
    J2 = x1

Subject to
    Max[1 − ((x1 ± Δx1)/4)² − ((x2 ± Δx2)/3)²] ≤ 0
    σ(x) ≤ σ_min + (k/4)·(σ_max − σ_min)
    −10 + Δx_i ≤ x_i ≤ 10 − Δx_i,  Δx_i = 0.5, for i = 1, 2        (5-8)

The total standard deviation was used (Equation 5-4) because of the singularity problem. Figure 5-6 shows the entire feasible domain, with the color representing the solution robustness.

Figure 5-6: Feasible domain for Example 2 with ±0.5 as design variable uncertainty. Blue color represents solutions with low variance, and red denotes solutions with high variance.

As can be expected from Figure 5-6, the Pareto front moved back from the optimal Pareto curve as the robustness level was increased. Figure 5-7 shows the Pareto fronts and feasible domains with four different variation requirements: 100%, 75%, 50%, and 25%.

Clearly, the dominant mode of the Pareto front change was a Pareto front shift, as seen in Figure 5-8 (a). The Pareto front change also involved length shortening; in this case, the Pareto front length ratios (R_X%) of all three reduced Pareto fronts (75%, 50%, and 25%) were zero because no Pareto solutions overlapped with the solutions of the optimal Pareto front.

Figure 5-7: Pareto fronts and feasible domains with four different robustness levels (NC and AWS; 100%, 75%, 50%, and 25%) for Example 2. The set of light grey solutions represents the entire feasible domain, and L100% marks the optimal Pareto front.

The Pareto front shift distance, S_X%, is shown in Figure 5-8 (b). The change is almost linear, and the NC and AWS methods found nearly identical results.
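In the same hedged spirit as the length-ratio sketch, a shift distance between a reduced front and the optimal front can be approximated as the mean nearest-neighbour distance between the two point sets; the exact S_X% definition given earlier in the chapter should be consulted for the authoritative form.

```python
import numpy as np

def shift_distance(F_reduced, F_opt):
    """Approximate S_X%: average distance from each reduced-front point to the
    nearest optimal-front point (a plausible reading, not the thesis formula)."""
    d = np.linalg.norm(F_reduced[:, None, :] - F_opt[None, :, :], axis=2)
    return d.min(axis=1).mean()
```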

Figure 5-8: (a) Superimposed Pareto fronts and (b) Pareto front shift distance (S_X%) versus the acceptable variance for Example 2.

5.4.3 Example 3

The third example is a three-bar truss optimization problem, which was presented by Koski [77]. As seen in Figure 5-9, two concentrated loads were applied at the tip of the structure, and the cross-sectional areas of the three members were used as design variables. The maximum allowable stress in the structure was 200 MPa. The total volume and the tip deflection were to be minimized.

Figure 5-9: Three-bar truss problem [77] (cross-sectional areas A1, A2, A3; F = 20 kN; L = 1 m; E = 200 GPa; tip deflections δ1 and δ2). The cross-sectional area of each truss member is optimized to minimize tip deflection and total volume.
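To make this class of example reproducible in spirit, the sketch below solves a small planar truss with the direct stiffness method and returns the nodal displacements and member stresses from which the volume and tip-deflection objectives can be formed. The node layout, load placement, and areas are placeholders: the actual geometry is that of Koski [77] and is not fully recoverable from the figure here.

```python
import numpy as np

def solve_truss(nodes, members, areas, E, load, fixed_dofs):
    """Direct stiffness solution of a 2D truss.
    nodes: (n, 2) coordinates; members: list of (i, j) node pairs;
    load: (2n,) force vector; fixed_dofs: constrained dof indices."""
    n_dof = 2 * len(nodes)
    K = np.zeros((n_dof, n_dof))
    geom = []
    for (i, j), A in zip(members, areas):
        d = nodes[j] - nodes[i]
        L = np.linalg.norm(d)
        c, s = d / L
        B = np.array([-c, -s, c, s])
        dofs = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
        K[np.ix_(dofs, dofs)] += (E * A / L) * np.outer(B, B)
        geom.append((B, L, dofs))
    free = [k for k in range(n_dof) if k not in fixed_dofs]
    u = np.zeros(n_dof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], load[free])
    stresses = np.array([E / L * B @ u[dofs] for B, L, dofs in geom])
    return u, stresses

# Hypothetical three-bar layout: three supports on the left, one loaded tip node.
nodes = np.array([[0.0, 1.0], [0.0, 0.0], [0.0, -1.0], [1.0, 0.0]])
members = [(0, 3), (1, 3), (2, 3)]
areas = [2e-4, 2e-4, 2e-4]                            # m^2
load = np.zeros(8); load[6], load[7] = 20e3, -20e3    # 20 kN loads at the tip
u, s = solve_truss(nodes, members, areas, E=200e9, load=load,
                   fixed_dofs=[0, 1, 2, 3, 4, 5])
volume = sum(A * np.linalg.norm(nodes[j] - nodes[i])
             for (i, j), A in zip(members, areas))
```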

In this example, different variations were implemented on the two sides of the nominal value of each design variable: 0.1 cm² in the upper-side direction and 0.05 cm² in the lower-side direction. The mathematical optimization statement with uncertainty is as follows:

For k = 1, …, 4:

Minimize [μ1(J1(A ± ΔA*)), μ2(J2(A ± ΔA*))]^T, where
    J1 = Volume(A)
    J2 = Δ(A) = 0.25·δ1 + 0.75·δ2

Subject to
    Max[Stress_i(A ± ΔA*)] ≤ 200 MPa
    σ(x) ≤ σ_min + (k/4)·(σ_max − σ_min)
    0.1 cm² + ΔA_i,L ≤ A_i ≤ 2 cm² − ΔA_i,U
    ΔA_i,L = 0.05 cm², ΔA_i,U = 0.1 cm², for i = 1, 2, 3        (5-9)

where Stress_i is the stress in the i-th truss member, and the perturbation ΔA* satisfies +ΔA* ≤ 0.1 cm² and −ΔA* ≥ −0.05 cm². Since the singularity problem does not arise in this example, the normalized standard deviation (Equation 5-5) was used. Figure 5-10 shows the complete feasible domain with the solution robustness represented in different colors.

Figure 5-10: Feasible domain for Example 3 (tip deflection Δ [m] versus volume [10⁻⁶ m³]) with +0.1 cm² and −0.05 cm² as design variable uncertainty. Blue color represents solutions with low variance, and red denotes solutions with high variance.
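Because the tolerance band here is asymmetric (+0.1 cm², −0.05 cm²), the Max[·] terms of Equation 5-9 are evaluated over lopsided corner combinations rather than a symmetric ±Δ set. A minimal sketch is shown below, with the stress evaluator left as a user-supplied function (for instance, built from the truss solver sketched earlier).

```python
import numpy as np
from itertools import product

def worst_case(f, A, dA_low=0.05e-4, dA_up=0.10e-4):
    """Maximum of f over the three-level corner set of the asymmetric band
    [A_i - dA_low, A_i + dA_up] (areas in m^2; values taken from Eq. 5-9)."""
    corners = product(*[(a - dA_low, a, a + dA_up) for a in A])
    return max(f(np.array(c)) for c in corners)

# e.g. worst-case peak member stress at a nominal design A, where stress_of
# is a user-supplied function (such as max |stress| from the truss solver):
# worst_case(stress_of, A=np.array([2e-4, 2e-4, 2e-4]))
```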

Similar to the previous examples, Pareto fronts and feasible domains with four levels of maximum allowable variation, or performance robustness, were obtained (Figure 5-11). The primary mode of the Pareto front change was Pareto front length shortening (Figure 5-12 (a)). Unlike in Example 1, the secondary mode (a Pareto front shift) was very weak in this example. To quantitatively determine the length shortening, Equation 5-1 was used, and the results are shown in Figure 5-12 (b).

Figure 5-11: Pareto fronts and feasible domains with four different robustness levels (NC and AWS; 100%, 75%, 50%, and 25%) for Example 3. The set of light grey solutions represents the entire feasible domain, and L100% marks the optimal Pareto front.

Figure 5-12: (a) Superimposed Pareto fronts and (b) Pareto front length ratio (R_X%) versus the acceptable variance for Example 3.

It is worthwhile to discuss the behaviour of the AWS and NC methods in this numerical example. Both methods performed well and produced well-distributed solutions in all Pareto front representations.

The NC method, however, obtained a solution in the dominated region of the Pareto set, and in the 75% and 50% variation cases it did not determine the tip solution, which is located at the left end of the lower segment of the Pareto front. The AWS method required slightly more function evaluations than the NC method.

5.4.4 Example 4

The last example represents a real-life multiobjective engineering design problem: a connecting rod, which is used to transfer power from the engine pistons to the crankshaft. The shape of the connecting rod was taken from the literature [80], and Figure 5-13 shows the four geometric parameters that were used as design variables, as well as the applied loads and boundary conditions: W1 and W2 are the part widths at two locations, t is the overall thickness, and t_w is the web thickness. The part widths (W1, W2) are defined at two different locations to represent a tapered shape. The maximum allowable stress in the domain was 350 MPa. The material was steel with a Young's modulus of 200 GPa and a Poisson's ratio of 0.3. The objective functions for minimization were the volume and the maximum deflection. ANSYS was used for the finite element analysis.
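Since each function evaluation in this example is a finite element run, the two objectives are typically wrapped as a callable that writes the current design, launches the solver in batch mode, and parses the results, so that the optimization loop of Equation 5-6 can treat the FEA like any other objective. The sketch below is a generic, hypothetical wrapper: the solver name, flags, and file formats are placeholders, and no specific ANSYS commands are implied.

```python
import subprocess
import numpy as np

def fea_objectives(x, run_dir="run"):
    """Hypothetical batch-FEA wrapper returning (volume, max_deflection) for a
    design x = [W1, W2, tw, t]. Solver name, flags, and file formats are
    placeholders, not actual ANSYS commands."""
    np.savetxt(f"{run_dir}/design.txt", x)            # design read by the model script
    subprocess.run(["fea_solver", "-b", "-i", f"{run_dir}/model_script.txt"],
                   check=True)                        # batch run, writes results.txt
    volume, max_defl = np.loadtxt(f"{run_dir}/results.txt")
    return volume, max_defl
```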

Figure 5-13: The finite element analysis model and the schematic of a cross section (showing W1, W2, t, and t_w) for Example 4. The small bearing end is fixed in all degrees of freedom, while axial and transverse loads are applied at the larger bearing.

A design variable variation of Δx = [ΔW1, ΔW2, Δt_w, Δt]^T = [0.05", 0.05", 0.025", 0.04"] was selected based on casting standards. The first, second, and fourth parameters (ΔW1, ΔW2, and Δt) represent approximate CT8 tolerances (ISO 8062), which do not require post-casting machining. A smaller variation was set for the third design variable (Δt_w), based on the assumption that the web thickness is machined down to the required thickness with a small tolerance and a smooth surface in order to prevent crack initiation due to surface imperfections. The multiobjective optimization with uncertainty is stated as follows:
