Multicriterial Optimization Using Genetic Algorithms

[Figure: best fitness and mean fitness plotted against generations (0–600) for a GA run]
Contents: Optimization, Local and Global Optimization; Multicriterial Optimization; Constraints; Methods of Solution; Examples; Task of the Decision Maker
Global optimization. Global optimization is the process of finding the global extreme value (minimum or maximum) within some search space S. The single-objective global optimization problem can be formally defined as follows:
Global optimization. Then x* is the global solution (or solutions), f is the objective function, and the set Ω is the feasible region (Ω ⊆ S). The problem of finding the minimum solution(s) is called the global optimization problem. Maximization can be expressed through minimization with the following formula:

max{ f(x) } = − min{ −f(x) }
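The identity above can be exercised in a short sketch. The helper names `minimize_scan` and `maximize_scan` are illustrative, not from the slides; a grid scan stands in for a real optimizer:

```python
def minimize_scan(f, xs):
    """Brute-force minimizer over a finite grid of candidate points."""
    best = min(xs, key=f)
    return best, f(best)

def maximize_scan(f, xs):
    """Maximize f by minimizing -f, using max{f(x)} = -min{-f(x)}."""
    x_best, neg_val = minimize_scan(lambda x: -f(x), xs)
    return x_best, -neg_val

# The peak of -(x - 2)^2 + 3 on the grid lies at x = 2 with value 3.
grid = [i / 10 for i in range(-50, 51)]
x_max, f_max = maximize_scan(lambda x: -(x - 2) ** 2 + 3, grid)
```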
Optimization: local optima and the global optimum

[Figure: a multimodal function with several local optima and a single global optimum]
Multicriterial optimization. Although a single-objective optimization problem may have a unique optimal solution (the global optimum), a multiobjective optimization problem (MOP) typically presents a possibly uncountable set of solutions, which, when evaluated, produce vectors whose components represent trade-offs in objective space. A decision maker (DM) then implicitly chooses an acceptable solution (or solutions) by selecting one or more of these vectors.
Multicriterial optimization

[Figure: objective space of F = [f1, f2]; the best (non-dominated) solutions are marked among the normal solutions]
Multicriterial optimization. The multiobjective optimization problem, also called the multicriteria or vector optimization problem, can be stated (in words) as the problem of finding a vector of decision variables which satisfies the constraints and optimizes a vector function whose elements represent the objective functions. These functions form a mathematical description of performance criteria which are usually in conflict with each other. Hence the term "optimize" here means finding a solution which gives values of all the objective functions acceptable to the decision maker.
Decision variables. The decision variables are the numerical quantities for which values are to be chosen in an optimization problem.
Constraints. In most optimization problems there are restrictions imposed by the particular characteristics of the environment or the resources available (e.g. physical limitations, time restrictions, etc.). These restrictions must be satisfied for a solution to be considered acceptable. All such restrictions are called constraints, and they describe dependences among the decision variables and the constants (or parameters) involved in the problem.
Constraints. These constraints are expressed as mathematical inequalities and equalities:

g_i(x) ≤ 0, i = 1, 2, …, m
h_j(x) = 0, j = 1, 2, …, p

where p < n and n is the size of the decision vector.
Constraints. The number p of equality constraints must be less than n, the number of decision variables, because if p ≥ n the problem is said to be overconstrained: there are no degrees of freedom left for optimizing (more equations than unknowns). The number of degrees of freedom is given by (n − p). Constraints can be explicit (given in algebraic form) or implicit, in which case an algorithm to compute g_i(x) for any given vector x must be known.
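A feasibility test following the two constraint forms above can be sketched as follows; `is_feasible` and the example constraints are illustrative assumptions, not part of the slides:

```python
def is_feasible(x, inequalities, equalities, tol=1e-9):
    """True when every g_i(x) <= 0 and every h_j(x) = 0 (within tol)."""
    return (all(g(x) <= tol for g in inequalities)
            and all(abs(h(x)) <= tol for h in equalities))

# n = 2 decision variables, p = 1 equality constraint:
# n - p = 1 degree of freedom remains for optimizing.
ineq = [lambda x: x[0] + x[1] - 1.0]   # x1 + x2 <= 1
eq = [lambda x: x[0] - x[1]]           # x1 = x2
```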
Objective Functions. In order to know how good a certain solution is, it is necessary to have some criteria to evaluate it (for example profit, the number of employees, etc.). These criteria are expressed as computable functions of the decision variables, called objective functions. In real-world problems some of them are in conflict with others, and some have to be minimized while others are maximized. Objective functions may be commensurable (measured in the same units) or non-commensurable (measured in different units).
Types of Multicriterial Optimization Problem. In multiobjective optimization problems there are three possible situations: minimize all objective functions; maximize all objective functions; minimize some and maximize others.
Objective Functions. The multiple objectives being optimized almost always conflict, placing a partial, rather than total, ordering on the search space. In fact, finding the global optimum of a general MOP is NP-complete (Bäck 1996).
Attributes, Criteria, Objectives and Goals. Attributes are often thought of as differentiating aspects, properties or characteristics of alternatives or consequences. Criteria generally denote evaluative measures, dimensions or scales against which alternatives may be gauged in a value or worth sense. Objectives are sometimes viewed in the same way, but also denote specific desired levels of attainment or vague ideals. Goals usually indicate either of the latter notions. A distinction commonly made in Operations Research is to use the term goal to designate potentially attainable levels, and objective to designate unattainable ideals.
Attributes, Criteria, Objectives and Goals. The convention adopted in this presentation is the one assumed by several researchers (Horn 1997, Fishburn 1978) of using the terms objective, criterion and attribute interchangeably to represent an MOP's goals or objectives (i.e. distinct mathematical functions) to be achieved. The terms objective space or objective function space are used to denote the coordinate space within which vectors resulting from evaluating an MOP are plotted.
Objective Functions
Euclidean space. The set of all n-tuples of real numbers, denoted by R^n, is called Euclidean n-space. Two Euclidean spaces are considered: the n-dimensional space of decision variables, in which each coordinate axis corresponds to a component of the vector x; and the k-dimensional space of objective functions, in which each coordinate axis corresponds to a component of the vector f(x).
Euclidean space. Every point in the first space (decision variables) represents a solution and gives a certain point in the second space (objective functions), which determines the quality of the solution in terms of the values of the objective functions.
Euclidean space
General Multicriterial Optimization Problem. Find a vector x* = [x1*, x2*, …, xn*] which optimizes the vector function f(x) = [f1(x), f2(x), …, fk(x)], subject to the m inequality constraints g_i(x) ≤ 0, i = 1, 2, …, m, and the p equality constraints h_j(x) = 0, j = 1, 2, …, p.
Conversion of a Multicriterial Optimization Problem. For simplicity, all functions are normally converted to a maximization or minimization form. For example, the following identity may be used to convert all functions which are to be maximized into a form which allows their minimization:

min{ f(x) } = − max{ −f(x) }
Conversion of a Multicriterial Optimization Problem. Similarly, inequality constraints of the form g_i(x) ≥ 0, i = 1, 2, …, m can be converted to the form (1.8) by multiplying by −1 and changing the sign of the inequality. Thus the previous expression is equivalent to −g_i(x) ≤ 0, i = 1, 2, …, m.
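The sign flip above is mechanical and can be sketched in a few lines; the helper name `to_leq_form` is an illustrative assumption:

```python
def to_leq_form(g):
    """Flip a constraint written as g(x) >= 0 into the form -g(x) <= 0."""
    return lambda x: -g(x)

g = lambda x: x - 3.0       # original form: x - 3 >= 0, i.e. x >= 3
g_std = to_leq_form(g)      # standard form: -(x - 3) <= 0
```

The feasible set is unchanged: a point satisfies the new constraint exactly when it satisfied the old one.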
Multicriterial Optimization Problem Ideal Solution
Multicriterial Optimization Problem Ideal Solution

[Figure 1.1: a single solution vector x* that simultaneously minimizes both f1(x) and f2(x)]
Multicriterial Optimization Problem Ideal Vector
Multicriterial Optimization Problem Convexity
Multicriterial Optimization Problem Convex Sets
Multicriterial Optimization Problem Non-convex Sets
Multicriterial Optimization Problem Pareto Optimality
Multicriterial Optimization Problem Pareto Optimality. In words, this definition says that x* is Pareto optimal if there exists no feasible vector which would decrease some criterion without causing a simultaneous increase in at least one other criterion. The phrase Pareto optimal is taken to mean with respect to the entire decision variable space unless otherwise specified.
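The definition in words translates directly into a dominance test; the function name `dominates` is illustrative, and all objectives are assumed to be minimized:

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b when a is no worse
    in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))
```

A point is Pareto optimal exactly when no feasible point dominates it.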
Multicriterial Optimization Problem Pareto Optimality

[Figure: objective space of F = [f1, f2]; the Pareto optimal set is marked among the normal solutions]
Multicriterial Optimization Problem Pareto Optimality
Multicriterial Optimization Problem Pareto Front. The minima in the Pareto sense lie on the boundary of the design region, or on the locus of the tangent points of the objective functions. In Figure 1.6 a bold curve marks this boundary for a bi-objective problem. The set of points defined by this bold curve is called the Pareto front.
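For a finite sample of evaluated solutions, the (approximate) Pareto front is simply the non-dominated subset. A minimal sketch, with illustrative names and minimization assumed for both objectives:

```python
def pareto_front(points):
    """Keep only the non-dominated objective vectors (all minimized)."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 4), (2, 2), (4, 1), (3, 3), (2, 5)]
front = pareto_front(pts)
```

The quadratic all-pairs scan is fine for illustration; real MOGA implementations use faster non-dominated sorting.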
Multicriterial Optimization Problem Pareto Front

[Figure: the image F of the feasible region in the (f1, f2) objective space, with the Pareto front on its boundary]
Multicriterial Optimization Problem Global Optimization. Defining an MOP global optimum is not a trivial task, as the best compromise solution really depends on the specific preferences (or biases) of the (human) decision maker. Solutions may also have temporal dependences (e.g. acceptable resource expenditures may vary from month to month). Thus there is no universally accepted definition of the MOP global optimization problem (though more and more individual solutions are being implemented).
General Optimization Algorithms Overview. General search and optimization techniques are classified into three categories: enumerative, deterministic and stochastic (random) (Figure 1.11 on the next page). As many real-world problems are computationally intensive, some means of limiting the search space must be implemented to find acceptable solutions in acceptable time (Michalewicz and Fogel 2000). Deterministic algorithms attempt this by incorporating problem domain knowledge; many graph/tree search algorithms are known and applied.
General Optimization Algorithms Overview
General Optimization Algorithms Genetic Algorithm
General Optimization Algorithms Genetic Algorithm

[Figure: floating-point chromosome coding; the individuals of a generation have objective values mapped to fitness values, which drive selection]
General Optimization Algorithms Genetic Algorithm

[Figure: crossover of two real-coded parent chromosomes producing children, followed by mutation]
General Optimization Algorithms Genetic Algorithm

[Figure: parents, crossover and mutation of the children shown in the two-dimensional decision variable space (Variable_1, Variable_2)]
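For the floating-point coding used here, crossover and mutation can be sketched as follows. Arithmetic (blend) crossover and Gaussian mutation are one common choice for real-coded GAs; the slides do not name the exact operators, so treat these as representative, with illustrative helper names:

```python
import random

def arithmetic_crossover(p1, p2, rng):
    """Blend two real-coded parents gene by gene with a random weight a,
    producing two children on the segment between the parents."""
    a = rng.random()
    c1 = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
    c2 = [(1 - a) * x + a * y for x, y in zip(p1, p2)]
    return c1, c2

def gaussian_mutation(ind, sigma, rate, rng):
    """Perturb each gene with probability `rate` by Gaussian noise."""
    return [x + rng.gauss(0.0, sigma) if rng.random() < rate else x
            for x in ind]

rng = random.Random(42)
c1, c2 = arithmetic_crossover([1.0, 2.0], [3.0, 4.0], rng)
child = gaussian_mutation(c1, sigma=0.1, rate=0.5, rng=rng)
```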
MOGA Optimization Algorithms Genetic Algorithm

[Figure: objective function space with solutions grouped into ranks 1–3; the rank-1 set forms a local Pareto front approaching the true Pareto front]
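The ranking shown in the figure can be computed by peeling off successive non-dominated fronts (Goldberg-style Pareto ranking); the helper names are illustrative, and minimization is assumed:

```python
def pareto_rank(points):
    """Rank 1 for the non-dominated set, rank 2 for the set that becomes
    non-dominated once rank-1 points are removed, and so on."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    remaining = list(range(len(points)))
    ranks = [0] * len(points)
    rank = 1
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        for i in front:
            ranks[i] = rank
        remaining = [i for i in remaining if i not in front]
        rank += 1
    return ranks

ranks = pareto_rank([(1, 4), (2, 2), (3, 3), (4, 4)])
```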
MOGA Optimization Algorithms Genetic Algorithm

[Figure: nonlinear assignment of (dummy) fitness as a decreasing function of rank, from rank 1 down to Rank_MAX]
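One way to realize the nonlinear rank-to-fitness mapping is a geometric decay; the exact curve in the slide is not specified, so this is a sketch under that assumption:

```python
def rank_to_fitness(ranks, base=2.0):
    """Nonlinear (dummy) fitness from Pareto rank: rank 1 receives the
    largest fitness, decaying geometrically toward the maximum rank."""
    r_max = max(ranks)
    return [base ** (r_max - r) for r in ranks]

fitness = rank_to_fitness([1, 1, 2, 3])
```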
MOGA Optimization Algorithms Genetic Algorithm

[Figure: MOGA operation (theoretically): the population converges toward, and spreads evenly along, the Pareto front in the (Objective_1, Objective_2) space]
MOGA Optimization Algorithms Genetic Algorithm

[Figure: genetic drift (real operation of MOGA): the population clusters around a few points of the Pareto front instead of spreading along it]
MOGA Optimization Algorithms Genetic Algorithm Genetic Drift Break with Fitness Correction

[Figure: step 1, normalization of the objectives to the [0,1]×[0,1] square; step 2, fitness correction in the normalized objective space]
MOGA Optimization Algorithms Genetic Algorithm

[Figure: triangular sharing function falling from 1 to 0 as the distance grows to σ_share; an individual's niche count sums the sharing values over the population, giving the fitness correction factor 1/W_i]
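The niche count behind the fitness correction can be sketched with the triangular sharing function sh(d) = max(0, 1 − d/σ_share); the function and variable names are illustrative:

```python
def niche_count(i, points, sigma_share):
    """Niche count of individual i: sum of the triangular sharing function
    sh(d) = max(0, 1 - d / sigma_share) over the whole population."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(max(0.0, 1.0 - dist(points[i], p) / sigma_share)
               for p in points)

pts = [(0.0, 0.0), (0.0, 0.0), (1.0, 0.0)]
m0 = niche_count(0, pts, sigma_share=0.5)   # crowded point
m2 = niche_count(2, pts, sigma_share=0.5)   # isolated point
correction = 1.0 / m0   # crowded individuals get their fitness scaled down
```

Duplicated points raise the niche count, so their corrected fitness drops, which is what counteracts genetic drift.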
MOGA Optimization Algorithms Genetic Algorithm Calculation of Fitness Correction Factors
MOGA Optimization Algorithms Genetic Algorithm Genetic Drift Break with Fitness Correction

[Figure: fitness values plotted over rank values, showing how the fitness correction differentiates individuals within each rank]
MOGA Optimization Examples. Examples:

MOP-1: f1(x) = x^2, f2(x) = (x − 2)^2, where −10^5 ≤ x ≤ 10^5

MOP-2: f1(x, y) = x, f2(x, y) = (1 + 10y) · [1 − (x / (1 + 10y))^2 − (x / (1 + 10y)) · sin(8πx)], where 0 ≤ x, y ≤ 1

[Figures: MOP-1 normal, MOP-1 with Drift Break; MOP-2 normal, MOP-2 with Drift Break]
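MOP-1 (Schaffer's classic two-objective test problem) is small enough to evaluate directly; the function name is illustrative:

```python
def mop1(x):
    """MOP-1: f1 = x^2, f2 = (x - 2)^2, both minimized.
    The Pareto optimal set is the interval 0 <= x <= 2."""
    return (x ** 2, (x - 2) ** 2)
```

Note the trade-off: moving x from 0 toward 2 improves f2 while worsening f1.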
MOGA Optimization Examples. MOP-3:

f1(x, y) = 1 + (A1 − B1)^2 + (A2 − B2)^2
f2(x, y) = 1 + (x + 3)^2 + (y + 1)^2

where
A1 = 0.5 sin(1) + 2 cos(1) + sin(2) − 1.5 cos(2)
A2 = 1.5 sin(1) − cos(1) + 2 sin(2) − 0.5 cos(2)
B1 = 0.5 sin(x) + 2 cos(x) + sin(y) − 1.5 cos(y)
B2 = 1.5 sin(x) − cos(x) + 2 sin(y) − 0.5 cos(y)

with conditions −π ≤ x, y ≤ π.

[Figures: MOP-3 normal, MOP-3 with Drift Break]
MOGA Optimization Examples. MOP-4:

f1(x) = 1 − exp( −Σ_{i=1..n} (x_i − 1/√n)^2 )
f2(x) = 1 − exp( −Σ_{i=1..n} (x_i + 1/√n)^2 )

where i = 1, 2 (n = 2) and −4 ≤ x_i ≤ 4.

[Figures: MOP-4 normal, MOP-4 with Drift Break]
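MOP-4 (a Fonseca–Fleming style problem) can be evaluated directly from the formulas above; the function name is illustrative, and n = 2 matches the slide:

```python
import math

def mop4(x, n=2):
    """MOP-4: two objectives in n decision variables, both minimized."""
    s = 1.0 / math.sqrt(n)
    f1 = 1.0 - math.exp(-sum((xi - s) ** 2 for xi in x))
    f2 = 1.0 - math.exp(-sum((xi + s) ** 2 for xi in x))
    return f1, f2

s = 1.0 / math.sqrt(2)
f1, f2 = mop4([s, s])   # a point where f1 reaches its minimum of 0
```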
Decision Maker. Mathematically, every Pareto optimal point is an equally acceptable solution of the multiobjective optimization problem. However, it is generally desirable to obtain one point as the solution. Selecting one of the set of Pareto optimal solutions calls for information that is not contained in the objective functions. That is why, compared to single-objective optimization, a new element is added in multiobjective optimization.
Decision Maker. We need a decision maker to make the selection. The decision maker is a person (or a group of persons) who is supposed to have better insight into the problem and who can express preference relations between different solutions. Usually the decision maker is responsible for the final solution.
Decision Maker. Solving a multiobjective optimization problem calls for the co-operation of the decision maker and an analyst. By an analyst we mean a person or a computer program responsible for the mathematical side of the solution process. The analyst generates information for the decision maker to consider, and the solution is selected according to the preferences of the decision maker.
Decision Maker. It is assumed in the following that we have a single decision maker or a unanimous group of decision makers. Generally, group decision making is a world of its own: it calls for negotiations and specific methods when searching for compromises between different interest groups.
Thank you for your attention. Questions?