Optimization Methods: Advanced Topics in Optimization - Multi-objective Optimization
Module 8 Lecture Notes 2
Multi-objective Optimization

Introduction

In a real-world problem, more often than not, it is very unlikely that we will meet the situation of a single objective with multiple constraints. Thus the rigid form of the General Problem (GP) is, many a time, far removed from practical design problems. In many water resources optimization problems, maximizing aggregated net benefits is a common objective. At the same time, water quality, regional development, resource utilization, and various social criteria are other objectives to be maximized. There may also be conflicting objectives alongside the main objective, such as irrigation, hydropower, and recreation. Generally, multiple objectives or parameters have to be met before any acceptable solution can be obtained. It should be noticed that the word "acceptable" implies that there is normally no single solution to problems of the above type. In fact, methods of multi-criteria or multi-objective analysis are not designed to identify the best solution, but only to provide information on the trade-offs between given sets of quantitative performance criteria. In the present discussion on multi-objective optimization, we will first introduce the mathematical definition and then discuss two broad classes of solution methods, typically known as (i) the Utility Function Method (weighting function method) and (ii) the Bounded Objective Function Method (reduced feasible region method or constraint method).

Multi-objective Problem

A multi-objective optimization problem with inequality (or equality) constraints may be formulated as

Find X = (x_1, x_2, ..., x_n)^T    (1)

which minimizes f_1(X), f_2(X), ..., f_k(X)    (2)

subject to g_j(X) ≤ 0,  j = 1, 2, ..., m    (3)

Here k denotes the number of objective functions to be minimized and m is the number of constraints. It is worthwhile to mention that the objective functions and constraints need not be linear; when they are, the problem is called Multi-objective Linear Programming (MOLP). For problems of the type mentioned above the very notion of optimization changes, and we try to find good trade-offs rather than a single solution as in the GP. The most commonly adopted notion of the optimum, proposed by Pareto, is stated below. A decision variable vector X is called Pareto optimal (efficient) if there does not exist another Y such that f_i(Y) ≤ f_i(X) for i = 1, 2, ..., k with f_j(Y) < f_j(X) for at least one j. In other words, a solution vector X is called optimal if there is no other vector Y that reduces some objective function without causing a simultaneous increase in at least one other objective function.
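To make the Pareto definition concrete, the following is a minimal Python sketch (not part of the original notes; the function names and sample objective vectors are illustrative) that tests dominance and filters the non-dominated points from a finite set of candidates, assuming all objectives are to be minimized as in equation (2).

```python
# Illustrative sketch of the Pareto-optimality test; all objectives minimized.

def dominates(fy, fx):
    """True if objective vector fy dominates fx: fy is no worse in every
    objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(fy, fx)) and any(a < b for a, b in zip(fy, fx))

def pareto_front(points):
    """Return the non-dominated (efficient) objective vectors from a list."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Example: three candidate solutions evaluated on two objectives
candidates = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0)]
print(pareto_front(candidates))   # (3.0, 3.0) is dominated by (2.0, 2.0)
```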

Fig. 1: Efficient surface for three objectives i, j, k, with the direction of increase of each objective indicated.

As shown in the above figure there are three objectives i, j, and k; the direction of increase of each is also indicated. The surface (which is formed from the constraints) is efficient because no objective can be reduced without a simultaneous increase in at least one of the other objectives.

Utility Function Method (Weighting Function Method)

In this method a utility function is defined for each objective according to the relative importance of f_i. A simple utility function for the i-th objective may be defined as α_i f_i(X), where α_i is a scalar representing the weight assigned to that objective. The total utility U may then be defined as the weighted sum of the objective functions:

U = Σ_{i=1}^{k} α_i f_i(X),   α_i > 0,  i = 1, 2, ..., k    (4)

The solution vector X may be found by maximizing the total utility defined above subject to the constraints (3). Without loss of generality, it is customary to assume that Σ_{i=1}^{k} α_i = 1, although this is not essential. The α_i values indicate the relative utility of each of the objectives.
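As a hedged illustration of the weighting method, the sketch below scalarizes two illustrative quadratic objectives with weights α_1 and α_2 and solves the scalarized problem numerically; since the objectives in equation (2) are posed as minimizations, the weighted sum U is minimized here. The objective functions, weights, and use of scipy.optimize.minimize are assumptions for illustration only.

```python
# Weighting (weighted-sum) method sketch with illustrative objectives; assumes SciPy.
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2       # first objective (minimize)
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2       # second objective (minimize)

def weighted_solution(a1, a2, x0=(0.0, 0.0)):
    """Minimize the total utility U = a1*f1 + a2*f2 for fixed weights."""
    res = minimize(lambda x: a1 * f1(x) + a2 * f2(x), x0)
    return res.x, f1(res.x), f2(res.x)

# Each weight pair yields one efficient point; sweeping the weights traces the front.
for a1 in (0.1, 0.5, 0.9):
    x, v1, v2 = weighted_solution(a1, 1.0 - a1)
    print(f"a1={a1:.1f}: x={np.round(x, 3)}, f1={v1:.3f}, f2={v2:.3f}")
```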

The following figure represents the decision space for a given set of constraints and utility functions. Here X = (x_1, x_2)^T and the two objectives are f_1(X) and f_2(X), with upper-bound constraints* of type (3) as in Figure 2.

Fig. 2: Decision space bounded by constraints g_1(X) to g_6(X), with corner points O, A, B, C, D, E.

*Constraints g_1(X) to g_6(X) represent x_1, x_2 ≥ 0 and the other bounds of the problem.

For Linear Programming (LP), the Pareto front is obtained by plotting the values of the objective functions at the common points (points of intersection) of the constraints and joining them by straight lines in objective space. It should be noted that not all points on the constraint surface need be efficient in the Pareto sense, for example point A in the following figure.
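The intersection-point procedure described above can be sketched in code. The constraints and objectives below are illustrative placeholders (not the constraints g_1 to g_6 of Figure 2): every pair of constraint lines is intersected, infeasible points are discarded, and both objective values are reported at each remaining corner point.

```python
# Corner-point enumeration sketch for a 2-variable LP; data is illustrative.
import itertools
import numpy as np

# Constraints written as a_row . x <= b (the last two are x1 >= 0, x2 >= 0)
A = np.array([[1.0, 2.0],
              [3.0, 1.0],
              [-1.0, 0.0],
              [0.0, -1.0]])
b = np.array([8.0, 9.0, 0.0, 0.0])

f1 = lambda x: 2 * x[0] + x[1]     # illustrative objective 1
f2 = lambda x: x[0] + 3 * x[1]     # illustrative objective 2

vertices = []
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                        # parallel constraint lines, no intersection
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9):       # keep only feasible intersection points
        vertices.append(x)

for x in vertices:
    print(np.round(x, 3), "->", (f1(x), f2(x)))
```

The efficient corner points among these could then be filtered with a dominance test like the one sketched earlier (taking care of whether each objective is maximized or minimized), and joining them in objective space gives the piecewise-linear front described above.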

Fig. 3: Objective space (f_1 versus f_2) showing the images of the corner points A, B, C, D, E.

By looking at Figure 3 one may qualitatively infer that it follows the Pareto optimality definition. Optimizing the utility function then means moving along the efficient front and looking for the maximum value of the utility function U defined by equation (4). One major limitation is that this method cannot generate the complete set of efficient solutions unless the efficiency frontier is strictly convex. If a part of it is concave, only the end points of that part can be obtained by the weighting technique.

Bounded Objective Function Method

In this method we try to trap the optimal solution of the objective functions in a bounded or reduced feasible region. In formulating the problem, one objective function is maximized while all the other objectives are converted into constraints with lower bounds, along with the other constraints of the problem. Mathematically the problem may be formulated as

Maximize f_i(X)

subject to g_j(X) ≤ 0,  j = 1, 2, ..., m    (5)

f_k(X) ≥ e_k,  k ≠ i

where e_k represents the lower bound of the k-th objective. In this approach the feasible region S, represented by g_j(X) ≤ 0, j = 1, 2, ..., m, is further reduced to S' by the (k-1) constraints f_k(X) ≥ e_k, k ≠ i. For example, let there be three objectives which are to be maximized over the constraint region S. The problem may be formulated as:

maximize {objective-1}
maximize {objective-2}
maximize {objective-3}
subject to X = (x_1, x_2)^T ∈ S

In the above problem S denotes the region given by g_j(X) ≤ 0, j = 1, 2, ..., m. In the bounded objective function method, the same problem may be formulated as

maximize {objective-1}
subject to {objective-2} ≥ e_1
           {objective-3} ≥ e_2
           X ∈ S

As may be seen, one of the objectives ({objective-1}) is now the only objective, and all the other objectives are included as constraints. Lower bounds are specified for the other objectives; these are the minimum values that must at least be attained. Subject to these additional constraints, the objective is maximized. Figure 4 illustrates the scheme.
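A minimal sketch of this bounded objective (constraint) formulation for linear objectives follows, using SciPy's linprog; the three objective vectors, the region S (reused from the earlier illustrative sketch), and the bound levels e_1, e_2 are assumptions for illustration only.

```python
# Bounded objective function method sketch: maximize objective-1 while
# objectives 2 and 3 become lower-bound constraints; data is illustrative.
import numpy as np
from scipy.optimize import linprog

c1 = np.array([2.0, 1.0])     # objective-1 (kept as the objective, maximized)
c2 = np.array([1.0, 3.0])     # objective-2 (becomes a constraint >= e1)
c3 = np.array([-1.0, 2.0])    # objective-3 (becomes a constraint >= e2)
e1, e2 = 6.0, 1.0

# Original feasible region S: x1 + 2*x2 <= 8, 3*x1 + x2 <= 9, x >= 0
A_S = np.array([[1.0, 2.0], [3.0, 1.0]])
b_S = np.array([8.0, 9.0])

# Reduced region S': add -c2.x <= -e1 and -c3.x <= -e2
A_ub = np.vstack([A_S, -c2, -c3])
b_ub = np.concatenate([b_S, [-e1, -e2]])

# linprog minimizes, so maximize c1.x by minimizing -c1.x
res = linprog(-c1, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, "objective-1 =", c1 @ res.x)
```

Varying e_1 and e_2 and re-solving generates different efficient solutions, one per choice of bound levels.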

Fig. 4: Original region S and reduced region S' in the (x_1, x_2) plane, with objective gradients w_1, w_2, w_3, bound levels e_1 and e_2, and points A, B, C, D, E, F, P.

In the above figure w_1, w_2, and w_3 are the gradients of the three objectives, respectively. If {objective-1} were to be maximized over the region S without taking the other objectives into consideration, the solution point would be E. But due to the lower bounds on the other objectives, the feasible region reduces to S' and the solution point is now P. It may be seen that changing e_1 does not affect {objective-1} as much as changing e_2 does. This observation gives rise to sensitivity analysis.

Exercise Problem

A reservoir is planned for both gravity and lift irrigation through withdrawals from its storage. The total storage available for both uses is limited to 5 units each year. It is decided to limit the gravity irrigation withdrawals in a year to 4 units. If x_1 is the allocation of water for gravity irrigation and x_2 is the allocation for lift irrigation, the two objectives planned to be maximized are expressed as

Maximize Z_1(X) = 3x_1 - 2x_2 and Z_2(X) = -x_1 + 4x_2

For the above problem, do the following:

(i) Generate a Pareto front of non-inferior (efficient) solutions by plotting the decision space and the objective space.
(ii) Formulate the multi-objective optimization model using the weighting approach, with w_1 and w_2 as the weights for gravity and lift irrigation respectively.
(iii) Solve it for (a) w_1 = 1 and w_2 = 2, and (b) w_1 = 2 and w_2 = 1. (A numerical check is sketched below.)
(iv) Formulate the problem using the constraint method.

[Solution to (iii): (a) x_1 = 0, x_2 = 5; (b) x_1 = 4, x_2 = 0 to 1]
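The following is a hedged sketch (assuming SciPy; not part of the original exercise) of how part (iii) could be checked numerically: the weighted objective w_1 Z_1 + w_2 Z_2 is maximized over the feasible region x_1 + x_2 ≤ 5, x_1 ≤ 4, x_1, x_2 ≥ 0 with linprog.

```python
# Numerical check of the weighted formulation in part (iii) of the exercise.
import numpy as np
from scipy.optimize import linprog

A_ub = np.array([[1.0, 1.0],   # x1 + x2 <= 5 (total storage)
                 [1.0, 0.0]])  # x1 <= 4 (gravity irrigation limit)
b_ub = np.array([5.0, 4.0])
Z1 = np.array([3.0, -2.0])
Z2 = np.array([-1.0, 4.0])

for w1, w2 in [(1.0, 2.0), (2.0, 1.0)]:
    c = -(w1 * Z1 + w2 * Z2)   # linprog minimizes, so negate the weighted sum
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(f"w1={w1}, w2={w2}: x = {res.x}")

# Expected: (0, 5) for w1=1, w2=2. For w1=2, w2=1 the weighted objective reduces
# to 5*x1, so any point with x1 = 4 and 0 <= x2 <= 1 is optimal (linprog returns one of them).
```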