
MS-C2105 Introduction to Optimization, Solutions 9 (Ehtamo)

Demo 1: KKT conditions with inequality constraints

Using the Karush-Kuhn-Tucker conditions, determine whether the points x¹ = (x₁, x₂) = (2, 4) and x² = (x₁, x₂) = (6, 2) are local optima of the problem

  max  x₁ + x₂
  s.t. −x₁² + x₂ ≤ 0
       x₁ + 2x₂ − 10 ≤ 0
       x₁ − 3x₂ ≤ 0.

Draw a picture.

Solution. The feasible set is illustrated in the following picture.

[Figure: the feasible set, bounded by the parabola x₂ = x₁² and the lines x₁ + 2x₂ = 10 and x₁ = 3x₂; x₁ runs from 0 to 7, x₂ from 0 to 9.]

The KKT conditions are:

  ∇L(x, λ) = 0
  λᵢ gᵢ(x) = 0 for all i
  λᵢ ≤ 0 for all i,

where L(x, λ) is the problem's Lagrange function, defined as

  L(x, λ) = f(x) + λ₁g₁(x) + λ₂g₂(x) + λ₃g₃(x).

The functions gᵢ(x) are the constraint functions and the λᵢ the respective Lagrange multipliers. The constraints are always written in the form gᵢ(x) ≤ 0. Here the constraint functions are:

  g₁(x) = −x₁² + x₂
  g₂(x) = x₁ + 2x₂ − 10
  g₃(x) = x₁ − 3x₂

Note. The conditions above are the KKT conditions for a maximization problem. For a minimization problem the conditions are similar, except that the inequality of the last condition is reversed, i.e. λᵢ ≥ 0 for all i.

Let us first look at x¹ = (2, 4). We determine which constraints are active:

  g₁(2, 4) = −2² + 4 = 0
  g₂(2, 4) = 2 + 2·4 − 10 = 0
  g₃(2, 4) = 2 − 3·4 = −10 < 0

Constraints g₁ and g₂ are active at this point, i.e. they have the value 0. The third constraint, on the other hand, is not active. By the second KKT condition, the Lagrange multipliers of inactive constraints are zero, so here λ₃ = 0. The gradients of the objective and constraint functions are:

  ∇f(x) = (1, 1),  ∇g₁(x) = (−2x₁, 1),  ∇g₂(x) = (1, 2),  ∇g₃(x) = (1, −3).

At the point under examination (dropping the λ₃ term, since λ₃ = 0):

  ∇L(2, 4, λ) = ∇f(2, 4) + λ₁∇g₁(2, 4) + λ₂∇g₂(2, 4)
              = (1 − 4λ₁ + λ₂, 1 + λ₁ + 2λ₂).

The first KKT condition, ∇L = 0, yields:

  1 − 4λ₁ + λ₂ = 0
  1 + λ₁ + 2λ₂ = 0

From the lower equation we solve λ₂ = −(1 + λ₁)/2. Substituting this into the upper equation gives 1 − 4λ₁ − (1 + λ₁)/2 = 0, i.e. 9λ₁ = 1. Thus λ₁ = 1/9, and from the lower equation λ₂ = −5/9. We have solved both Lagrange multipliers, and one of them is positive. The KKT conditions require that at a maximum all multipliers be non-positive, so the conditions are not satisfied and the point is not a local optimum.

Let us examine the other point, x² = (6, 2). At this point the constraint values are:

  g₁(6, 2) = −6² + 2 = −34 < 0
  g₂(6, 2) = 6 + 2·2 − 10 = 0
  g₃(6, 2) = 6 − 3·2 = 0

The active constraints are constraints 2 and 3. The first constraint is not active, so λ₁ = 0. We substitute the point x² = (6, 2) into the gradient of the Lagrange function and require it to be zero:

  ∇L(6, 2, λ) = (1 + λ₂ + λ₃, 1 + 2λ₂ − 3λ₃) = (0, 0).

Solving the pair of equations gives λ₂ = −4/5 and λ₃ = −1/5. The multipliers are negative, and thus the KKT conditions are satisfied. The point satisfies the KKT conditions and is an optimum candidate.
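The multiplier computations at both points are easy to verify numerically. Below is a minimal sketch of such a check (it assumes NumPy, which is not part of the original solutions): it solves ∇L = 0 for the multipliers of the active constraints and inspects their signs.

    import numpy as np

    grad_f = np.array([1.0, 1.0])

    # Point x1 = (2, 4): active constraints g1 and g2
    A1 = np.column_stack(([-4.0, 1.0], [1.0, 2.0]))   # columns: grad g1, grad g2 at (2, 4)
    lam1 = np.linalg.solve(A1, -grad_f)
    print(lam1)   # [ 0.111... -0.555...] -> lambda1 = 1/9 > 0: KKT fails

    # Point x2 = (6, 2): active constraints g2 and g3
    A2 = np.column_stack(([1.0, 2.0], [1.0, -3.0]))   # columns: grad g2, grad g3 at (6, 2)
    lam2 = np.linalg.solve(A2, -grad_f)
    print(lam2)   # [-0.8 -0.2] -> both non-positive: KKT holds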

Demo 2: KKT conditions with equality and inequality constraints

Solve the problem graphically and check whether the solution satisfies the KKT conditions.

  min  x₁² + x₂²
  s.t. (x₁ − 3)² + x₂² ≤ 1
       x₂ = x₁ − 2

Solution. The feasible set is illustrated in the following picture.

[Figure: the disk (x₁ − 3)² + x₂² ≤ 1 and the line x₂ = x₁ − 2; the feasible set is the chord of the disk cut off by the line.]

In the problem we minimize the (squared) distance to the origin. The optimum is thus the point of the feasible set nearest to the origin, x = (2, 0). For a problem with one inequality and one equality constraint the KKT conditions are:

  L(x, λ, µ) = f(x) + λg(x) + µh(x)
  ∇L(x, λ, µ) = 0
  λg(x) = 0
  λ ≥ 0

Note. The Lagrange multipliers of equality constraints do not have a positivity or negativity requirement like the multipliers of inequality constraints do. The equality-constraint multipliers are often denoted µᵢ.

The constraint functions g(x) and h(x) are

  g(x) = (x₁ − 3)² + x₂² − 1
  h(x) = −x₁ + x₂ + 2.

We determine whether the inequality constraint is active:

  g(2, 0) = (2 − 3)² + 0² − 1 = 0.

The constraint is active, and thus its Lagrange multiplier can be non-zero. We solve for the point at which the gradient of the Lagrange function is zero. For this we first need the gradients of the objective and constraint functions:

  ∇f(x) = (2x₁, 2x₂),  ∇g(x) = (2(x₁ − 3), 2x₂),  ∇h(x) = (−1, 1).

The gradient of the Lagrange function is zero at the optimum:

  ∇L(2, 0, λ, µ) = ∇f(2, 0) + λ∇g(2, 0) + µ∇h(2, 0)
                 = (4 − 2λ − µ, µ) = (0, 0).

We can solve the equations e.g. by solving the first equation for λ and substituting it into the second. This yields µ = 0 and λ = 2. The Lagrange multiplier of the inequality constraint is positive, so the point (2, 0) satisfies the necessary KKT conditions of the minimization problem.
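The same conclusion can be reached with a general-purpose solver. A minimal sketch, assuming SciPy (not used in the original solutions); note that SciPy's "ineq" convention is fun(x) ≥ 0, so the inequality is flipped:

    from scipy.optimize import minimize

    res = minimize(lambda x: x[0]**2 + x[1]**2, x0=(3.0, 1.0),
                   constraints=[{"type": "ineq", "fun": lambda x: 1 - (x[0] - 3)**2 - x[1]**2},
                                {"type": "eq",   "fun": lambda x: -x[0] + x[1] + 2}])
    print(res.x)   # approximately [2, 0], the point found graphically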

Solution.5 x.5 x.5 x + x.5.5.5.5.5.5 4 x The constraint functions are: g (x) x + x g (x) x We determine the active constraints at the point: g (, ) ( ) + g (, ) Both constraints are active. We determine if the point is the zero point of the Lagrange function s gradient. The gradients of the objective and constraint functions are: x f(x) g x (x) g x (x) We write the gradient of the Lagrange function in the examination point (,-): L(,, λ, λ ) f(, ) + λ g (, ) + λ g (, ) [ + λ ( ) + λ 4 λ + λ ( ) + λ ( ) + λ λ Now λ can be solved from the lower equation: λ. Replacing this into the upper equation yields λ 5. In a maximization problem the KKT conditions require that the Lagrange multipliers are negative. Now the multipliers are negative and so the point satisfies the KKT conditions. Thus, the point is a local optimum candidate. According to the picture the point is the global optimum. Problem : KKT conditions with equality and inequality constraints Solve the problem and see if the solution satisfies the KKT conditions. min x s.e. (x + 4) x x x + 4 x 5

Problem 2: KKT conditions with equality and inequality constraints

Solve the problem and check whether the solution satisfies the KKT conditions.

  min  x₂
  s.t. (x₁ + 4)² + x₂ ≤ 0
       x₂ = x₁ + 4
       x₁ ≤ 0

Solution. The feasible region:

[Figure: the parabola x₂ = −(x₁ + 4)² and the line x₂ = x₁ + 4; the feasible set is the part of the line lying below the parabola, between x₁ = −5 and x₁ = −4.]

The constraint functions are:

  g₁(x) = (x₁ + 4)² + x₂
  g₂(x) = x₁
  h(x) = −x₁ + x₂ − 4

From the picture we see that the optimum is at (−5, −1). Let us determine whether it satisfies the KKT conditions. The active constraints:

  g₁(−5, −1) = (−5 + 4)² + (−1) = 0
  g₂(−5, −1) = −5 < 0

Constraint 1 is active, but the other inequality constraint is not. This means that the Lagrange multiplier of the second inequality constraint is zero, λ₂ = 0. Next we check whether the point is a zero of the gradient of the Lagrange function. The gradients of the objective and constraint functions are:

  ∇f(x) = (0, 1),  ∇g₁(x) = (2(x₁ + 4), 1),  ∇h(x) = (−1, 1),  ∇g₂(x) = (1, 0).

The gradient of the Lagrange function is zero:

  ∇L(−5, −1, λ₁, µ, λ₂) = ∇f(−5, −1) + λ₁∇g₁(−5, −1) + µ∇h(−5, −1) + λ₂∇g₂(−5, −1)
                        = (−2λ₁ − µ, 1 + λ₁ + µ) = (0, 0).

From the lower equation we get λ₁ = −1 − µ, which can be substituted into the upper equation. This yields 2 + 2µ − µ = 0, i.e. µ = −2. Furthermore, λ₁ = −1 − µ = 1. The problem is a minimization problem, and the KKT conditions require that the Lagrange multipliers of the inequality constraints be non-negative. This is now the case, and the point satisfies the KKT conditions.
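With one equality constraint the stationarity system is still a 2×2 linear solve in (λ₁, µ). A minimal check, assuming NumPy (not part of the original solutions):

    import numpy as np

    grad_f  = np.array([0.0, 1.0])
    grad_g1 = np.array([-2.0, 1.0])   # 2*(x1 + 4) = -2 at x1 = -5
    grad_h  = np.array([-1.0, 1.0])
    A = np.column_stack((grad_g1, grad_h))
    lam1, mu = np.linalg.solve(A, -grad_f)
    print(lam1, mu)   # 1.0 -2.0 -> lambda1 > 0, as required in a minimization problem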

Problem 3: Linear programming

a) Solve the problem graphically and determine whether the solution satisfies the KKT conditions. b) Form the dual of the optimization problem and solve it. c) Compare the solution of the dual with the Lagrange multipliers of the primal problem.

  max  x₁ + x₂
  s.t. x₁ + 3x₂ ≤ 9
       2x₁ + x₂ ≤ 8
       x₁, x₂ ≥ 0

Solution. a) The feasible set is illustrated in the following picture:

[Figure: the feasible set bounded by the lines x₁ + 3x₂ = 9, 2x₁ + x₂ = 8 and the coordinate axes.]

The picture shows that the optimum is at the point (3, 2). The constraint functions are:

  g₁(x) = x₁ + 3x₂ − 9
  g₂(x) = 2x₁ + x₂ − 8
  g₃(x) = −x₁
  g₄(x) = −x₂

Determine the active constraints:

  g₁(3, 2) = 3 + 3·2 − 9 = 0
  g₂(3, 2) = 2·3 + 2 − 8 = 0
  g₃(3, 2) = −3 < 0
  g₄(3, 2) = −2 < 0

Constraints 1 and 2 are active, constraints 3 and 4 are not. Thus λ₃ = λ₄ = 0 and we can leave the respective terms out of the Lagrange function. The gradient of the Lagrange function is zero:

  ∇L(3, 2, λ) = ∇f(3, 2) + λ₁∇g₁(3, 2) + λ₂∇g₂(3, 2)
              = (1 + λ₁ + 2λ₂, 1 + 3λ₁ + λ₂) = (0, 0).

The solution of these equations is λ₁ = −1/5 and λ₂ = −2/5. The multipliers are negative and the problem is a maximization problem, so the KKT conditions are satisfied.

b) The dual of the problem is:

  min  9y₁ + 8y₂
  s.t. y₁ + 2y₂ ≥ 1
       3y₁ + y₂ ≥ 1
       y₁, y₂ ≥ 0

[Figure: the dual feasible set bounded by the lines y₁ + 2y₂ = 1 and 3y₁ + y₂ = 1.]

The dual can be solved graphically or, e.g., using Excel. The solution is y = (1/5, 2/5).

c) We see that the solution of the dual of a linear program coincides (up to sign) with the Lagrange multipliers of the primal: here the Lagrange multipliers are λ₁ = −1/5 and λ₂ = −2/5, while the solution of the dual is y₁ = 1/5 and y₂ = 2/5. Duality and Lagrange multipliers are very closely linked; more on the connection between them will be presented in the lectures.

Note. Here the Lagrange multipliers are the negatives of the dual variables. Sometimes the Lagrange function is defined as L(x, λ, µ) = f(x) − λg(x) − µh(x), in which case the dual variables and the Lagrange multipliers have the same sign.
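Both the primal solution and the multipliers can be checked with an LP solver. A sketch assuming SciPy 1.7 or newer (linprog minimizes, so the objective is negated; with the HiGHS method the dual values are reported as constraint marginals):

    from scipy.optimize import linprog

    res = linprog(c=[-1, -1], A_ub=[[1, 3], [2, 1]], b_ub=[9, 8],
                  bounds=[(0, None), (0, None)], method="highs")
    print(res.x)                  # [3. 2.] -> the optimum found graphically
    print(res.ineqlin.marginals)  # [-0.2 -0.4] -> the multipliers; the dual solution y is their negative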

Problem 4: Something's fishy

Solve the problem graphically and check whether the solution satisfies the KKT conditions. If it does not, what might be the reason?

  max  x₁
  s.t. (x₁ − 4)³ ≤ x₂
       x₂ ≤ 0

Solution. The feasible set is illustrated in the following picture:

[Figure: the feasible set bounded by the curve x₂ = (x₁ − 4)³ and the line x₂ = 0.]

The optimum is at (x₁, x₂) = (4, 0). Let us determine whether the point satisfies the KKT conditions. The constraint functions are

  g₁(x) = −x₂ + (x₁ − 4)³
  g₂(x) = x₂

Both constraints are active at the optimum, so the respective Lagrange multipliers can be non-zero. The gradients of the objective and constraint functions are:

  ∇f(x) = (1, 0),  ∇g₁(x) = (3(x₁ − 4)², −1),  ∇g₂(x) = (0, 1).

Requiring the gradient of the Lagrange function to be zero gives

  ∇L(4, 0, λ) = ∇f(4, 0) + λ₁∇g₁(4, 0) + λ₂∇g₂(4, 0)
              = (1, −λ₁ + λ₂) = (0, 0).

The first equation cannot be satisfied with any values of the multipliers λ₁ and λ₂. The point is clearly the optimum, so what went wrong? The KKT conditions are necessary conditions only under the assumption that the gradients of the active constraint functions are linearly independent. Here the gradients at the optimum are

  ∇g₁(4, 0) = (0, −1),  ∇g₂(4, 0) = (0, 1),

i.e. ∇g₁(4, 0) = −∇g₂(4, 0). The gradients are linearly dependent, and thus the optimum does not need to satisfy the KKT conditions. Note that this point is still the optimum.

Note. Usually one does not need to check the linear independence of the constraint gradients.
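The failure of the constraint qualification can be made visible numerically: the stationarity system has no exact solution, only a least-squares one. A minimal sketch assuming NumPy (not part of the original solutions):

    import numpy as np

    grad_f  = np.array([1.0, 0.0])
    grad_g1 = np.array([0.0, -1.0])   # [3*(x1-4)^2, -1] at (4, 0)
    grad_g2 = np.array([0.0, 1.0])
    A = np.column_stack((grad_g1, grad_g2))
    lam, *_ = np.linalg.lstsq(A, -grad_f, rcond=None)
    print(np.linalg.matrix_rank(A))          # 1 -> the active gradients are linearly dependent
    print(np.linalg.norm(A @ lam + grad_f))  # 1.0 -> grad L = 0 has no solution at all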

Problem 5: The legendary Enchanted Forest

Sepeteus is trying to catch the legendary forest troll. The Enchanted Forest limits the whereabouts of the troll to a certain area, defined by the constraints:

  x₂ ≤ e^(x₁)
  x₂ ≥ 0
  x₁ + x₂ ≤ 4
  x₂ ≤ x₁ − 2

Sepeteus has a trap at his disposal. He has also built a noise-making contraption at the point (1, −2). He knows that the troll is afraid of the contraption and runs as far away from it as possible; the troll will, however, stay inside the Enchanted Forest.

a) Solve for the point where the troll will run, i.e. where Sepeteus should place the trap. You can solve the point graphically.
b) Determine whether the point satisfies the KKT conditions.

Solution. Let us draw a map of the Enchanted Forest:

[Figure: map of the forest, bounded by the curve x₂ = e^(x₁), the x₁-axis, and the lines x₁ + x₂ = 4 and x₂ = x₁ − 2; the noise-making contraption at (1, −2) is marked with a cross.]

The troll wants to maximize its distance from the contraption, so we can write the troll's optimization problem as:

  max  (x₁ − 1)² + (x₂ + 2)²
  s.t. x₂ − e^(x₁) ≤ 0
       −x₂ ≤ 0
       x₁ + x₂ − 4 ≤ 0
       −x₁ + x₂ + 2 ≤ 0

From the map we see that the troll should run to the point (3, 1). Let us determine whether the point satisfies the KKT conditions. The constraint functions are:

  g₁(x) = x₂ − e^(x₁)
  g₂(x) = −x₂
  g₃(x) = x₁ + x₂ − 4
  g₄(x) = −x₁ + x₂ + 2

The activity of the constraints at (3, 1):

  g₁(3, 1) = 1 − e³ < 0
  g₂(3, 1) = −1 < 0
  g₃(3, 1) = 3 + 1 − 4 = 0
  g₄(3, 1) = −3 + 1 + 2 = 0

Constraints 1 and 2 are not active, so the respective multipliers are zero: λ₁ = λ₂ = 0. The gradients of the objective and constraint functions:

  ∇f(x) = (2(x₁ − 1), 2x₂ + 4)
  ∇g₁(x) = (−e^(x₁), 1),  ∇g₂(x) = (0, −1),  ∇g₃(x) = (1, 1),  ∇g₄(x) = (−1, 1).

The gradient of the Lagrange function is zero:

  ∇L(3, 1, λ) = ∇f(3, 1) + λ₃∇g₃(3, 1) + λ₄∇g₄(3, 1)
              = (4 + λ₃ − λ₄, 6 + λ₃ + λ₄) = (0, 0).

The solution to this pair of equations is λ₃ = −5 and λ₄ = −1. Both Lagrange multipliers are negative, so the KKT conditions of the maximization problem are satisfied.
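As in the earlier demos, the final multiplier solve is a small linear system. A minimal numerical check, assuming NumPy (not part of the original solutions):

    import numpy as np

    grad_f = np.array([4.0, 6.0])                    # [2*(3-1), 2*1+4] at the point (3, 1)
    A = np.column_stack(([1.0, 1.0], [-1.0, 1.0]))   # columns: grad g3, grad g4
    lam3, lam4 = np.linalg.solve(A, -grad_f)
    print(lam3, lam4)   # -5.0 -1.0 -> both negative, so the KKT conditions hold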