Demo 1: KKT conditions with inequality constraints


MS-C2105 Introduction to Optimization, Solutions 9 (Ehtamo)

Using the Karush-Kuhn-Tucker conditions, determine whether the point $x^1 = (x_1, x_2) = (\,\cdot\,, 4)$ or the point $x^2 = (x_1, x_2) = (6, \,\cdot\,)$ is a local optimum of the problem. Draw a picture.

$$\max\ f(x) \quad \text{s.t.} \quad g_i(x) \le 0, \quad i = 1, 2, 3.$$

Solution. The feasible set is illustrated in the following picture.

[Figure: the feasible set in the $(x_1, x_2)$-plane, bounded by the three constraint curves, with both candidate points marked.]

The KKT conditions are
$$\nabla L = 0, \qquad \lambda_i g_i(x) = 0 \ \ \forall i, \qquad \lambda_i \le 0 \ \ \forall i,$$
where $L(x, \lambda)$ is the problem's Lagrange function, defined as
$$L(x, \lambda) = f(x) + \lambda_1 g_1(x) + \lambda_2 g_2(x) + \lambda_3 g_3(x).$$
The functions $g_i(x)$ are the constraint functions and the $\lambda_i$ are the respective Lagrange multipliers. The constraints are always written in the form $g_i(x) \le 0$; here $g_1$, $g_2$ and $g_3$ are obtained by moving all terms of each constraint to the left-hand side.
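Checking these three conditions at a given point is mechanical once the constraint values and gradients there are known. As an illustration (not part of the original demo; the numbers below are hypothetical), a minimal Python sketch of the check for a maximization problem in exactly this form:

```python
import numpy as np

def kkt_check_max(grad_f, grads_g, g_vals, lambdas, tol=1e-8):
    """Check the KKT conditions of a maximization problem with
    constraints g_i(x) <= 0, in the convention of this demo:
      stationarity:    grad f + sum_i lambda_i grad g_i = 0
      complementarity: lambda_i * g_i(x) = 0 for every i
      sign condition:  lambda_i <= 0 for every i (max problem)
    All arguments are values already evaluated at the candidate point."""
    grad_L = grad_f + sum(lam * g for lam, g in zip(lambdas, grads_g))
    stationary = np.linalg.norm(grad_L) < tol
    complementary = all(abs(lam * gv) < tol for lam, gv in zip(lambdas, g_vals))
    signs_ok = all(lam <= tol for lam in lambdas)
    return stationary and complementary and signs_ok

# Hypothetical data for illustration (not Demo 1's numbers):
grad_f = np.array([1.0, 1.0])
grads_g = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
g_vals = [0.0, 0.0]       # both constraints active at the point
lambdas = [-1.0, -1.0]    # negative, as a maximization problem requires
print(kkt_check_max(grad_f, grads_g, g_vals, lambdas))  # True
```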

Note. The conditions above are the KKT conditions for a maximization problem. For a minimization problem the conditions are similar, except that the inequality in the last condition is reversed, i.e. $\lambda_i \ge 0$ for all $i$.

Let us first look at the point $x^1 = (\,\cdot\,, 4)$. We determine which constraints are active:
$$g_1(x^1) = 0, \qquad g_2(x^1) = 0, \qquad g_3(x^1) = -7.$$
Constraints $g_1$ and $g_2$ are active at this point, i.e. they take the value $0$; the third constraint is not active. By the complementary slackness condition $\lambda_i g_i(x) = 0$, the Lagrange multipliers of inactive constraints are zero, so here $\lambda_3 = 0$.

Next we write down the gradients of the objective and constraint functions, $\nabla f$, $\nabla g_1$, $\nabla g_2$, $\nabla g_3$, and evaluate them at the point under examination:
$$\nabla L(x^1, \lambda_1, \lambda_2, \lambda_3) = \nabla f(x^1) + \lambda_1 \nabla g_1(x^1) + \lambda_2 \nabla g_2(x^1) + \lambda_3 \nabla g_3(x^1).$$
Remembering that $\lambda_3 = 0$, the first KKT condition ($\nabla L = 0$) yields a pair of linear equations in $\lambda_1$ and $\lambda_2$: we solve $\lambda_2$ from the lower equation and substitute it into the upper one, which can then be solved for $\lambda_1$; the lower equation then gives $\lambda_2 = 4/5$. We have now solved both Lagrange multipliers, and one of them is positive. The KKT conditions of a maximization problem require that at the optimum all multipliers are negative, so the KKT conditions are not satisfied and the point is not a local optimum.
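The substitution step above is just the solution of a small linear system: with $\lambda_3 = 0$, stationarity reads $\nabla f + \lambda_1 \nabla g_1 + \lambda_2 \nabla g_2 = 0$, two equations in two unknowns. A minimal numerical sketch, with made-up gradient values since only the procedure matters here:

```python
import numpy as np

# Stationarity with two active constraints:
#   grad_f + lam1 * grad_g1 + lam2 * grad_g2 = 0,
# i.e. the linear system A @ [lam1, lam2] = -grad_f, where the
# columns of A are the active constraint gradients at the point.
grad_f = np.array([0.0, 8.0])                 # hypothetical value
A = np.column_stack([np.array([1.0, 2.0]),    # hypothetical grad g1
                     np.array([-1.0, 1.0])])  # hypothetical grad g2
lam = np.linalg.solve(A, -grad_f)
print(lam)               # the two multipliers
print(np.all(lam <= 0))  # max problem: all multipliers must be <= 0
```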

Let us examine the other point, $x^2 = (6, \,\cdot\,)$. Evaluating the constraints at this point shows that the active constraints are constraints $2$ and $3$. The first constraint is not active, so $\lambda_1 = 0$. We substitute the point $x^2$ into the gradient of the Lagrange function and require it to be zero:
$$\nabla L(x^2, \lambda_1, \lambda_2, \lambda_3) = \nabla f(x^2) + \lambda_2 \nabla g_2(x^2) + \lambda_3 \nabla g_3(x^2) = 0.$$
From this we can solve the Lagrange multipliers: $\lambda_2 = -65.4$ and $\lambda_3 = -4.6$. The multipliers are negative, so the KKT conditions are satisfied and the point is a candidate for the optimum.

Demo 2: KKT conditions with equality and inequality constraints

Solve the problem graphically and check whether the solution satisfies the KKT conditions:
$$\min\ x_1^2 + x_2^2 \quad \text{s.t.} \quad g(x) \le 0, \;\; h(x) = 0,$$
where $g$ is a quadratic (disk) constraint and $h$ a linear equality constraint, written out below.

Solution. The feasible region is illustrated in the following picture.

[Figure: the feasible region, an arc of the disk $g(x) \le 0$ cut by the line $h(x) = 0$.]

In this problem we minimize the squared distance to the origin. The optimum is therefore the point $x^*$ of the feasible set nearest to the origin. The KKT conditions are
$$L(x, \lambda, \mu) = f(x) + \lambda g(x) + \mu h(x), \qquad \nabla L = 0, \qquad \lambda g(x) = 0, \qquad \lambda \ge 0.$$

Note. The Lagrange multipliers of equality constraints do not have a positivity or negativity requirement like those of inequality constraints. The multipliers of equality constraints are often denoted $\mu_i$.

The constraint functions are
$$g(x) = (x_1 - \cdot)^2 + x_2^2 - \cdot\,, \qquad h(x) = x_1 - x_2 + \cdot\,.$$
We determine whether the inequality constraint is active: evaluating $g$ at $x^*$ gives $g(x^*) = 0$, so the constraint is active and the corresponding Lagrange multiplier can be non-zero.

We now solve for the point at which the gradient of the Lagrange function vanishes. For this we first determine the gradients of the objective and constraint functions:
$$\nabla f(x) = \begin{bmatrix} 2x_1 \\ 2x_2 \end{bmatrix}, \qquad \nabla g(x) = \begin{bmatrix} 2(x_1 - \cdot) \\ 2x_2 \end{bmatrix}, \qquad \nabla h(x) = \begin{bmatrix} 1 \\ -1 \end{bmatrix}.$$
Requiring the gradient of the Lagrange function to vanish at the optimum,
$$\nabla L(x^*, \lambda, \mu) = \nabla f(x^*) + \lambda \nabla g(x^*) + \mu \nabla h(x^*) = 0,$$
gives two linear equations in $\lambda$ and $\mu$. We can solve them e.g. by first solving $\lambda$ from the first equation and substituting it into the second. The Lagrange multiplier of the inequality constraint comes out positive, so the point $x^*$ satisfies the necessary KKT conditions of the minimization problem.
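With an equality constraint present, the stationarity system in $(\lambda, \mu)$ can be solved symbolically. Below is a small sympy sketch on a hypothetical problem of the same shape as Demo 2 (squared distance to the origin, one disk constraint active at the candidate point, one line constraint); the data is invented, not the demo's:

```python
import sympy as sp

x1, x2, lam, mu = sp.symbols('x1 x2 lam mu', real=True)

f = x1**2 + x2**2            # squared distance to the origin
g = (x1 - 2)**2 + x2**2 - 2  # invented inequality constraint, g <= 0
h = x2 - 1                   # invented equality constraint, h = 0

L = f + lam * g + mu * h
point = {x1: 1, x2: 1}       # candidate point: g and h are both 0 there

# Stationarity: both components of grad L vanish at the candidate point.
eqs = [sp.Eq(sp.diff(L, v).subs(point), 0) for v in (x1, x2)]
sol = sp.solve(eqs, [lam, mu], dict=True)[0]
print(sol)             # {lam: 1, mu: -4}
print(sol[lam] >= 0)   # min problem: inequality multiplier must be >= 0
```

Here the candidate $(1, 1)$ is in fact the constrained minimizer, and the positive $\lambda$ confirms the KKT conditions, just as in the demo.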

Problem 1: KKT conditions with inequality constraints

Using the Karush-Kuhn-Tucker conditions, determine whether the point $x = (x_1, x_2)$ marked in the picture below is the optimum of the problem
$$\max\ f(x) \quad \text{s.t.} \quad g_1(x) \le 0, \;\; g_2(x) \le 0.$$

Solution.

[Figure: the feasible set bounded by the two constraint curves, with the candidate point on its boundary.]

We determine the active constraints by evaluating $g_1$ and $g_2$ at the point: both constraints are active. We then check whether the point is a zero of the gradient of the Lagrange function. Writing out the gradients of the objective and constraint functions and evaluating them at the examination point, the stationarity condition
$$\nabla L = \nabla f + \lambda_1 \nabla g_1 + \lambda_2 \nabla g_2 = 0$$
gives two equations. Now $\lambda_2$ can be solved from the lower equation, and substituting it into the upper equation yields $\lambda_1$. In a maximization problem the KKT conditions require that the Lagrange multipliers are negative; here both multipliers are negative, so the point satisfies the KKT conditions and is a candidate for a local optimum. According to the picture, the point is in fact the global optimum.

Problem 2: KKT conditions with equality and inequality constraints

Solve the problem and check whether the solution satisfies the KKT conditions:
$$\min\ f(x) \quad \text{s.t.} \quad g_1(x) \le 0, \;\; g_2(x) \le 0, \;\; h(x) = 0.$$

Solution. The feasible region:

[Figure: the feasible region bounded by the parabola $g_1(x) = 0$, the linear inequality constraint $g_2(x) = 0$ and the equality constraint $h(x) = 0$; the optimum lies at the point with $x_1 = -5$.]

The first inequality constraint function is $g_1(x) = (x_1 + 4)^2 + x_2$; the second, $g_2$, is linear. From the picture we see that the optimum is at the point with $x_1 = -5$. Let us determine whether it satisfies the KKT conditions. Evaluating the constraints there shows that constraint $g_1$ is active but the other inequality constraint is not, so the Lagrange multiplier of the second inequality constraint is zero, $\lambda_2 = 0$.

Next we check whether the point is a zero of the gradient of the Lagrange function. We write out the gradients $\nabla f$, $\nabla g_1$, $\nabla h$ and $\nabla g_2$ (the first component of $\nabla g_1$ is $2(x_1 + 4)$), evaluate them at the point, and require
$$\nabla L(x, \lambda_1, \mu, \lambda_2) = \nabla f + \lambda_1 \nabla g_1 + \mu \nabla h + \lambda_2 \nabla g_2 = 0.$$
From the lower equation we get $\lambda_1$ in terms of $\mu$; substituting this into the upper equation determines $\mu$, and then $\lambda_1$. The problem is a minimization problem, so the KKT conditions require that the Lagrange multipliers of the inequality constraints are positive. This is the case here, and the point satisfies the KKT conditions.
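In each of these problems the first step is the same bookkeeping: evaluate every $g_i$ at the candidate point, call the ones equal to zero active, and set the multipliers of the inactive ones to zero by complementary slackness. In floating point this needs a tolerance; a minimal helper (the two constraints below are invented, merely in the spirit of Problem 2):

```python
def classify_constraints(g_funcs, x, tol=1e-9):
    """Split inequality constraints g_i(x) <= 0 into active and inactive
    ones at the point x. By complementary slackness the inactive ones
    get multiplier 0; the active ones keep a free multiplier."""
    active, inactive = [], []
    for i, g in enumerate(g_funcs, start=1):
        (active if abs(g(x)) <= tol else inactive).append(i)
    return active, inactive

# Invented constraints, merely in the spirit of Problem 2:
g1 = lambda x: (x[0] + 4)**2 + x[1]
g2 = lambda x: -x[0] - 6
print(classify_constraints([g1, g2], (-5.0, -1.0)))  # ([1], [2]) -> lambda_2 = 0
```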

Problem 3: Linear programming

a) Solve the problem graphically and determine whether the solution satisfies the KKT conditions.
b) Form the dual of the optimization problem and solve it.
c) Compare the solution of the dual with the Lagrange multipliers of the primal problem.

$$\max\ c_1 x_1 + c_2 x_2 \quad \text{s.t.} \quad a_{11} x_1 + a_{12} x_2 \le 9, \;\; a_{21} x_1 + a_{22} x_2 \le 8, \;\; x_1, x_2 \ge 0.$$

Solution. a) The feasible set is illustrated in the following picture.

[Figure: the feasible polytope bounded by the two constraint lines and the coordinate axes; the optimum is the vertex where the lines $a_{11} x_1 + a_{12} x_2 = 9$ and $a_{21} x_1 + a_{22} x_2 = 8$ intersect.]

The picture shows the optimal vertex. The constraint functions are
$$g_1(x) = a_{11} x_1 + a_{12} x_2 - 9, \quad g_2(x) = a_{21} x_1 + a_{22} x_2 - 8, \quad g_3(x) = -x_1, \quad g_4(x) = -x_2.$$
Evaluating them at the optimum shows that constraints $1$ and $2$ are active while constraints $3$ and $4$ are not. Thus $\lambda_3 = \lambda_4 = 0$, and we can leave the respective terms out of the Lagrange function. Setting the gradient of the Lagrange function to zero,
$$\nabla L = \nabla f + \lambda_1 \nabla g_1 + \lambda_2 \nabla g_2 = 0,$$
gives a pair of linear equations whose solution is $\lambda_1$ and $\lambda_2$, both negative. Since the problem is a maximization problem, the KKT conditions are satisfied.

b) The dual of the problem is
$$\min\ 9 y_1 + 8 y_2 \quad \text{s.t.} \quad a_{11} y_1 + a_{21} y_2 \ge c_1, \;\; a_{12} y_1 + a_{22} y_2 \ge c_2, \;\; y_1, y_2 \ge 0.$$

[Figure: the dual feasible region in the $(y_1, y_2)$-plane with the optimal vertex marked.]

The dual can be solved graphically or with Excel. Its solution is the vertex $y^* = (y_1^*, y_2^*)$.

c) We see that the solution of the dual of a linear problem coincides, up to sign, with the Lagrange multipliers of the primal: here $\lambda_1 = -y_1^*$ and $\lambda_2 = -y_2^*$. Duality and Lagrange multipliers are very closely linked; more on the connection between them will be presented in the lectures.

Note. Here the Lagrange multipliers are the negatives of the dual variables. Sometimes the Lagrange function is defined as $L(x, \lambda, \mu) = f(x) - \lambda g(x) - \mu h(x)$, in which case the dual variables and the Lagrange multipliers have the same sign.
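The primal-dual relationship of part c) is easy to reproduce numerically. Since the original coefficients are not reproduced here, the sketch below uses an invented LP of the same shape (two $\le$ constraints with right-hand sides 9 and 8, nonnegative variables); with scipy's HiGHS backend, the dual solution can be read off the constraint marginals:

```python
import numpy as np
from scipy.optimize import linprog

# Invented LP of the same shape as Problem 3:
#   max 2*x1 + 3*x2  s.t.  x1 + 2*x2 <= 9,  2*x1 + x2 <= 8,  x >= 0.
c = np.array([2.0, 3.0])
A_ub = np.array([[1.0, 2.0], [2.0, 1.0]])
b_ub = np.array([9.0, 8.0])

# linprog minimizes, so solve min -c @ x.
res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
              method="highs")
print(res.x)  # primal optimum, a vertex where both constraints are active
# The marginals are d(objective)/d(rhs); for this max problem the dual
# variables are their negatives, matching the note on sign conventions.
print(-res.ineqlin.marginals)  # dual optimum y*, both components >= 0
```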

Problem 4: Something's fishy

Solve the problem graphically and check whether the solution satisfies the KKT conditions. If it does not, what might be the reason?

$$\max\ x_1 \quad \text{s.t.} \quad x_2 + (x_1 - 4)^3 \le 0, \qquad x_2 \ge 0$$

Solution. The feasible set is illustrated in the following picture.

[Figure: the feasible set between the curve $x_2 = -(x_1 - 4)^3$ and the $x_1$-axis, narrowing to a cusp at $(4, 0)$.]

The optimum is at $(x_1, x_2) = (4, 0)$. Let us determine whether the point satisfies the KKT conditions. The constraint functions are
$$g_1(x) = x_2 + (x_1 - 4)^3, \qquad g_2(x) = -x_2.$$
Both constraints are active at the optimum, so the respective Lagrange multipliers can be non-zero. The gradients of the objective and constraint functions are
$$\nabla f(x) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad \nabla g_1(x) = \begin{bmatrix} 3(x_1 - 4)^2 \\ 1 \end{bmatrix}, \qquad \nabla g_2(x) = \begin{bmatrix} 0 \\ -1 \end{bmatrix}.$$
Setting the gradient of the Lagrange function to zero,
$$\nabla L(4, 0, \lambda_1, \lambda_2) = \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \lambda_1 \begin{bmatrix} 3(4 - 4)^2 \\ 1 \end{bmatrix} + \lambda_2 \begin{bmatrix} 0 \\ -1 \end{bmatrix} = \begin{bmatrix} 1 \\ \lambda_1 - \lambda_2 \end{bmatrix} = 0,$$
we see that the first equation cannot be satisfied by any values of the multipliers $\lambda_1$ and $\lambda_2$. The point is clearly the optimum, so what went wrong? The KKT conditions are necessary conditions only under the assumption that the gradients of the active constraint functions are linearly independent. Here the gradients at the optimum are
$$\nabla g_1(4, 0) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad \nabla g_2(4, 0) = \begin{bmatrix} 0 \\ -1 \end{bmatrix},$$
i.e. $\nabla g_1(4, 0) = -\nabla g_2(4, 0)$. The gradients are linearly dependent, and thus the optimum does not need to satisfy the KKT conditions. Note that the point is nevertheless the optimum.

Note. Usually one does not need to check the linear independence of the constraint gradients.
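The regularity assumption used here, linear independence of the active constraint gradients (often called LICQ), can be checked with a rank computation. A small sketch reproducing the situation of Problem 4:

```python
import numpy as np

def licq_holds(active_grads):
    """LICQ: the gradients of the active constraints, stacked as rows,
    must have full row rank at the candidate point."""
    G = np.vstack(active_grads)
    return np.linalg.matrix_rank(G) == G.shape[0]

# Active constraint gradients at the optimum (4, 0) of Problem 4:
grad_g1 = np.array([3.0 * (4 - 4)**2, 1.0])  # = (0, 1)
grad_g2 = np.array([0.0, -1.0])
print(licq_holds([grad_g1, grad_g2]))  # False: grad g1 = -grad g2
```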

Problem 5: The legendary Enchanted forest

Sepeteus is trying to catch the legendary forest troll. The Enchanted forest confines the troll to a certain area, defined by four constraints $g_i(x) \le 0$, $i = 1, \dots, 4$ (written out below). Sepeteus has a trap at his disposal. He has also built a noise-making contraption at a fixed point, marked with a cross on the map below. He knows the troll is afraid of the contraption and runs as far away from it as possible, while still staying inside the Enchanted forest.

a) Find the point the troll will run to, i.e. where Sepeteus should place the trap. You may solve the point graphically.
b) Determine whether the point satisfies the KKT conditions.

Solution. Let us draw a map of the Enchanted forest.

[Figure: the Enchanted forest, bounded by the four constraint curves; the noise-making contraption is marked with a cross.]

The troll wants to maximize its distance from the contraption, so we can write the troll's optimization problem as
$$\max\ (x_1 - \cdot)^2 + (x_2 + \cdot)^2 \quad \text{s.t.} \quad g_i(x) \le 0, \quad i = 1, \dots, 4,$$
i.e. the troll maximizes its squared distance from the contraption over the forest. From the map we see the point the troll should run to. Let us determine whether this point satisfies the KKT conditions.

The constraint functions are $g_1, \dots, g_4$: the first one contains the exponential term $e^{x_1}$, and the fourth one is linear, $g_4(x) = x_1 + x_2 - \cdot\,$. Evaluating all four at the candidate point shows that constraints $1$ and $3$ are not active, so the respective multipliers are zero: $\lambda_1 = \lambda_3 = 0$.

We write out the gradients of the objective and constraint functions and require the gradient of the Lagrange function to vanish at the point:
$$\nabla L = \nabla f + \lambda_2 \nabla g_2 + \lambda_4 \nabla g_4 = 0.$$
This is a pair of linear equations in $\lambda_2$ and $\lambda_4$, from which both multipliers can be solved. Both Lagrange multipliers turn out negative, so the KKT conditions of this maximization problem are satisfied.
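Finally, hand-computed KKT checks like the ones in these demos can be cross-checked with a general-purpose solver. scipy's trust-constr method reports Lagrange multipliers at the solution; the sketch below runs it on a small invented minimization problem (not the troll problem, whose data is not reproduced here) and reads the multiplier off the result. Note that solver sign conventions may differ from the ones used above:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Invented problem: min x1^2 + x2^2  s.t.  x1 + x2 >= 2.
# By hand: the optimum is (1, 1) and the multiplier has magnitude 2.
f = lambda x: x[0]**2 + x[1]**2
con = NonlinearConstraint(lambda x: x[0] + x[1], 2, np.inf)

res = minimize(f, x0=np.array([5.0, 0.0]), method="trust-constr",
               constraints=[con])
print(np.round(res.x, 4))  # approximately [1, 1]
print(res.v)               # Lagrange multiplier(s) reported by the solver;
                           # the sign follows scipy's convention
```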
