In other words, we want to find the domain points that yield the maximum or minimum values (extrema) of the function.

2 The Lagrange multiplier method is a mathematical technique for performing constrained optimization of differentiable functions. Recall unconstrained optimization of differentiable functions, in which we want to find the extreme values (max or min values) of a differentiable function. In other words, we want to find the domain points that yield the maximum or minimum values (extrema) of the function. We determine the extrema of the function by first finding its critical domain points, which are points where the gradient (i.e., each partial derivative) is zero. These points may yield (local) maxima, (local) minima, or saddle points of the function. We then check the properties of the second derivatives or simply inspect the function values to determine the function's extreme values. 2
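
As a small illustration of the unconstrained procedure described above, the sketch below (MATLAB, assuming the Symbolic Math Toolbox is available; the function f = x^3 - 3x + y^2 is a made-up example, not one taken from these slides) finds the critical points by setting the gradient to zero and then classifies them with the Hessian:

    syms x y
    f = x^3 - 3*x + y^2;                  % hypothetical example objective
    g = gradient(f, [x, y]);              % symbolic gradient: [3*x^2 - 3; 2*y]
    crit = solve(g == 0, [x, y]);         % critical points: (1, 0) and (-1, 0)
    H = hessian(f, [x, y]);               % Hessian for the second-derivative test
    for k = 1:numel(crit.x)
        Hk = double(subs(H, [x, y], [crit.x(k), crit.y(k)]));
        if all(eig(Hk) > 0)
            label = 'local minimum';
        elseif all(eig(Hk) < 0)
            label = 'local maximum';
        else
            label = 'saddle point';
        end
        fprintf('(%g, %g): %s\n', double(crit.x(k)), double(crit.y(k)), label);
    end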

3 In constrained optimization of differentiable functions, we still have a differentiable function we want to maximize or minimize, but we have restrictions on the domain points that we can consider. The set of these points is called the feasible region, and it is given by a constraint function, typically formulated as g(x, y) = 0. Example: consider an inverted paraboloid as the function to maximize, constrained by a set of points defined by a line in the x-y plane. (Figure: the function's value at each of the domain points chosen from the constraint curve; consider only the domain points that are on the constraint curve.) 3

4 Example: Constrained Maximization. Suppose you want to maximize f(x, y) = x + y subject to the constraint x^2 + y^2 = 1 (see the figure). 1. The feasible set consists of points on the unit circle, plotted in the x-y plane. 2. The feasible set, dropped for visual clarity. 3. The function's value for each point of the feasible set. 4. The function's maximum constrained value: f(√2/2, √2/2) = √2. 4

5 We can solve this constrained optimization problem using calculus and substitution. First, we write the formal expression for the constrained optimization (maximization): maximize f(x, y) = x + y subject to x^2 + y^2 = 1. Solution by substitution: y = √(1 − x^2), so F(x) = x + √(1 − x^2). Now, set the 1st derivative to zero and find the critical points: F'(x) = 1 − x/√(1 − x^2) = 0, giving x = √2/2 (the relevant critical point, as determined by inspection). Substituting the critical point into f: f(√2/2, √2/2) = √2. 5
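
A quick numeric cross-check of the substitution solution (a sketch that assumes the f(x, y) = x + y, x^2 + y^2 = 1 example reconstructed above; fminbnd ships with base MATLAB):

    F = @(x) -(x + sqrt(1 - x.^2));       % substitute y = sqrt(1 - x^2) and negate, so minimizing F maximizes f
    [xstar, Fstar] = fminbnd(F, -1, 1);   % search the interval of valid x values
    ystar = sqrt(1 - xstar^2);
    fprintf('x = %.4f, y = %.4f, f = %.4f\n', xstar, ystar, -Fstar);
    % prints approximately x = 0.7071, y = 0.7071, f = 1.4142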

6 Lagrange Multiplier Method Outline. We now consider an alternative way to perform constrained optimization of differentiable functions, called the Lagrange multiplier method, or just the Lagrangian. We will give the formulation of the basic Lagrange multiplier method and a procedure for solving it. This will be followed by an intuitive derivation of the basic Lagrange equations, and then by an intuitive derivation of the generalized Lagrange multiplier method. Finally, the primal and dual forms of the Lagrange multiplier method will be given; the primal and dual forms offer equivalent methods for performing constrained optimization. 6

7 (Basic) Lagrange Multiplier Method. Consider a basic constrained optimization (maximize or minimize) problem: optimize f(x, y) subject to g(x, y) = 0. The formulation of the Basic Lagrange Multiplier method for this constrained optimization problem is sketched below. We set the partial derivatives of the Lagrangian to zero, and then find the optimal values of the variables that maximize (or minimize) the function. The λ is called the Lagrange multiplier. 7
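
In symbols, a sketch of the standard formulation (using the equality-constraint convention g(x, y) = 0 described on the earlier slides):

    \text{maximize (or minimize) } f(x, y) \quad \text{subject to } g(x, y) = 0
    \mathcal{L}(x, y, \lambda) = f(x, y) + \lambda\, g(x, y)
    \frac{\partial \mathcal{L}}{\partial x} = 0, \qquad \frac{\partial \mathcal{L}}{\partial y} = 0, \qquad \frac{\partial \mathcal{L}}{\partial \lambda} = g(x, y) = 0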

8 Why does the Basic Lagrange Multiplier equation include the λ g(x, y) term? We know from basic calculus that setting the function's derivative to zero yields the function's critical points, and then, from the critical points, we can determine the extrema of the function. The constraint function must also be satisfied, and setting the partial derivative with respect to λ to zero gives the constraint g(x, y) = 0, and so there is some intuition for including λ g(x, y) in the Basic Lagrange equation. The set of simultaneous equations ensures that only the points satisfying the constraint may be chosen as critical points. 8

9 More Motivation for Including the λ g(x, y) Term. To obtain an intuitive appreciation for why the Lagrangian is formulated as such, consider a contour plot of the previous optimization example. (Figure: surface plot of f and its contour plot; the contour lines are level curves of f, with the level curve through (-2, 2, 0) and (2, -2, 0) marked.) 9

10 Gradient of f. Note that the gradient of f is perpendicular to the level curves of f and points in the direction of the maximum rate of change of the function. Note that this direction is given in the x-y plane. For f(x, y) = x + y, the gradient is parallel with the diagonal in the x-y plane. 10

11 Contours of f and g. Now consider the contours of f and the constraint level curve g(x, y) = 0. (Figure: contour plot of f; constraint level curve in standard form: x^2 + y^2 - 1 = 0.) 11

12 Gradient of g. The constraint is a level curve of the paraboloid x^2 + y^2 - 1, but plotted in the x-y plane. The gradient of the level curve, ∇g = (2x, 2y), is perpendicular to the level curve and points in the outward direction. (Figure: constraint level curve x^2 + y^2 = 1.) 12
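
The picture described on these slides can be reproduced with a short sketch like the one below (base MATLAB; it assumes the f(x, y) = x + y, x^2 + y^2 = 1 example, so the touching point (√2/2, √2/2) and the gradients ∇f = (1, 1) and ∇g = (2x, 2y) come from that example):

    [x, y] = meshgrid(linspace(-2, 2, 400));
    figure; hold on; axis equal; grid on;
    contour(x, y, x + y, 15);                             % level curves of f(x, y) = x + y
    contour(x, y, x.^2 + y.^2, [1 1], 'LineColor', 'k');  % the constraint circle x^2 + y^2 = 1
    p = [sqrt(2)/2, sqrt(2)/2];                           % constrained maximum (touching point)
    quiver(p(1), p(2), 1, 1, 0.4, 'r');                   % gradient of f at p: (1, 1)
    quiver(p(1), p(2), 2*p(1), 2*p(2), 0.4, 'b');         % gradient of g at p: (2x, 2y)
    legend('level curves of f', 'constraint g = 0', '\nabla f', '\nabla g', 'Location', 'southoutside');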

13 At Solution Points. Informally, notice that the slope of the tangent line of the contour of f (note: the tangent line is the contour line itself, since the contours of f are straight lines here) is equal to the slope of the tangent line of the constraint level curve at the critical points. Also, informally, note that at the intersection point of any other contour line of f and the level curve, the slopes of their tangent lines appear to be different. (Figure annotations: contour lines of f; a point on a contour line; the constraint level curve; at the solution point the slopes appear to be the same.) 13

14
% MATLAB code used to generate the surface/contour figure for the x + y example.
close all;
d = linspace(-2, 2, 1000);
[x, y] = meshgrid(d, d);
figure; hold all; grid on;
contour(x, y, x + y, 50);                  % level curves of f(x, y) = x + y
surf(x, y, x + y), shading interp;         % surface plot of f
theta = linspace(0, 2*pi);
[x1, y1] = pol2cart(theta, 1.0);           % the unit-circle constraint curve
plot3(x1, y1, x1 + y1, 'linewidth', 2, 'color', 'k');              % constraint lifted onto the surface
plot3(sqrt(2)/2, sqrt(2)/2, sqrt(2), 'bo', 'markerfacecolor', 'b', 'markersize', 6);  % constrained maximum
set(gca, 'PlotBoxAspectRatio', [1 1 1]);   % original aspect-ratio values lost in transcription; [1 1 1] assumed
set(gca, 'gridalpha', 0.5, 'GridLineStyle', '--', 'FontSize', 22);
set(gca, 'zlim', [-6 4], 'XLim', [-2 2], 'YLim', [-2 2]);
set(gca, 'xtick', [-2:1:2], 'YTick', [-2:1:2], 'ZTick', [-6:2.0:4]);
colormap 'jet';
view(49, 12)                               % adjust view so it is easy to see
14

15 Aside: Derivative Interpretation. The slope of the tangent line of a curve gives the direction we should travel to stay on the curve. Recall the definition of the derivative evaluated at a point x0: f'(x0) = lim as Δx → 0 of [f(x0 + Δx) − f(x0)] / Δx, i.e., rise over run. In the limit as Δx → 0 at a point, the function must become linear at that point; otherwise the function would not be differentiable. This is because the only way the function would not be linear as Δx → 0 at the point would be if the function had a corner there, and, if so, the derivative would not be well defined at that point, and the function would not be differentiable there. Starting from the point at which the derivative is taken, x0, the slope of the tangent line gives the rise and the run we should take to get to the next infinitesimally close point of the function, and hence stay on the function. 15

16 Moving along the Tangent Line. Imagine you are at one of the points marked in the figure, and you make an infinitesimally small step along the constraint curve, i.e., along the tangent line of the constraint curve at that point. Your movement will keep you on the constraint curve, since the slope of the tangent line of a curve gives the direction we should travel to stay on the curve. (Figure: constraint level curve.) 16

17 Your infinitesimally small movement will cause one of two possible outcomes: (1) you move parallel with, and along, a level curve of f (for example, as shown at one of the marked points, you move parallel with and along that level curve); or (2) you cross over a level curve of f (for example, as shown at the other marked point, you move across one or more level curves of f). 17

18 Consider an Intersection Point Where the Tangents are Different. In this case, a movement to the right will cross the level curve and will touch another level curve, which represents an increase in f. 18

19 Cannot Be an Extremum. Since your movement touches another level curve while staying on the constraint curve (i.e., your movement takes you to a valid point in the feasible region on the constraint curve), the point under consideration cannot be a maximum point, because the function's value at the new point of intersection is greater. In other words, starting from the original point and moving along the feasible region, one finds another point in the feasible region that has a greater function value. Therefore, the original point is not an extreme point. 19

20 Tangent Lines are Different at This Point. Notice that the tangent line of the objective level curve is different than the tangent line of the constraint level curve at this point. Also, note that if you were to move along the tangent line of the objective level curve, your movement would take you off the constraint level curve, and hence take you out of the feasible region, i.e., the constraint would no longer be satisfied. (Figure annotations: tangent of the objective level curve and tangent of the constraint level curve at the point.) 20

21 Conclusion Regarding Intersection Point. In general, a point cannot be a critical point if the slope of the tangent line of the constraint level curve is different than the slope of the tangent line of the objective level curve at the intersection point. If the slopes are different at an intersection point, then that point cannot be an extremum. 21

22 Considering a Touching Point Where the Tangents are Equal. Previously, we considered a point where the tangent of the constraint level curve and the tangent of a level curve of f were different. Next, consider where your movement takes you in the case of a touching point, where the tangents are equal. (Figure: tangents of the objective and constraint level curves at the touching point.) 22

23 Considering a Touching Point Where the Tangents are Equal. Once again, consider moving along the tangent line of the constraint curve, but now in more detail. Say you took an infinitesimally small step from one point to a neighboring point along the tangent line of the constraint curve, at the touching point. (Figure: tangent of the constraint curve at the touching point.) 23

24 Considering a Touching Point Where the Tangents are Equal. This infinitesimally small step along the tangent line keeps you on the constraint curve, staying in the feasible region. Also, since the tangents are equal, you can move along the objective level curve using the same step. This keeps you on the objective level curve, and does not change the value of the function f. (Figure: tangents of the objective and constraint level curves at the touching point.) 24

25 Touching Point is a Solution, Tangents are Equal. Therefore, a local extremum occurs at a point where the tangent of the constraint curve is equal to the tangent of a level curve of f, because we assume an extremum exists, and the slopes can either be different or the same; and we have shown that when the slopes are different at an intersection point, that point cannot be an extremum. Therefore, an extremum exists at a point where the slopes are the same. In general, if the slopes of the tangents at a touching point of the constraint level curve and an objective level curve are equal, then the touching point is a critical point in the constrained extremum problem. In other words: in a constrained maximization or minimization problem, we are constrained to finding an extremum point of f considering only those points that satisfy the constraint. The extreme value occurs at a point where the objective level curve touches, but does not cross, the constraint level curve. At that point, the tangent of the constraint level curve is equal to the tangent of the objective level curve; the curves touch but do not cross, and at this point the extreme value of the function is attained. 25

26 Next, we use the fact that at a solution point the slope of the tangent line of the constraint level curve is equal to the slope of the tangent line of the objective level curve, to derive the Lagrange multiplier constrained optimization equation. We will show that the gradient of the objective function can be written as a scalar multiple of the gradient of the constraint function, i.e., ∇f = λ∇g. 26

27 Known: the slope of the tangent line of the constraint level curve is equal to the slope of the tangent line of the objective level curve at the critical point; their normal vectors are parallel. Known: the gradient of the objective function is perpendicular to the tangent line of the objective level curve at the critical point; therefore, the gradient ∇f is the normal vector of the objective level curve at the critical point. Known: the gradient of the constraint function is perpendicular to the tangent line of the constraint level curve at the critical point; therefore, the gradient ∇g is the normal vector of the constraint level curve at the critical point. This means the gradient of the objective function is related to the gradient of the constraint function through a scalar multiple λ. Note: λ can be either positive or negative. Therefore, we can write ∇f = λ∇g, with the gradients pointing in the same direction for positive λ and in opposite directions for negative λ. (Figure: tangent at the critical point and the normal vectors.) 27

28 At a non-solution point, the tangent of the constraint curve is not parallel to the tangent of the objective level curve. At a solution point, the tangent of the constraint curve is parallel to the tangent of the objective level curve. At a solution point, since the tangents are parallel, the normal of the constraint level curve and the normal of the objective level curve are also parallel. This means that at the solution point, the gradient of the objective function is either parallel or anti-parallel to the gradient of the constraint level curve. This means ∇f is related to ∇g through a scalar multiple λ. Note: λ can be either positive or negative. Therefore, we can write ∇f = λ∇g, with the gradients pointing in the same direction for positive λ and in opposite directions for negative λ. (Figure: tangent at the solution point and the normal vectors.) 28
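
As a concrete check of this condition (assuming the running example f(x, y) = x + y, g(x, y) = x^2 + y^2 - 1 from the earlier slides):

    \nabla f = (1, 1), \qquad \nabla g = (2x, 2y), \qquad \nabla f = \lambda \nabla g \;\Rightarrow\; 1 = 2\lambda x, \quad 1 = 2\lambda y
    \Rightarrow\; x = y = \frac{1}{2\lambda}, \qquad x^2 + y^2 = 1 \;\Rightarrow\; \lambda = \pm\frac{1}{\sqrt{2}}, \qquad (x, y) = \pm\left(\frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2}\right)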

29 Thought Experiment: Exhaustive Search of Extrema. Do the following for every point on the constraint level curve, i.e., for every point in the feasible region. Imagine you are at a point of the constraint level curve, and take note of the value of the objective level curve at that point. You take an infinitesimally small step on the curve to get to the next point, and so you stay on the curve, i.e., you stay in the feasible region. You take note of the value of the objective level curve at the new point. If the value of the objective level curve at the new point is different than the value at the original point, then the original point cannot be an extremum; you will note that the slopes of the tangent lines of the objective and constraint curves at that point are different. If the value of the objective level curve at the new point is the same as the value at the original point, then the original point is an extremum; you will note that the slopes of the tangent lines of the two curves at that point are the same. 29
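
This thought experiment can be mimicked numerically (a base-MATLAB sketch, again assuming the x + y objective on the unit circle): parametrize the constraint curve, evaluate the objective along it, and keep the best point.

    theta = linspace(0, 2*pi, 100000);    % walk around the constraint curve x^2 + y^2 = 1
    fvals = cos(theta) + sin(theta);      % f(x, y) = x + y evaluated on the curve
    [fmax, k] = max(fvals);
    fprintf('max f = %.4f at (x, y) = (%.4f, %.4f)\n', fmax, cos(theta(k)), sin(theta(k)));
    % prints approximately max f = 1.4142 at (x, y) = (0.7071, 0.7071)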

30 Lagrange Optimization Equation. The above can be written as ∇f − λ∇g = 0, i.e., the partial derivatives of f − λg with respect to x and y are zero. Undoing the differentiation and removing the setting-to-zero procedure leaves the quantity f − λg. Since λ can be either positive or negative, we can now write the Lagrangian as L(x, y, λ) = f(x, y) + λ g(x, y). The λ is called the Lagrange multiplier. 30

31 Lagrange Minimization/Maximization Procedure. We set the partial derivatives of the Lagrangian to zero (∂L/∂x = 0, ∂L/∂y = 0, ∂L/∂λ = 0), and then find the optimal values of the variables that maximize (or minimize) the function. Notice that the first two of these three equations comprise the gradient equation derived earlier, and that the last equation extracts the constraint g(x, y) = 0. 31

32 (Basic) Lagrange Multiplier Method. Consider a basic constrained optimization (maximization or minimization) problem: optimize f(x, y) subject to g(x, y) = 0. The formulation of the basic Lagrange constrained optimization problem is the Lagrangian L(x, y, λ) = f(x, y) + λ g(x, y). We set the partial derivatives of the Lagrangian to zero, and then find the optimal values of the variables that maximize (or minimize) the function. The λ is called the Lagrange multiplier. 32

33 Example Lagrange Computation. Maximize f(x, y) = x + y subject to g(x, y) = x^2 + y^2 − 1 = 0. The formulation of the basic Lagrange constrained optimization problem is L(x, y, λ) = x + y + λ(x^2 + y^2 − 1). Now, setting the partial derivatives to zero and finding the critical points: 1 + 2λx = 0, 1 + 2λy = 0, x^2 + y^2 − 1 = 0, giving the critical points (x, y) = (√2/2, √2/2) and (−√2/2, −√2/2). Substituting the critical points into f, we find f(√2/2, √2/2) = √2 (the constrained maximum) and f(−√2/2, −√2/2) = −√2 (the constrained minimum). 33
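
The same computation can be reproduced symbolically (a sketch assuming the Symbolic Math Toolbox and the example above):

    syms x y lambda
    f = x + y;
    g = x^2 + y^2 - 1;                     % constraint in standard form g = 0
    L = f + lambda*g;                      % the Lagrangian
    sol = solve(gradient(L, [x, y, lambda]) == 0, [x, y, lambda]);
    disp([double(sol.x), double(sol.y), double(sol.lambda)]);
    % the two critical points (in some order): ( sqrt(2)/2,  sqrt(2)/2) -> maximum, f =  sqrt(2)
    %                                          (-sqrt(2)/2, -sqrt(2)/2) -> minimum, f = -sqrt(2)
    % the sign of the reported lambda reflects the f + lambda*g convention; with f - lambda*g it flips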

34 Graphical Interpretation of a Solution. Consider a basic constrained optimization (maximization or minimization) problem: optimize f subject to g = 0, and assume a constrained solution exists. A solution can exist only in the feasible region, i.e., only for points on the constraint curve. For a point in the feasible region, the tangent of the constraint curve and the tangent of the objective level curve will either intersect or touch (cross or not cross); there is no other alternative. (Figure: tangents intersect (cross) vs. tangents touch (do not cross).) On the one hand, at a point where the tangents cross, the point is not a critical point, because if you take an infinitesimally small step along the constraint curve, you stay on the constraint curve, but you meet either a higher or a lower level curve of f; therefore, that point is not an extremum. On the other hand, at a point where the tangents touch, but do not cross, the point is a critical point, because if you take an infinitesimally small step along the constraint curve, you stay on the constraint curve, and you stay on the level curve of the objective function f; therefore, that point must be an extremum. For example, in 2D, we can visualize a solution by drawing the level curves of f and the level curve of g, and observing at which point(s) the tangents touch but do not cross (i.e., do not intersect). A solution exists where the tangents touch but do not cross. 34

35 Lagrangian with Multiple Constraints. Consider an optimization problem with multiple equality constraints: the feasible region is the set of points at which the constraint curves intersect. The Lagrangian may be written as the objective plus a separate multiplier times each constraint, as sketched below. Note that the partial derivatives of the Lagrangian with respect to the multipliers reveal the constraints. The gradient of f is then a vector sum of scalar multiples of the gradients of the constraint curves. 35
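
In symbols, a sketch of the multi-constraint Lagrangian (the multiplier names λ_i are assumed here, since the slide's own symbols did not survive transcription):

    \mathcal{L}(x, y, \lambda_1, \dots, \lambda_m) = f(x, y) + \sum_{i=1}^{m} \lambda_i\, g_i(x, y)
    \frac{\partial \mathcal{L}}{\partial \lambda_i} = g_i(x, y) = 0, \qquad \nabla_{(x,y)} \mathcal{L} = \nabla f + \sum_{i=1}^{m} \lambda_i \nabla g_i = 0 \;\Rightarrow\; \nabla f = \sum_{i=1}^{m} (-\lambda_i)\, \nabla g_i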

36 Lagrange with Inequality Constraints. Consider an optimization problem with an inequality constraint on g(x, y). The Lagrangian with an inequality constraint is written just like the equality-constrained Lagrangian; notice that the equation appears very similar to the Lagrange multiplier method with an equality constraint, except that the multiplier λ is constrained to be non-negative (λ ≥ 0). Why should the multiplier with inequality constraints be limited to λ ≥ 0? 36

37 Lagrange with Inequality Constraints. To show intuitively why this must be the case, first consider the possibilities: 1. No solution exists, and the lines of constant f and the feasible region do not touch or intersect. 2. A solution exists at the boundary of the feasible region. 3. A solution exists inside the feasible region. In the interesting cases where a solution exists, we will show there are two cases: a solution on the boundary, and a solution inside the feasible region. Consider a maximization example with two variables and one inequality constraint: to maximize f subject to the inequality constraint, let us first look at the boundary of the region allowed by the inequality, i.e., where the constraint holds with equality. 37

38 Lagrange with Inequality Constraints: Solution on Boundary. Consider a sketch of the level curves of f and the boundary level curve of the constraint. We assume a solution exists at a point on the boundary; this means that some level curve of f touches the boundary there. Then, we assert that ∇f and ∇g must be parallel (not anti-parallel), and this point will give a maximum (not a minimum) of f for the region, because of the following argument. (Figure: feasible region.) 38

39 Solution on Boundary: Gradient Points Away From Feasible Region. At the point where the level curve of f and the constraint boundary touch, the gradient ∇f would be pointing away from the feasible region, since: the gradient always points in the direction of maximum increase of a function, and the function decreases as you move towards the inside of the feasible region. (Figure: feasible region.) 39

40 Solution on Boundary: ∇f and ∇g Point in the Same Direction. Now, let's determine the direction of ∇g relative to ∇f: if ∇f and ∇g were pointing in opposite directions, then ∇f would be pointing inwards towards the feasible region (since ∇g points outwards, away from it), meaning f would have greater values inside the feasible region. If we were then to find another point, inside the feasible region, where f touches a higher level curve, we would find an increase in f; since we are still in the feasible region, we must conclude that the point on the boundary cannot be a critical point (not a solution). This contradicts our initial assumption that the boundary point is a solution. (Figure: feasible region.) 40

41 Solution on Boundary: ∇f and ∇g Point in the Same Direction. Therefore, ∇f and ∇g must be parallel and pointing in the same direction (i.e., not anti-parallel). This implies λ ≥ 0. (Figure: feasible region.) 41

42 Solution on Boundary Gives a Maximum of f. The gradient indicates the function increases away from the feasible region. If a solution exists on the boundary, then this solution must be a maximum, due to the feasible region being at its outer limit. (Figure: feasible region.) 42

43 Solution on Boundary: Effective and Binding Constraint. In the case where a critical point exists on the boundary, the inequality constraint is said to be effective and is called a binding constraint, and the constraint holds with equality at the solution. (Figure: feasible region.) 43

44 Summary for Solution on Boundary. Consider a sketch of the level curves of f and the boundary level curve of the constraint g. We assume a solution exists at a point where a level curve of f touches the boundary. Then ∇f and ∇g must be parallel (not anti-parallel), and this point will give a maximum (not a minimum) of f for the region, because of the following argument. At the point where the level curve of f and the boundary touch, the gradient ∇f would be pointing away from the feasible region, since f decreases as you move towards the inside of the feasible region. If ∇f and ∇g were pointing in opposite directions, then ∇f would be pointing inwards towards the feasible region, meaning f would have greater values inside the feasible region. If we were then to find another point, inside the feasible region, where f touches a higher level curve, we would find an increase in f; since we are still in the feasible region, we must conclude that the point on the boundary cannot be a critical point (not a solution). This contradicts our initial assumption. Therefore, ∇f and ∇g must be parallel and pointing in the same direction (i.e., not anti-parallel). This implies that λ ≥ 0. In the case where a critical point exists on the boundary, the inequality constraint is said to be effective and is called a binding constraint, and the constraint holds with equality. 44

45 Lagrange with Inequality Constraints: Solution Within Boundary. In the case where a critical point exists inside the feasible region (i.e., strictly within the boundary), we can consider any point within the feasible region to determine the extrema of f; i.e., the problem is unconstrained, if we assume a solution exists within the feasible region. In other words, the optimizer will not find a solution at the boundary, because we assume a solution exists within the boundary; and within the boundary the optimizer can consider any point, i.e., it is unconstrained within the boundary. This implies λ = 0, which removes the constraint term. In this case we say the constraint is not binding, or the constraint is ineffective. The maximum is then found by looking for the unconstrained maximum of f, assuming that we look only inside the feasible region. In this case λ = 0. 45

46 Lagrange with Inequality Constraints: Summary. The Lagrangian with inequality constraints can be written just like the equality-constrained Lagrangian, but with the multiplier restricted to λ ≥ 0. If the extremum occurs at the boundary of the constraint (the constraint is binding and effective), the constraint holds with equality and λ ≥ 0. If the constraint is not binding and is ineffective, then λ = 0 and the above reduces to the objective function alone, i.e., unconstrained optimization. Argument: we assume that there is a solution, and that the solution does not exist at the boundary. Therefore, any critical point the optimizer finds must be inside the feasible region, due to the constraint. Finally, the optimizer is free to choose any point within the feasible region to find a solution. 46
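
As a numerical illustration of the binding case (a sketch that assumes the Optimization Toolbox's fmincon and the x + y objective with the inequality constraint x^2 + y^2 <= 1; fmincon minimizes, so the objective is negated):

    fun = @(v) -(v(1) + v(2));                       % maximize x + y  <=>  minimize -(x + y)
    nonlcon = @(v) deal(v(1)^2 + v(2)^2 - 1, []);    % inequality c(v) <= 0, no equality constraints
    opts = optimoptions('fmincon', 'Display', 'off');
    [v, fval, ~, ~, lag] = fmincon(fun, [0; 0], [], [], [], [], [], [], nonlcon, opts);
    fprintf('solution (%.4f, %.4f), f = %.4f, multiplier = %.4f\n', v(1), v(2), -fval, lag.ineqnonlin);
    % the constraint is binding at the solution (x^2 + y^2 = 1) and the reported multiplier is >= 0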

47 The Lagrange optimization method has two forms, one called the primal optimization method and the other called the dual optimization method. In some applications it is more suitable to use the dual optimization method, as it leads to a simpler and quicker solution, while in other applications the primal method is better. In the following we show that, under certain conditions, the primal and dual optimization methods are equivalent and lead to the exact same solution to an optimization problem. As an example use of the primal and dual methods being equivalent, we can show that the condition that the inequality multipliers be non-negative is also true for a minimization problem. Note that our intuitive verification of that condition was based on the assumption of a maximization problem. 47

48 Lagrange Optimization: Basic Formulation. Consider an optimization problem of the following form: minimize the objective function subject to a set of equality constraints. The basic Lagrange formulation (Lagrangian) for this problem is the objective function plus a weighted sum of the equality constraint functions. The weights are called the Lagrange multipliers for the equality constraints. We would then find and set the Lagrangian's partial derivatives to zero; finally, solve for the variables and the multipliers, and then locate the minima. 48

49 Lagrange Optimization: Generalized Formulation. Consider the following, which is called the primal optimization problem: minimize the objective function subject to a set of inequality constraints and a set of equality constraints. The generalized Lagrangian is given by the objective function plus a weighted sum of the constraint functions, as sketched below. The two sets of weights (one per inequality constraint and one per equality constraint) are called the Lagrange multipliers. 49
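
A sketch of this formulation in a commonly used notation (the symbols w for the variables and α_i, β_j for the multipliers are assumed here; the slide's own symbols did not survive transcription):

    \min_{w} f(w) \quad \text{subject to} \quad g_i(w) \le 0,\; i = 1, \dots, k, \qquad h_j(w) = 0,\; j = 1, \dots, l
    \mathcal{L}(w, \alpha, \beta) = f(w) + \sum_{i=1}^{k} \alpha_i\, g_i(w) + \sum_{j=1}^{l} \beta_j\, h_j(w), \qquad \alpha_i \ge 0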

50 Deriving an Alternate Expression for the Primal Optimization Problem We will now derive an alternative expression for the primal optimization problem. We call this the min max expression for the primal optimization problem. 50

51 Min Max Expression for Primal Optimization Problem. Consider the following quantity: the maximum of the generalized Lagrangian over the multipliers (with the inequality multipliers restricted to be non-negative), taken for a fixed choice of the primal variables. If the choice of the primal variables violates any of the primal constraints, then this quantity is infinite. For instance, if an inequality constraint is violated, its multiplier can be chosen arbitrarily large to maximize the Lagrangian, and therefore the quantity is infinite. In addition, if an equality constraint is violated, its multiplier can be chosen with the appropriate sign and arbitrarily large magnitude to maximize the Lagrangian, and therefore the quantity is infinite. 51

52 Min Max Expression for Primal Optimization Problem. Now, if the choice of the primal variables satisfies the primal constraints, then this quantity equals the objective function value. For instance, if an equality constraint is satisfied, the value of its multiplier is irrelevant, since the corresponding term is zero irrespective of the multiplier. In addition, if an inequality constraint is satisfied, its multiplier will be chosen as zero to maximize the Lagrangian. Taken together, the maximization yields the objective function value whenever the constraints are satisfied. 52

53 Min Max Expression for Primal Optimization Problem. Now, if the choice of the primal variables satisfies the primal constraints, then the quantity equals the objective value. Note also that if the inequality multipliers were allowed to be negative, then, for a strictly satisfied inequality constraint, the multiplier would be chosen as negative infinity to maximize the Lagrangian, in which case the quantity would be infinite, and so we would not have a solution, even for good choices of the primal variables. This provides further reason for requiring that the inequality multipliers be non-negative. 53

54 Min Max Expression for Primal Optimization Problem. Therefore, the maximization over the multipliers equals the objective function when the constraints are satisfied, and is infinite otherwise. Next, consider minimizing this quantity over the primal variables. This means that, after performing the maximization over the multipliers, which we found to equal the objective function given that the constraints are satisfied, we then minimize the resulting function by finding the optimal value of the primal variables. I.e., given that the constraints are satisfied, this min max problem reduces to minimizing the objective function over the feasible region. 54

55 Min Max Expression for Primal Optimization Problem In other words, optimization problem. is the same as our original primal Min max representation Original primal optimization problem Finally, define the optimal value of as : We call this the value of the primal optimization problem. 55

56 Deriving a Dual Expression for the Primal Optimization Problem We will now derive the dual expression of the generalized Lagrange optimization formulation. We call the dual expression the Max min expression. We will relate the dual expression to the primal expression, and hence show that the dual expression can also be used to express the generalized Lagrange optimization formulation. Finally, we will show that under certain conditions, the dual expression is equivalent to the primal expression, and thus, either of them can be used to solve an optimization problem posed as a generalized Lagrange optimization formulation. 56

57 The Dual Max Min Expression. Consider the following quantity: the minimum of the generalized Lagrangian over the primal variables, taken for a fixed choice of the multipliers. Note that, whereas in the definition of the primal quantity (below) we were optimizing (maximizing) with respect to the multipliers, here (above) we are minimizing with respect to the primal variables. 57

58 The Dual Max Min Expression. Now, add a maximization over the multipliers (with the inequality multipliers restricted to be non-negative): this is exactly the same as our primal problem, except that the order of the max and the min is now exchanged. Finally, define the optimal value of the max min problem; we call this the value of the dual optimization problem, as sketched below. 58
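
Again in the assumed notation, the max min (dual) quantities are:

    \theta_D(\alpha, \beta) = \min_{w} \mathcal{L}(w, \alpha, \beta), \qquad d^{*} = \max_{\alpha \ge 0,\, \beta} \theta_D(\alpha, \beta) = \max_{\alpha \ge 0,\, \beta} \min_{w} \mathcal{L}(w, \alpha, \beta)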

59 Primal and Dual Relationship. It can be shown (see the next two slides) that the value of the dual problem is less than or equal to the value of the primal problem. Furthermore, it can be shown that, under certain conditions, the two values are equal. This means that under certain conditions, we can solve a given optimization problem by using either the primal or the dual method, and we'll pick the most suitable one. 59

60 2-D Case PROOF: For every pair of values, the minimum over the first variable is no larger than the function value, which in turn is no larger than the maximum over the second variable; hence the LHS quantity is no larger than the RHS quantity for every choice of the two variables. Since we can choose any value on the LHS, we can take the maximum over it there; on the RHS, since the inequality holds for every choice, this implies that we can choose the value that minimizes the RHS: max min ≤ min max. 60

61 What does the following mean? For each value of the second variable, we find the value of the first variable that minimizes the function. This will generate many answers (i.e., minima of the function), one for each value of the second variable. Does the following make sense then? Yes, because for each value of the second variable you choose, the term represents the minimum value of the function over all values of the first variable. 61

62 Our Case PROOF: The inequality holds for every choice of the multipliers, which implies that we can choose a combination of multipliers that maximizes both sides, even with the restriction that the inequality multipliers be non-negative; it also implies that we can choose the primal variables that minimize the RHS. 62
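
The relationship d* ≤ p* follows from the standard max min argument, sketched here in the assumed notation: for any fixed w and any fixed (α, β) with α ≥ 0,

    \theta_D(\alpha, \beta) = \min_{w'} \mathcal{L}(w', \alpha, \beta) \;\le\; \mathcal{L}(w, \alpha, \beta) \;\le\; \max_{\alpha' \ge 0,\, \beta'} \mathcal{L}(w, \alpha', \beta') = \theta_P(w)
    \Rightarrow\; d^{*} = \max_{\alpha \ge 0,\, \beta} \theta_D(\alpha, \beta) \;\le\; \min_{w} \theta_P(w) = p^{*}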

64 Recall: Lagrange Optimization Generalized Formulation. Consider the following, which is called the primal optimization problem: minimize the objective function subject to the inequality and equality constraints. The generalized Lagrangian is given by the objective function plus a weighted sum of the constraint functions (see slide 49). The two sets of weights are called the Lagrange multipliers. 64

65 Conditions for Equality of the Primal and Dual Values: Given without Proof. Let the optimal primal variables be the domain points that optimize the objective function. Under certain assumptions, there must exist corresponding optimal multipliers so that the primal and dual values are equal. Once determined, the optimal variables and multipliers will satisfy the Karush-Kuhn-Tucker (KKT) conditions, which are as follows (see the sketch below). 65
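
In the assumed notation, the KKT conditions listed on the following slides are:

    \frac{\partial}{\partial w_i} \mathcal{L}(w^{*}, \alpha^{*}, \beta^{*}) = 0, \qquad \frac{\partial}{\partial \beta_j} \mathcal{L}(w^{*}, \alpha^{*}, \beta^{*}) = 0 \;\; (\text{i.e., } h_j(w^{*}) = 0)
    \alpha_i^{*}\, g_i(w^{*}) = 0 \;\; (\text{complementary slackness}), \qquad g_i(w^{*}) \le 0, \qquad \alpha_i^{*} \ge 0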

66 The first two conditions (below) follow from the Lagrange optimization procedure: we set the partial derivatives of the Lagrangian (with respect to the primal variables and the equality-constraint multipliers) to zero, and solve for the variables. Of course, when we solve for the variables and find the optimal values, they should satisfy these two KKT conditions. 66

67 The last two conditions (below) follow from the initial constraints of the problem: the first of them is the initial inequality constraint itself, and the second is the requirement that the inequality multipliers be non-negative. Of course, when we solve for the variables and find the optimal values, the values should satisfy these conditions. 67

68 The third condition (below) follows from the analysis of the primal optimization problem. Recall that in the derivation of the primal optimization problem we wanted to perform the maximization of the Lagrangian over the multipliers. For that term to be maximal, it is required that the product of each inequality multiplier and its constraint value be zero (complementary slackness), subject to the multiplier being non-negative. Furthermore: we note that in the case where the constraint is active (i.e., a solution exists at the boundary, where the constraint holds with equality), the condition is satisfied automatically and the multiplier is free to vary. When the constraint is inactive and holds strictly, then the multiplier must be zero. 68

69 [1] J. Kitchin, "Matlab in Chemical Engineering at CMU," [Online]. Available: [Accessed 20 February 2015]. [2] C. A. Jones. [Online]. Available: [Accessed 23 February 2015]. [3] D. Klein, "Dan Klein's Homepage," [Online]. Available: [Accessed 23 February 2015]. [4] J. Beggs, "Introduction to the Lagrange Multiplier," [Online]. Available: [Accessed 2 March 2015]. 69


Outcomes List for Math Multivariable Calculus (9 th edition of text) Spring Outcomes List for Math 200-200935 Multivariable Calculus (9 th edition of text) Spring 2009-2010 The purpose of the Outcomes List is to give you a concrete summary of the material you should know, and

More information

Appendix E Calculating Normal Vectors

Appendix E Calculating Normal Vectors OpenGL Programming Guide (Addison-Wesley Publishing Company) Appendix E Calculating Normal Vectors This appendix describes how to calculate normal vectors for surfaces. You need to define normals to use

More information

MEI Desmos Tasks for AS Pure

MEI Desmos Tasks for AS Pure Task 1: Coordinate Geometry Intersection of a line and a curve 1. Add a quadratic curve, e.g. y = x² 4x + 1 2. Add a line, e.g. y = x 3 3. Select the points of intersection of the line and the curve. What

More information

AB Calculus: Extreme Values of a Function

AB Calculus: Extreme Values of a Function AB Calculus: Extreme Values of a Function Name: Extrema (plural for extremum) are the maximum and minimum values of a function. In the past, you have used your calculator to calculate the maximum and minimum

More information

Chapter 7. Linear Programming Models: Graphical and Computer Methods

Chapter 7. Linear Programming Models: Graphical and Computer Methods Chapter 7 Linear Programming Models: Graphical and Computer Methods To accompany Quantitative Analysis for Management, Eleventh Edition, by Render, Stair, and Hanna Power Point slides created by Brian

More information

3/1/2016. Calculus: A Reminder. Calculus: A Reminder. Calculus: A Reminder. Calculus: A Reminder. Calculus: A Reminder. Calculus: A Reminder

3/1/2016. Calculus: A Reminder. Calculus: A Reminder. Calculus: A Reminder. Calculus: A Reminder. Calculus: A Reminder. Calculus: A Reminder 1 Intermediate Microeconomics W3211 Lecture 5: Choice and Demand Introduction Columbia Universit, Spring 2016 Mark Dean: mark.dean@columbia.edu 2 The Stor So Far. 3 Toda s Aims 4 We have now have had a

More information

Chapter 2 An Introduction to Linear Programming

Chapter 2 An Introduction to Linear Programming Chapter 2 An Introduction to Linear Programming MULTIPLE CHOICE 1. The maximization or minimization of a quantity is the a. goal of management science. b. decision for decision analysis. c. constraint

More information

Support Vector Machines. James McInerney Adapted from slides by Nakul Verma

Support Vector Machines. James McInerney Adapted from slides by Nakul Verma Support Vector Machines James McInerney Adapted from slides by Nakul Verma Last time Decision boundaries for classification Linear decision boundary (linear classification) The Perceptron algorithm Mistake

More information

4 Integer Linear Programming (ILP)

4 Integer Linear Programming (ILP) TDA6/DIT37 DISCRETE OPTIMIZATION 17 PERIOD 3 WEEK III 4 Integer Linear Programg (ILP) 14 An integer linear program, ILP for short, has the same form as a linear program (LP). The only difference is that

More information

BCN Decision and Risk Analysis. Syed M. Ahmed, Ph.D.

BCN Decision and Risk Analysis. Syed M. Ahmed, Ph.D. Linear Programming Module Outline Introduction The Linear Programming Model Examples of Linear Programming Problems Developing Linear Programming Models Graphical Solution to LP Problems The Simplex Method

More information

(1) Given the following system of linear equations, which depends on a parameter a R, 3x y + 5z = 2 4x + y + (a 2 14)z = a + 2

(1) Given the following system of linear equations, which depends on a parameter a R, 3x y + 5z = 2 4x + y + (a 2 14)z = a + 2 (1 Given the following system of linear equations, which depends on a parameter a R, x + 2y 3z = 4 3x y + 5z = 2 4x + y + (a 2 14z = a + 2 (a Classify the system of equations depending on the values of

More information