Lagrange multipliers
From Wikipedia, the free encyclopedia


In mathematical optimization problems, the method of Lagrange multipliers, named after Joseph Louis Lagrange, finds the local extrema of a function of several variables subject to one or more constraints. The method reduces a problem in n variables with k constraints to a solvable problem in n + k variables with no constraints: it introduces a new unknown scalar variable, the Lagrange multiplier, for each constraint, and forms a linear combination involving the multipliers as coefficients. The justification can be carried out in the standard way using partial differentiation, with either total differentials or their close relatives, the chain rules. The objective is to find the conditions under which, for some implicit function, the derivative of a function with respect to the independent variables equals zero for some set of inputs.

Fig. 1. Drawn in green is the locus of points satisfying the constraint g(x,y) = c. Drawn in blue are contours of f. Arrows represent the gradient, which points in a direction normal to the contour.

Contents

1 Introduction
2 The method of Lagrange multipliers
3 Example
  3.1 Very simple example
  3.2 Another example
4 Economics
5 See also
6 External links

Introduction

Consider a two-dimensional case. Suppose we have a function f(x,y) to maximize subject to

    g(x,y) = c,

where c is a constant. We can visualize contours of f given by

    f(x,y) = d_n

for various values of d_n, and the contour of g given by g(x,y) = c.

Suppose we walk along the contour line with g = c. In general the contour lines of f and g may be distinct, so traversing the contour line for g = c could cross the contour lines of f. This is equivalent to saying that while moving along the contour line for g = c the value of f can vary. Only when the contour line for g = c touches contour lines of f tangentially do we not increase or decrease the value of f, that is, when the contour lines touch but do not cross. This occurs at the constrained local extrema and the constrained inflection points of f. A familiar example can be found on weather maps, with their contour lines for temperature and pressure: the constrained extrema occur where the superposed maps show touching lines (isopleths).

Geometrically, the tangency condition says that the gradients of f and g are parallel vectors at the maximum, since the gradients are always normal to the contour lines. Introducing an unknown scalar λ, we solve

    ∇[f(x,y) + λ(g(x,y) − c)] = 0

for λ ≠ 0. Once values for λ are determined, we are back to the original number of variables and can find extrema of the new unconstrained function

    F(x,y) = f(x,y) + λ(g(x,y) − c)

in traditional ways. That is, F(x,y) = f(x,y) for all (x,y) satisfying the constraint, because g(x,y) − c equals zero on the constraint, and the extrema of F(x,y) all lie on g(x,y) = c.

The method of Lagrange multipliers

Let f be a function defined on R^n, and let the constraints be given by g_k(x) = 0 (perhaps obtained by moving the constant to the left-hand side, as in g_k(x) − c = 0). Now define the Lagrangian, Λ, as

    Λ(x, λ) = f(x) + Σ_k λ_k g_k(x).

Observe that both the optimization criterion and the constraints g_k are compactly encoded as stationary points of the Lagrangian:

    ∇_x Λ = 0   if and only if   ∇_x f = −Σ_k λ_k ∇_x g_k,

and

    ∂Λ/∂λ_k = 0   if and only if   g_k(x) = 0.

Often the Lagrange multipliers have an interpretation as some salient quantity of interest. To see why this might be the case, observe from the definition of Λ that

    ∂Λ/∂g_k = λ_k.
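The recipe above (form Λ, then look for stationary points) can be sketched in a few lines of plain Python. The example problem here, minimizing f(x, y) = x² + y² subject to x + y = 1, is a hypothetical illustration not taken from this article; because f is quadratic and the constraint is linear, the stationarity conditions of Λ form a linear system in the n + k = 3 unknowns (x, y, λ).

```python
# Sketch (hypothetical example problem, not from the article):
# minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0.
# The Lagrangian is L(x, y, l) = f + l*g, and its stationarity conditions
#   dL/dx = 2x + l = 0
#   dL/dy = 2y + l = 0
#   dL/dl = x + y - 1 = 0
# form a linear system A v = b in the unknowns v = (x, y, l).

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (M[r][n] - sum(M[r][c] * v[c] for c in range(r + 1, n))) / M[r][r]
    return v

A = [[2.0, 0.0, 1.0],   # dL/dx = 2x + l
     [0.0, 2.0, 1.0],   # dL/dy = 2y + l
     [1.0, 1.0, 0.0]]   # dL/dl = x + y - 1
b = [0.0, 0.0, 1.0]

x, y, lam = solve_linear(A, b)
print(x, y, lam)  # -> 0.5 0.5 -1.0
```

The constrained minimum is at (1/2, 1/2) with multiplier λ = −1, as expected by symmetry.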

Thus, λ_k is the rate of change of the quantity being optimized as a function of the constraint variable. As examples, in Lagrangian mechanics the equations of motion are derived by finding stationary points of the action, the time integral of the difference between kinetic and potential energy. Thus, the force on a particle due to a scalar potential, F = −∇V, can be interpreted as a Lagrange multiplier determining the change in action (transfer of potential to kinetic energy) following a variation in the particle's constrained trajectory. In economics, the optimal profit to a player is calculated subject to a constrained space of actions, where a Lagrange multiplier is the value of relaxing a given constraint (e.g. through bribery or other means).

The method of Lagrange multipliers is generalized by the Karush-Kuhn-Tucker conditions.

Example

Very simple example

Suppose you want to find the maximum values of

    f(x,y) = x²y

with the condition that the x and y coordinates lie on the circumference of the unit circle:

    x² + y² = 1.

As there is just a single condition, we use only one multiplier, say λ. Use the constraint to define a function g(x,y):

    g(x,y) = x² + y² − 1.

Notice that g(x,y) equals zero everywhere on the constraint, so any multiple of g(x,y) may be added to f(x,y) without changing the values of f on the constraint. Let

    Φ(x,y,λ) = f(x,y) + λg(x,y) = x²y + λ(x² + y² − 1).

The critical values of Φ occur where its gradient is zero. The partial derivatives are

    ∂Φ/∂x = 2xy + 2λx = 0,       (i)
    ∂Φ/∂y = x² + 2λy = 0,        (ii)
    ∂Φ/∂λ = x² + y² − 1 = 0.     (iii)

Equation (i) implies λ = −y (assuming x ≠ 0). Substituting this result into equation (ii) gives x² − 2y² = 0, so x² = 2y². Substituting into equation (iii) and solving for y gives 3y² = 1, hence y = ±√(1/3) and x = ±√(2/3).
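This solution can be cross-checked numerically (a sketch, not part of the original article): parametrize the unit circle as (cos t, sin t) and scan f = x²y over a fine grid of angles; the best point found should agree with the analytic answer x² = 2/3, y = √(1/3).

```python
# Brute-force sanity check of the constrained maximum of f(x, y) = x^2 * y
# on the unit circle, using the parametrization (x, y) = (cos t, sin t).
import math

def f(x, y):
    return x * x * y

steps = 200_000
best_t = max((2 * math.pi * i / steps for i in range(steps)),
             key=lambda t: f(math.cos(t), math.sin(t)))
x, y = math.cos(best_t), math.sin(best_t)

# Analytic answer: x^2 = 2/3 ~ 0.667, y = (1/3)^(1/2) ~ 0.577
print(round(x * x, 3), round(y, 3))  # -> 0.667 0.577
```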

Clearly, then, there are four critical points:

    (√(2/3), √(1/3)),  (−√(2/3), √(1/3)),  (√(2/3), −√(1/3)),  (−√(2/3), −√(1/3)).

To classify these as relative maxima or minima, evaluate f at each point. Since x² = 2/3 at all four points, f(x,y) = x²y = (2/3)y, which is largest where y is positive. Thus (√(2/3), √(1/3)) and (−√(2/3), √(1/3)) are the maximum points, with f = 2/(3√3); the other two points are the minima.

Another example

Suppose we wish to find the discrete probability distribution p₁, ..., pₙ with maximal information entropy. Then

    f(p₁, ..., pₙ) = −Σ_{k=1}^{n} p_k log₂ p_k.

Of course, the sum of these probabilities equals 1, so our constraint is g(p) = 1 with

    g(p₁, ..., pₙ) = Σ_{k=1}^{n} p_k.

We can use Lagrange multipliers to find the point of maximum entropy. For all k from 1 to n, we require that

    ∂/∂p_k [ f + λ(g − 1) ] = 0,

which gives

    ∂/∂p_k [ −Σ_j p_j log₂ p_j + λ(Σ_j p_j − 1) ] = 0.

Carrying out the differentiation of these n equations, we get

    −(1/ln 2)(ln p_k + 1) + λ = 0.

This shows that all p_k are equal (because they depend on λ only). Using the constraint Σ_k p_k = 1, we find p_k = 1/n for every k.
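This conclusion is easy to spot-check numerically (a sketch, not part of the original article): for n = 4, the entropy of the uniform distribution should beat that of any other candidate distribution drawn at random.

```python
# Spot-check: among distributions over n = 4 outcomes, the uniform one
# has the greatest entropy, namely log2(4) = 2 bits.
import math
import random

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

n = 4
uniform = [1 / n] * n
random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    candidate = [v / s for v in w]           # a random distribution
    assert entropy(candidate) <= entropy(uniform) + 1e-12

print(entropy(uniform))  # -> 2.0
```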

Hence, the uniform distribution is the distribution with the greatest entropy.

Economics

Constrained optimization plays a central role in economics. For example, the choice problem for a consumer is represented as one of maximizing a utility function subject to a budget constraint. The Lagrange multiplier has an economic interpretation as the shadow price associated with the constraint, in this case the marginal utility of income.

See also

Karush-Kuhn-Tucker conditions: generalization of the method of Lagrange multipliers.

External links

For references to Lagrange's original work and for an account of the terminology, see the Lagrange Multiplier entry in Earliest known uses of some of the words of mathematics: L (http://members.aol.com/jeff570/l.html)
Applet (http://www-math.mit.edu/18.02/applets/lagrangemultiplierstwovariables.html)
Tutorial and applet (http://www.math.gatech.edu/~carlen/2507/notes/lagmultipliers.html)
Conceptual introduction (http://www.slimy.com/~steuard/teaching/tutorials/lagrange.html) (plus a brief discussion of Lagrange multipliers in the calculus of variations as used in physics)
Lagrange Multipliers without Permanent Scarring (http://www.cs.berkeley.edu/~klein/papers/lagrange-multipliers.pdf) (tutorial by Dan Klein)

Retrieved from "http://en.wikipedia.org/wiki/lagrange_multipliers"
Categories: Multivariate calculus | Optimization | Mathematical economics
This page was last modified 01:01, 29 January 2007. All text is available under the terms of the GNU Free Documentation License.