
Linear Programming
Larry Blume
Cornell University & The Santa Fe Institute & IHS

Linear Programs

The general linear program is a constrained optimization problem in which the objective and the constraints are all described by linear functions:

$$v_P(b) = \max\ a \cdot x \quad \text{s.t.} \quad Ax \le b,\ x \ge 0 \qquad (P)$$

where $a, x \in \mathbb{R}^n$ and the matrix $A$ is $m \times n$. This is the canonical form of the primal problem. The function $v_P(b)$ is the value function for the problem. How can one have inequalities in the other direction, or equality constraints?
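
As a concrete illustration (not part of the original notes), here is a minimal sketch of solving a small canonical-form primal with SciPy; the data $a$, $A$, $b$ are invented for the example, and since scipy.optimize.linprog minimizes, we maximize $a \cdot x$ by minimizing $-a \cdot x$.

```python
# A minimal sketch: solve max a.x s.t. Ax <= b, x >= 0 with SciPy.
# The data below are invented for illustration.
import numpy as np
from scipy.optimize import linprog

a = np.array([3.0, 2.0])                 # objective coefficients
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])               # constraint matrix (m x n)
b = np.array([4.0, 6.0])                 # resource vector

# linprog minimizes, so maximize a.x by minimizing -a.x;
# the default variable bounds are already x >= 0.
res = linprog(-a, A_ub=A, b_ub=b, method="highs")
print("v_P(b) =", -res.fun)              # optimal value of (P): 10.0
print("x* =", res.x)                     # an optimal solution: [2. 2.]
```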

Linear Programs

The standard form of a linear program is

$$v_{P'}(b') = \max\ a' \cdot x' \quad \text{s.t.} \quad A'x' = b',\ x' \ge 0 \qquad (P')$$

Go from the canonical to the standard form by adding slack variables $z$:

$$v_P(b) = \max\ a \cdot x \quad \text{s.t.} \quad Ax + Iz = b,\ x \ge 0,\ z \ge 0 \qquad (P'')$$

where $A$ is $m \times n$ and $I$ is the $m \times m$ identity matrix. The matrix $[A\ I]$ is the augmented matrix for the canonical form (P).
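
The same conversion in code, a sketch reusing the invented example above: adjoining an identity block turns $Ax \le b$ into the equality $[A\ I](x,z) = b$, and the optimum is unchanged.

```python
# Sketch: convert the canonical example above to standard form by
# adjoining slack variables z, then solve the equality-constrained LP.
import numpy as np
from scipy.optimize import linprog

a = np.array([3.0, 2.0])                 # same invented data as before
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])

m, n = A.shape
A_aug = np.hstack([A, np.eye(m)])        # the augmented matrix [A I]
a_aug = np.concatenate([a, np.zeros(m)]) # slacks get zero objective weight

res = linprog(-a_aug, A_eq=A_aug, b_eq=b, method="highs")
x, z = res.x[:n], res.x[n:]
print("x* =", x, " slacks z* =", z, " value =", -res.fun)
```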

Linear Programs

Check that $x$ is feasible for (P) if and only if there is a $z$ such that $(x,z)$ is feasible for (P$''$). Represent problems with inequality constraints in both directions in the canonical and standard forms. Represent minimization problems in both forms. Represent problems with equality constraints in both forms.

Definitions

The objective function is $f(x) = a \cdot x$. The constraint set is $C = \{x : Ax \le b \text{ and } x \ge 0\}$, a convex polyhedron. A solution is a vector $x \in \mathbb{R}^n$. A feasible solution is an element of $C$. An optimal solution is a feasible solution which maximizes the objective function on the set $C$. Give examples of linear programs with a) no feasible solutions, and b) feasible solutions, but no optimal solutions.
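
For this exercise, a sketch with two throwaway examples of my own: linprog reports infeasibility and unboundedness through its status field.

```python
# Sketch: a) an infeasible program, b) a feasible but unbounded one.
from scipy.optimize import linprog

# a) x >= 0 together with x <= -1 has no feasible solution.
res_a = linprog(c=[1.0], A_ub=[[1.0]], b_ub=[-1.0], method="highs")
print("a) status:", res_a.status)   # 2 means infeasible

# b) max x  s.t.  -x <= 1, x >= 0: feasible, but the objective is unbounded.
res_b = linprog(c=[-1.0], A_ub=[[-1.0]], b_ub=[1.0], method="highs")
print("b) status:", res_b.status)   # 3 means unbounded
```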

The Geometry of Linear Programming

Let $C = \{x \ge 0 : Ax = b\}$ (standard form).

Definition: $x$ is a vertex of the polyhedron $C$ iff there is no $y \neq 0$ such that $x + y$ and $x - y$ are both in $C$.

Vertex Theorem: i) A vertex exists. ii) If $v_P(b) < \infty$ and $x \in C$, then there is a vertex $x^* \in C$ such that $a \cdot x^* \ge a \cdot x$.

The proof shows that $C$ has a vertex. Not every form of linear program has a constraint set with a vertex (give an example), and this is why we convert to the standard form.

Fundamental Theorem of Linear Programming

Definition: A solution to a linear programming problem in standard form is a basic solution if and only if the set of all columns $A_j$ of $A$ such that $x_j > 0$ is linearly independent; that is, if the submatrix $A_x$ of $A$ consisting of those columns has full column rank.

Theorem: A feasible solution $x$ is basic if and only if $x$ is a vertex.

Fundamental Theorem: If (P) has a feasible solution, then it has a basic feasible solution. If (P) has an optimal solution, then it has a basic optimal solution.

Proof: The vertex theorem implies that if a feasible solution exists, a vertex exists. There can be only a finite number of basic solutions, so (P) has only a finite number of vertices. The vertex theorem implies that the sup of the objective on $C$ is the sup of the objective on the set of vertices, so if the sup is finite, it is realized at a vertex.
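
A brute-force sketch (using the standard-form data from the earlier invented example) that enumerates basic solutions by solving $A_B x_B = b$ for each set of $m$ independent columns; by the theorem above, the feasible ones are exactly the vertices of $C$.

```python
# Sketch: enumerate basic feasible solutions of Ax = b, x >= 0
# by trying every set of m linearly independent columns.
from itertools import combinations
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])     # [A I] from the earlier example
b = np.array([4.0, 6.0])
m, n = A.shape

vertices = []
for cols in combinations(range(n), m):
    B = A[:, cols]
    if np.linalg.matrix_rank(B) < m:     # columns must be independent
        continue
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)
    if np.all(x >= -1e-9):               # keep only feasible solutions
        vertices.append(np.round(x, 9))
print(np.array(vertices))                # each row is a vertex of C
```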

Duality

The dual program for problem (P) is

$$v_D(a) = \min\ y \cdot b \quad \text{s.t.} \quad yA \ge a,\ y \ge 0 \qquad (D)$$
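
A sketch solving (D) for the running invented example: linprog only accepts $\le$ constraints, so $yA \ge a$ is rewritten as $-A^\top y \le -a$.

```python
# Sketch: solve the dual  min y.b  s.t.  yA >= a, y >= 0
# for the running example, rewriting yA >= a as -A^T y <= -a.
import numpy as np
from scipy.optimize import linprog

a = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])

res = linprog(b, A_ub=-A.T, b_ub=-a, method="highs")
print("v_D(a) =", res.fun)    # 10.0, equal to v_P(b) from before
print("y* =", res.x)          # [1. 1.]
```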

Duality

What is the relationship between (P) and (D)? Write down the Lagrangean for each. For problem (P), letting $y$ denote the multipliers,

$$L(x,y) = a \cdot x + y \cdot b - yAx.$$

For problem (D), letting $x$ denote the multipliers,

$$L(y,x) = b \cdot y - yAx + a \cdot x.$$

The two Lagrangeans are the same function. To find a saddle-point of the Lagrangean $L(x,y)$ is to solve both the primal and the dual problems.

Duality Theorem

Theorem: i) If either problem (P) or problem (D) has a finite optimal value, then both have an optimal solution. ii) If $x$ and $y$ are feasible for the primal and dual, then they are solutions if and only if $a \cdot x = y \cdot b$. iii) If one problem is unbounded, then the other problem is infeasible.

For feasible $x$ and $y$, $a \cdot x \le yAx$ and $yAx \le y \cdot b$, so for all feasible primal solutions $x$ and dual solutions $y$, $a \cdot x \le y \cdot b$ (weak duality). This proves iii). If $x$ and $y$ are feasible for (P) and (D), respectively, then each expression bounds the value of the solution for the other problem. If $a \cdot x = y \cdot b$, then both bounds are achieved, and hence these solutions are optimal. This is one direction of ii). The usual proofs make use of the simplex method, which is not worth introducing here. I will provide a proof from general convex duality later.
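
A quick numerical check of ii) and of weak duality on the running example; the feasible-but-suboptimal pair below is my own choice.

```python
# Sketch: check a.x* = y*.b at the optimum, and a.x <= y.b
# for an arbitrary feasible pair (weak duality), on the running example.
import numpy as np

a = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 6.0])

x_star, y_star = np.array([2.0, 2.0]), np.array([1.0, 1.0])
print(a @ x_star, "==", y_star @ b)      # 10.0 == 10.0: both optimal

x_feas, y_feas = np.array([1.0, 1.0]), np.array([2.0, 1.0])  # merely feasible
print(a @ x_feas, "<=", y_feas @ b)      # 5.0 <= 14.0: weak duality
```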

Complementary Slackness

The primal and dual problems are:

$$\max\ a \cdot x \ \text{ s.t. } Ax \le b,\ x \ge 0 \qquad\qquad \min\ y \cdot b \ \text{ s.t. } yA \ge a,\ y \ge 0$$

Complementary Slackness Theorem: Suppose that $x^*$ and $y^*$ are feasible for the primal and dual problems, respectively. Then $x^*$ and $y^*$ are optimal for their respective problems if and only if

$$y^* \cdot (b - Ax^*) = 0 = (y^*A - a) \cdot x^*.$$

Complementary Slackness Interpretation

$$y^* \cdot (b - Ax^*) = 0 = (y^*A - a) \cdot x^*.$$

The four vectors $y^*$, $b - Ax^*$, $y^*A - a$ and $x^*$ are non-negative. So in each of the two inner products, at least one factor of each $i$th coordinate must be 0. If a constraint in the primal is not binding, then the corresponding dual variable is 0. If a constraint in the dual is not binding, then the corresponding primal variable is 0. This hints at sensitivity analysis: if a constraint is not binding, there is no gain to relaxing it or loss to tightening it.
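
A sketch checking that both complementary slackness products vanish at the optimal pair of the running example.

```python
# Sketch: verify the complementary slackness conditions at the
# optimal pair (x*, y*) of the running example.
import numpy as np

a = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 6.0])
x_star, y_star = np.array([2.0, 2.0]), np.array([1.0, 1.0])

print(y_star @ (b - A @ x_star))     # 0.0: primal constraints bind
print((y_star @ A - a) @ x_star)     # 0.0: dual constraints bind
```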

Proof of the Complementary Slackness Theorem

Suppose that $x^*$ and $y^*$ are feasible solutions that satisfy the complementary slackness conditions. Adding the two conditions gives $y^* \cdot b = a \cdot x^*$, and optimality follows from the duality theorem. If $x^*$ and $y^*$ are optimal, then since $Ax^* \le b$ and $y^*$ is non-negative, $y^* A x^* \le y^* \cdot b$. Similarly $a \cdot x^* \le y^* A x^*$. The duality theorem gives $y^* \cdot b = a \cdot x^*$, so $y^* A x^* = y^* \cdot b$ and $a \cdot x^* = y^* A x^*$.

Sensitivity Analysis: Concave Functions

Mathematicians write in terms of convex functions and minimization. We are interested in concave functions and maximization; these notes are for economists. We all know what concave functions are, but it is convenient to have an alternative description. Suppose that $S$ is a subset of $\mathbb{R}^n$. Let $\overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\}$.

Definition: The subgraph of a function $f : S \to \overline{\mathbb{R}}$ is $\operatorname{sub} f = \{(x,\mu) \in S \times \mathbb{R} : \mu \le f(x)\}$. The function $f$ is concave if $\operatorname{sub} f$ is convex in $\mathbb{R}^{n+1}$. The effective domain of $f$ is $\operatorname{dom} f = \{x : \exists \mu \text{ s.t. } (x,\mu) \in \operatorname{sub} f\} = \{x : f(x) > -\infty\}$.

A weak continuity requirement on a function $f$ is that its subgraph be closed. Continuous functions have closed subgraphs. Is the converse true? Exercise: Show that $v_P(b)$ has a closed subgraph.

Sensitivity Analysis: Derivatives

If a concave $f$ is smooth, then $f(z) \le f(x) + f'(x)(z - x)$ for all $x$ and $z$ in the domain. That is, the graph of $f$ lies below any tangent hyperplane to it. If $f$ is not smooth, we can still support $\operatorname{sub} f$ at any point $(x, f(x)) \in \mathbb{R}^{n+1}$.

Definition: A supergradient of $f$ at $x$ is a vector $x^*$ such that for all $z \in \operatorname{dom} f$, $f(z) \le f(x) + x^* \cdot (z - x)$.

Definition: The superdifferential $\partial f : \operatorname{dom} f \to \mathbb{R}^n$ is the correspondence that maps each $x \in \operatorname{dom} f$ to the set of all supergradients of $f$ at $x$.

Sensitivity Analysis: An Example

Suppose

$$f(x) = \begin{cases} 2x & \text{if } x \le 0, \\ x & \text{if } x \ge 0. \end{cases}$$

The superdifferential of $f$ is

$$\partial f(x) = \begin{cases} \{2\} & \text{if } x < 0, \\ [1,2] & \text{if } x = 0, \\ \{1\} & \text{if } x > 0. \end{cases}$$
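
A small numerical check of this example (mine, not the notes'): every slope in $[1,2]$ satisfies the supergradient inequality at $x = 0$.

```python
# Sketch: check the supergradient inequality f(z) <= f(0) + s*(z - 0)
# for the example f(x) = min(2x, x) and several slopes s in [1, 2].
import numpy as np

f = lambda x: np.minimum(2 * x, x)       # equals 2x for x<=0, x for x>=0

zs = np.linspace(-3, 3, 61)
for s in (1.0, 1.5, 2.0):                # candidate supergradients at x = 0
    assert np.all(f(zs) <= f(0.0) + s * zs + 1e-12)
print("every tested s in [1,2] is a supergradient of f at 0")
```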

Sensitivity Analysis: Shadow Prices

Theorem: The value function $v_P(b)$ is concave, and if $y^*$ solves (D) with objective $b$, then $y^* \in \partial v_P(b)$.

Proof: Suppose $x'$ and $x''$ solve (P) with constraint vectors $b'$ and $b''$. For $\lambda \in [0,1]$, $\lambda x' + (1-\lambda)x''$ is feasible for $\lambda b' + (1-\lambda)b''$, so

$$v_P\bigl(\lambda b' + (1-\lambda)b''\bigr) \ge a \cdot \bigl(\lambda x' + (1-\lambda)x''\bigr) = \lambda v_P(b') + (1-\lambda)v_P(b'').$$

If $y^*$ solves the dual with objective vector $b'$, then $y^* \ge 0$ and $y^* A \ge a$. Choose another objective vector $b''$ with optimal solution $x''$ for (P) and $y''$ for (D). Then $y^*$ is feasible for this dual problem too, so $y^* \cdot b'' \ge y'' \cdot b'' = a \cdot x'' = v_P(b'')$. Hence, using $v_P(b') = y^* \cdot b'$ from the duality theorem,

$$v_P(b') + y^* \cdot (b'' - b') = v_P(b') + y^* \cdot b'' - y^* \cdot b' = y^* \cdot b'' \ge v_P(b''),$$

which establishes the supergradient inequality.
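
A sketch checking the theorem on the running example: for a few perturbations $b''$ of $b$, the value change is bounded by $y^* \cdot (b'' - b)$, the supergradient inequality.

```python
# Sketch: on the running example, check the supergradient inequality
# v_P(b'') <= v_P(b) + y*.(b'' - b) for a few perturbed resource vectors.
import numpy as np
from scipy.optimize import linprog

a = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 6.0])
y_star = np.array([1.0, 1.0])            # dual solution at b, computed earlier

def v_P(bb):
    return -linprog(-a, A_ub=A, b_ub=bb, method="highs").fun

for db in ([1.0, 0.0], [0.0, 1.0], [-0.5, 0.5]):
    b2 = b + np.array(db)
    print(v_P(b2), "<=", v_P(b) + y_star @ np.array(db))
```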

Sensitivity Analysis: Shadow Prices

A similar result holds for the dual:

Theorem: $v_D(a)$ is convex, and if $x^*$ solves (P) with objective $a$, then $x^* \in \partial v_D(a)$.

In this case $v_D(a)$ is convex, the set we support is the epigraph $\{(x,\mu) : \mu \ge f(x)\}$, and the elements of $\partial v_D(a)$ are called subgradients. What is the subgradient inequality? These two theorems explain why dual variables are called shadow prices: they provide directional derivatives for changes in the value of a program with respect to the constraint parameters. Notice too that the solution to the primal gives shadow prices for the dual.

Proof of the Vertex Theorem

Choose $x \in C$. If $x$ is not a vertex, then for some $y \neq 0$, $x \pm y \in C$. Thus $Ay = 0$, and if $x_j = 0$ then $y_j = 0$.

To prove i), let $\lambda^*$ solve $\sup\{\lambda : x \pm \lambda y \in C\}$. Then $x \pm \lambda^* y \in C$, and one of $x \pm \lambda^* y$ has more zeros than $x$. Repeat this argument at most $n-1$ times to find a vertex in $C$.

To prove ii), w.l.o.g. take $a \cdot y \ge 0$ (replace $y$ by $-y$ if necessary). There are two cases:

i) $a \cdot y = 0$. W.l.o.g. there is a $j$ such that $y_j < 0$. Note that $x_k > 0$ for any $k$ s.t. $y_k < 0$. Look at $x + \lambda y$ for $\lambda \ge 0$: $a \cdot (x + \lambda y) = a \cdot x$, and $A(x + \lambda y) = b$. For large $\lambda$, $x_j + \lambda y_j < 0$, and so $x + \lambda y$ is not in $C$. Let $\lambda^* = \sup\{\lambda \ge 0 : x + \lambda y \in C\}$. Then $x + \lambda^* y \in C$, has at least one more zero than $x$, and the same value. Repeat at most $n-1$ times to reach a vertex.

ii) $a \cdot y > 0$. If there is a $y_j < 0$, apply the preceding argument. If $y \ge 0$, then $x + \lambda y \in C$ for all $\lambda > 0$, and so $a \cdot (x + \lambda y) = a \cdot x + \lambda\, a \cdot y \to \infty$ as $\lambda \to \infty$; then $v_P(b) = +\infty$, a contradiction.
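
A sketch of the zero-increasing step used in both parts of this proof (my own implementation, with invented toy data): step along $+y$ or $-y$ until one more coordinate of $x$ hits zero while $Ax = b$ is preserved.

```python
# Sketch of the purification step from the proof: given feasible x
# (Ax = b, x >= 0) and y != 0 with Ay = 0 and y_j = 0 wherever x_j = 0,
# move along +y or -y until one more coordinate of x hits zero.
import numpy as np

def purify_step(x, y, tol=1e-12):
    neg = y < -tol   # coordinates that shrink along +y
    pos = y > tol    # coordinates that shrink along -y
    lam = np.min(-x[neg] / y[neg]) if neg.any() else np.inf
    mu = np.min(x[pos] / y[pos]) if pos.any() else np.inf
    # step to whichever side stays feasible; an extra zero appears there
    return x + lam * y if lam <= mu else x - mu * y

# toy data, invented for illustration
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 1.0]])
x = np.array([2.0, 2.0, 1.0, 0.0])   # feasible for b = A @ x, not a vertex
y = np.array([1.0, -1.0, 0.0, 0.0])
assert np.allclose(A @ y, 0)          # y is a direction within Ax = b
print(purify_step(x, y))              # [4. 0. 1. 0.]: one more zero
```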