Some Advanced Topics in Linear Programming

Matthew J. Saltzman

July 2, 1995

Copyright 1995 by Matthew J. Saltzman. All rights reserved.

1 Connections with Algebra and Geometry

In this section, we will explore how some of the ideas in linear programming, duality theory, and the simplex method connect with the concepts from linear algebra and geometry discussed in Chapter 3 of Murty [1].

1.1 The Feasible Set of an LP

Recall the standard-form LP:

    Min  cx
    s.t. Ax = b                                      (LP)
         x ≥ 0.

The feasible set for this LP is the set S = {x ∈ R^n | Ax = b, x ≥ 0}. We will explore different characterizations of this set.

1.1.1 Linear Functions and Linear Equations

Recall that, for an m × n matrix A and an n-vector x, f(x) = Ax is a linear function of x; that is, f(αx + βy) = αf(x) + βf(y). Also recall that we can take two views of the matrix product b = Ax (where A and x are given).

The row-space view:

    b_i = A_{i·} x = Σ_{j=1}^{n} a_{ij} x_j,   for i = 1, ..., m.

In this view, each component of b is the inner product of the vector whose transpose is the corresponding row of A with the vector x.
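A minimal numpy sketch of the row-space view, using the same data as Example 1 below: each b_i is formed as the inner product of row i of A with x, and the result agrees with the matrix product A @ x.

    # Sketch: the row-space view of b = Ax (data of Example 1 below).
    import numpy as np

    A = np.array([[1.0, 3.0],
                  [4.0, 2.0]])
    x = np.array([3.0, 1.0])

    # Each component b_i is the inner product of row i of A with x.
    b = np.array([A[i, :] @ x for i in range(A.shape[0])])

    print(b)        # [ 6. 14.]
    print(A @ x)    # the same vector, computed as a matrix product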

The column-space view:

    b = Σ_{j=1}^{n} x_j A_{·j}

In this view, the vector b is a linear combination of the vectors that are the columns of the matrix A, with the scalar multiples of these vectors given by the values of the corresponding components of x.

Example 1. Consider the matrix

    A = [ 1  3 ]
        [ 4  2 ]

and the vector x = (3, 1)^T. The row-space view of A, x, and b = Ax = (6, 14)^T is depicted in Figure 1, and the column-space view is depicted in Figure 2.

Consider now a system of linear equations Ax = b (where A and b are given). In the row-space view, the set of solutions to a single linear equation is a hyperplane. (In two dimensions, a hyperplane is a line; in three dimensions, it is a plane, etc. In R^n, a hyperplane is an (n − 1)-dimensional affine space.) The solution to a system of equations is an affine space, namely the intersection of the hyperplanes corresponding to the individual equations. In Figure 1, the dotted lines represent the solutions of the two equations in the system, and their intersection occurs at the point x. Observe that the row vector corresponding to a single equation (called the normal vector of the equation) is perpendicular to the corresponding hyperplane of solutions.

In the column-space view, the solution set is a set of scalar weights in a linear combination of the columns of A that produces the right-hand-side vector b. In Figure 2, the linear combination is represented by the dotted lines.

1.1.2 Linear Programs

When discussing the constraints of a linear program, the row space is often referred to as activity space, referring to the interpretation of the x_j's as activity levels. The column space is correspondingly called requirements space, referring to the right-hand-side values as levels of requirements that must be met by the solution.

In activity space, the set of solutions to a linear program in standard form (LP) is the intersection of the affine space of solutions to the equations Ax = b and the cone of solutions to x ≥ 0. This region is a polyhedron (a polytope if it is bounded). Associated with each point in this space is a weight, corresponding to the objective function value of the point. The optimization problem is to find the point in the polytope with the smallest (or largest) weight.
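Finding the smallest-weight point over a polyhedron is exactly what an LP solver computes. A minimal Python sketch using scipy.optimize.linprog on a small standard-form instance (the constraints are those of Example 2 below; the cost vector c is hypothetical, and linprog's default bounds already impose x ≥ 0):

    # Sketch: Min cx s.t. Ax = b, x >= 0, solved with scipy.
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([2.0, 3.0, 1.0])       # hypothetical weights (objective)
    A = np.array([[1.0, 1.0, 1.0]])     # one equality constraint
    b = np.array([1.0])

    res = linprog(c, A_eq=A, b_eq=b)    # default bounds are x >= 0
    print(res.x, res.fun)               # optimal point [0. 0. 1.], weight 1.0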

[Figure 1: Row-space interpretation of a linear function.]

[Figure 2: Column-space interpretation of a linear function.]

Example 2. Consider the constraints

    x_1 + x_2 + x_3 = 1
    x_1, x_2, x_3 ≥ 0.

The set of feasible solutions is shown in Figure 3.

[Figure 3: Feasible solutions to an LP.]

In requirements space, the feasible set is harder to picture. The set of nonnegative multiples of the columns of A forms a cone. If the right-hand-side vector b lies within this cone, then the LP is feasible; otherwise it is not. The feasible set is simply the set of all nonnegative combinations of columns of A that produce b.

Example 3. Consider the constraints

    x_1, x_2, x_3 ≥ 0.

The columns of the constraint matrix and the right-hand-side vector are shown in Figure fig-lp-row. Also shown are...
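This column-space picture has a direct computational analogue: b lies in the cone of the columns of A exactly when the system Ax = b, x ≥ 0 is consistent, which a solver's phase I can decide. A minimal Python sketch with hypothetical 2 × 3 data:

    # Sketch: test whether b lies in the cone generated by the columns of A,
    # i.e., whether Ax = b, x >= 0 is feasible (hypothetical data).
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 1.0]])
    b = np.array([3.0, 2.0])

    # Zero objective: we only ask whether {x | Ax = b, x >= 0} is nonempty.
    res = linprog(np.zeros(A.shape[1]), A_eq=A, b_eq=b)
    print("b is in the cone of the columns of A:", res.status == 0)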

For a bounded LP, the basic feasible solutions (in activity space) correspond to extreme points of the feasible polytope. Every feasible solution is a convex combination of these extreme points, so the polytope is the convex hull of the extreme points. An advanced theorem in linear programming called the Representation Theorem states that a polytope can be described equivalently either as the solution set of a system of linear inequalities or as the convex hull of the set of extreme points. A related but more complicated result holds when the feasible region is unbounded, but we will not discuss it here.

1.1.3 Geometric Interpretation of Duality

Another important advanced theorem, called the Separating Hyperplane Theorem, states that, given a polyhedron (indeed, any convex set) and a point not contained in the polyhedron, there is a hyperplane with the property that the set lies on one side and the point on the other. If we take a point on the boundary of the polyhedron, the Supporting Hyperplane Theorem states that there is a hyperplane that contains that point, such that the polyhedron lies entirely in or on one side of the hyperplane.

Recall that the dual problem can be viewed as the problem of finding a linear combination of the constraint equations that bounds the objective values of all feasible solutions. Given a basic feasible solution x to the primal, the complementary dual solution constructs an iso-objective hyperplane that passes through x. The solution is dual feasible if this iso-objective hyperplane is a supporting hyperplane for the primal feasible region.

Example 4. For the primal LP in Example 5, the optimal iso-objective line is the hyperplane constructed by taking a linear combination of the two equality constraints with multipliers π_1 = 5/3 and π_2 = 2/3.

2 The Dual Simplex Method

We have already seen how to analyze the effect of certain changes in the coefficients of a linear program. Certain types of changes are easy to analyze, because they affect only the optimality of the current solution, not its feasibility. For example, adding a new primal variable to the problem does not affect feasibility. We can easily check optimality conditions by computing the appropriate reduced cost, and reoptimize (if necessary) by performing simplex pivots. Changes to objective function coefficients or nonbasic technology coefficients can be treated similarly.

More difficult to cope with (using the tools we have so far) are changes that affect feasibility, such as changes to the right-hand-side coefficients or the addition of a new inequality constraint.

We can test easily enough whether the new solution is feasible, but if the change makes the solution infeasible, we currently have no easy way to start from that solution and seek the new optimum.

Such a method would certainly be useful in sensitivity analysis, but it is absolutely critical in integer programming. Both the branch-and-bound algorithm and the cutting-plane algorithms involve solving a sequence of linear programs, each derived from an earlier one by adding a constraint or constraints that are guaranteed to make the previous optimal solution infeasible. If we could not efficiently reoptimize after adding the new inequalities, we would have no hope of solving even relatively small IPs in reasonable time. In this section, we will develop just such a method, namely the dual simplex algorithm.

The fundamental insight that allows us to efficiently reoptimize after feasibility is lost is based on the symmetric relationship between the primal and dual LPs (i.e., if LP2 is the dual of LP1, then LP1 is the dual of LP2). Given a complementary primal-dual pair of solutions, we know that the primal optimality condition is that the complementary dual solution is dual feasible. Symmetry allows us to conclude that the dual optimality condition is that the complementary primal solution is primal feasible. That is, primal optimality and dual feasibility are equivalent, and dual optimality and primal feasibility are equivalent. Thus, a basic primal-dual pair can be in one of four states:

1. Primal feasible and dual feasible (dual optimal and primal optimal). This is the solution we seek.

2. Primal feasible and dual infeasible (i.e., primal suboptimal). In this case, we can continue from the current basis by applying the simplex method to the primal problem. This is the case we have encountered in post-optimality analysis.

3. Primal infeasible and dual feasible (i.e., primal superoptimal). In this case, we can proceed by applying the simplex method to the dual problem. This is the case we will examine here.

4. Primal and dual infeasible. In this case, we need to apply Phase I to the primal or the dual.

Example 5. Consider the following primal-dual pair of LPs:

    Min  3x_1 + 4x_2              Max  4π_1 + 5π_2
    s.t. x_1 + 2x_2 ≥ 4           s.t. π_1 + 2π_2 ≤ 3
         2x_1 + x_2 ≥ 5                2π_1 + π_2 ≤ 4
         x_1, x_2 ≥ 0                  π_1, π_2 ≥ 0

Figure 4 shows the feasible regions of the primal and dual problems, with the corresponding complementary basic solution pairs labeled in each graph.
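The pair in Example 5 is small enough to check numerically. A minimal Python sketch using scipy.optimize.linprog (which minimizes, so the ≥ rows and the dual's Max objective are negated); it recovers x = (2, 1) and π = (5/3, 2/3), with common objective value 10:

    # Sketch: numerical check of the primal-dual pair in Example 5.
    import numpy as np
    from scipy.optimize import linprog

    # Primal: Min 3x1 + 4x2  s.t.  x1 + 2x2 >= 4,  2x1 + x2 >= 5,  x >= 0.
    c = np.array([3.0, 4.0])
    A = np.array([[1.0, 2.0],
                  [2.0, 1.0]])
    b = np.array([4.0, 5.0])
    primal = linprog(c, A_ub=-A, b_ub=-b)     # negate the >= rows

    # Dual: Max 4pi1 + 5pi2  s.t.  A^T pi <= c,  pi >= 0.
    dual = linprog(-b, A_ub=A.T, b_ub=c)      # minimize the negated objective

    print(primal.x, primal.fun)    # [2. 1.], 10.0
    print(dual.x, -dual.fun)       # [1.6667 0.6667], 10.0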

The optimal iso-objective lines are also shown. We see that bases that are primal suboptimal are dual infeasible, and vice versa, and that the basis labeled e is primal and dual feasible, and hence optimal. (Of course, bases that are primal and dual infeasible are possible, but there are none in this problem.)

2.1 The Simplex Method Applied to the Dual

Consider the dual of (LP):

    Max  πb
    s.t. πA ≤ c
         π unrestricted.

After the addition of dual slacks (call them y), we can transpose the problem so that the variables appear in column vectors:

    Max  b^T π
    s.t. A^T π + Iy = c^T                            (DLP)
         π unrestricted, y ≥ 0.

The resulting linear system has n rows and m + n columns. Clearly the system has rank n (since the n columns of I are independent), so we can definitely select a basis. But which one?

2.1.1 Free Variables Revisited

We have seen two ways of handling free variables (such as π) when converting a problem to standard form. If the columns associated with these variables are linearly independent (as we assume the columns of A^T are), there is a third possibility: we can simply include these variables in the basis right from the start. Recall that the simplex method's ratio test is designed to prevent variables that are required to be nonnegative from violating that constraint. But it is OK for free variables to take on negative values, so the rows in the tableau labeled with free variables can simply be skipped during the ratio test. Thus once a free variable is in the basis, it will never leave!

2.1.2 Constructing a Dual Basis

Suppose we know a set of m linearly independent rows of A^T. We can partition A^T into these rows (B^T) and the remaining rows (N^T).

[Figure 4: Correspondence of primal and dual basic solutions.]

The columns of A^T, together with the columns of I with 1s in the rows corresponding to N (denoted I_N), form an n × n nonsingular basis matrix:

    B̂ = [ B^T  0   ]        B̂^{-1} = [ B^{-T}        0   ]
         [ N^T  I_N ]                  [ −N^T B^{-T}  I_N ]

(It is easy to verify the theorems from linear algebra that (AB)^T = B^T A^T and that, if AB is nonsingular, then A and B are also, with (AB)^{-1} = B^{-1} A^{-1}. It is also easy to verify that if B is nonsingular, then (B^T)^{-1} = (B^{-1})^T; we denote this matrix B^{-T}.)

It is plain that for every basis for the primal problem, there is a unique basis for the dual problem. The dual basic variables are π plus the dual slacks (reduced costs) associated with the nonbasic primal variables (y_N). If we know a primal optimal basis, the corresponding dual basic solution must be feasible, and we can start the simplex method on the dual problem.

2.1.3 A Simplex Pivot in the Dual

The current dual solution is the solution to the system

    [ B^T  0   ] [ π   ]   [ c_B^T ]
    [ N^T  I_N ] [ y_N ] = [ c_N^T ],

which is π = B^{-T} c_B^T and y_N = c_N^T − N^T π = c_N^T − N^T B^{-T} c_B^T = c̄_N^T. The dual reduced cost vector is

    0 − (b^T, 0) B̂^{-1} I_B,

where I_B denotes the columns of I with 1s in the rows corresponding to B; after multiplication, this is −b^T B^{-T}. Since the dual is to be maximized, optimality will be achieved when all dual reduced costs are nonpositive, i.e., −b^T B^{-T} ≤ 0, or equivalently, B^{-1} b ≥ 0. Thus, if x_j is a primal basic variable and x_j < 0, then y_j is a candidate to enter the dual basis.
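These identities can be checked numerically. The following numpy sketch uses the optimal basis {x_4, x_3} from Example 6 below (data as given there) and verifies that π = B^{-T} c_B^T, that y_N equals the primal reduced costs c̄_N, and that the dual reduced costs −b^T B^{-T} are exactly the negated values of the primal basic variables:

    # Sketch: check the dual-basis identities on Example 6's optimal basis.
    import numpy as np

    A = np.array([[1.0,  1.0, -2.0, 1.0, 0.0],
                  [4.0, -2.0, -1.0, 0.0, 1.0]])
    b = np.array([-1.0, -2.0])
    c = np.array([2.0, 3.0, 1.0, 0.0, 0.0])

    basic, nonbasic = [3, 2], [0, 1, 4]      # basis {x4, x3} (0-indexed)
    B, N = A[:, basic], A[:, nonbasic]
    Binv = np.linalg.inv(B)

    pi = Binv.T @ c[basic]                   # pi = B^{-T} c_B
    y_N = c[nonbasic] - N.T @ pi             # dual slacks = reduced costs
    x_B = Binv @ b                           # primal basic values

    print(pi)             # [ 0. -1.]
    print(y_N)            # [6. 1. 1.]   (>= 0: dual feasible)
    print(-b @ Binv.T)    # [-3. -2.]    (<= 0: dual optimal, equals -x_B)
    print(x_B)            # [3. 2.]      (>= 0: primal feasible, hence optimal)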

Since the column associated with the entering y_j is a column of I, the updated direction vector is just a column of B̂^{-1}. In particular, it is the column of

    [ B^{-T}       ]
    [ −N^T B^{-T}  ]

corresponding to the row of B^{-1} in the primal that is labeled with x_j. Since we ignore the π variables in the ratio test, the ratio test compares the nonnegative entries of this column with the corresponding values of y_N = c̄_N. If there are no nonnegative entries in the column, the dual problem is unbounded, so the primal problem is infeasible. The minimum ratio determines the dual variable y_k that will leave the dual basis. To keep up, the primal variable x_k must enter the primal basis. The pivot is completed by updating the basis inverse and the primal and dual variable values as usual.

2.1.4 The Dual Simplex Method

Careful study of the steps outlined above reveals that there is no information required for a simplex pivot in the dual that is not already available in the primal revised simplex tableau. In particular, the dual reduced costs can be derived from the values of the primal basic variables, and the dual ratio test computations involve the primal reduced costs and one component of each of the updated primal direction vectors. These direction components can be computed by taking the product of each nonbasic column of A with a single row of B^{-1} (the row corresponding to the infeasible basic primal variable).

The dual simplex method begins with the revised simplex tableau associated with a dual feasible basis:

    x_B | B^{-1} | b̄
     z  |   π    | z̄

The steps are as follows (all terminology is with respect to the primal basis):

1. Leaving Primal Variable. Find a negative basic variable. If there are none, then stop: the current solution is dual feasible and primal feasible, hence optimal. Otherwise, suppose b̄_i < 0.

2. Dual Direction Vector. For each nonbasic variable x_j, compute the ith entry of B^{-1} A_j (the ith row of B^{-1} times A_j). Call these entries ā_j.

3. Ratio Test/Entering Primal Variable. For each negative entry ā_j computed in the previous step, compute the reduced cost c̄_j and the ratio c̄_j/ā_j. If there are no negative entries, then stop: the dual is unbounded, so the primal is infeasible. Otherwise, let the minimum-magnitude (i.e., least negative) ratio be achieved for column k. Then x_k is the entering variable.

4. Pivot. Compute the rest of the updated primal direction vector d = B^{-1} A_k. Append the column

       [ d   ]
       [ c̄_k ]

   to the tableau and perform an elimination step, pivoting on the ith entry of the added column, as in the primal simplex method. Go to Step 1.
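The four steps translate directly into code. The following Python sketch maintains B^{-1} explicitly as a dense matrix rather than updating a revised tableau, and it omits anti-cycling safeguards; it is an illustration of the steps above, not a production implementation. Run on the data of Example 6 below, it performs the same pivot as the worked tableau.

    # Sketch: the dual simplex method of Section 2.1.4 in dense form.
    # Assumes the starting basis is dual feasible (all reduced costs >= 0).
    import numpy as np

    def dual_simplex(A, b, c, basic):
        m, n = A.shape
        basic = list(basic)
        while True:
            B_inv = np.linalg.inv(A[:, basic])
            x_B = B_inv @ b
            # Step 1: leaving variable, the most negative basic value.
            i = int(np.argmin(x_B))
            if x_B[i] >= 0:
                x = np.zeros(n)
                x[basic] = x_B
                return x, basic                   # primal feasible: optimal
            pi = B_inv.T @ c[basic]
            nonbasic = [j for j in range(n) if j not in basic]
            # Step 2: ith entry of B^{-1} A_j for each nonbasic j.
            abar = {j: B_inv[i, :] @ A[:, j] for j in nonbasic}
            # Step 3: ratio test over the negative entries.
            cand = [j for j in nonbasic if abar[j] < 0]
            if not cand:
                raise ValueError("dual unbounded: primal infeasible")
            cbar = {j: c[j] - pi @ A[:, j] for j in cand}
            k = min(cand, key=lambda j: abs(cbar[j] / abar[j]))
            # Step 4: pivot, x_k enters in place of the ith basic variable.
            basic[i] = k

    # Example 6 data (as given below):
    A = np.array([[1.0,  1.0, -2.0, 1.0, 0.0],
                  [4.0, -2.0, -1.0, 0.0, 1.0]])
    b = np.array([-1.0, -2.0])
    c = np.array([2.0, 3.0, 1.0, 0.0, 0.0])
    x, basis = dual_simplex(A, b, c, basic=[3, 4])   # start with {x4, x5}
    print(x)   # [0. 0. 2. 3. 0.], with objective c @ x = 2.0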

Example 6. Consider the LP

    Min  2x_1 + 3x_2 + x_3
    s.t.  x_1 +  x_2 − 2x_3 + x_4       = −1
         4x_1 − 2x_2 −  x_3       + x_5 = −2
         x_1, ..., x_5 ≥ 0

with dual

    Max  −u_1 − 2u_2
    s.t.  u_1 + 4u_2 ≤ 2
          u_1 − 2u_2 ≤ 3
        −2u_1 −  u_2 ≤ 1
          u_1 ≤ 0,  u_2 ≤ 0.

An obvious dual-feasible basis is x_B = (x_4, x_5)^T, with π = (0, 0). The primal detached coefficient tableau is

    x_1  x_2  x_3  x_4  x_5   z |  b
     1    1   −2    1    0    0 | −1
     4   −2   −1    0    1    0 | −2
     2    3    1    0    0    1 |  0

and the starting revised dual simplex tableau is

    x_4 | 1  0 | −1
    x_5 | 0  1 | −2
     z  | 0  0 |  0

We choose x_5 to leave the basis (the most negative variable), and compute the dual direction vector ā = (4, −2, −1). The reduced cost vector is c̄ = (2, 3, 1), and the minimum-magnitude ratio over the second and third components is 1, achieved by the third, so k = 3. Appending the updated direction vector for x_3 to the tableau gives

    x_4 | 1  0 | −1 | −2
    x_5 | 0  1 | −2 | −1
     z  | 0  0 |  0 |  1

Pivoting gives

    x_4 | 1  −2 |  3
    x_3 | 0  −1 |  2
     z  | 0  −1 | −2

Since b̄ ≥ 0, this basic pair is primal and dual feasible, so it is optimal: x = (0, 0, 2, 3, 0)^T and π = (0, −1).

2.1.5 Constructing a Dual Feasible Basis

For certain kinds of LPs (such as those where all primal variables have upper and lower bounds), constructing a dual-feasible starting basis is easy. For these problems, it makes sense to construct this basis and solve using the dual simplex method (using special techniques to handle the bound constraints). For problems with more constraints than variables, it may make sense to actually formulate the dual problem and solve it, perhaps using the dual simplex method if a primal-feasible basis for the original problem is easy to construct. In fact, state-of-the-art commercial codes often solve LPs from scratch using these techniques with great success, but the most important use for the dual simplex method is probably still in post-optimality analysis and integer programming algorithms, where it is used to reoptimize after adding a constraint. We will concentrate on this use.

One concern in post-optimality analysis is assessing the effect of changes to the right-hand-side vector b. If changing b to b′ affects the feasibility of the current basis (i.e., B^{-1} b′ ≱ 0), then we can simply start the dual simplex method from the current basis.

The other post-optimality problem, and the problem in branch-and-bound and cutting-plane algorithms, is to assess the effect of the addition of a new inequality constraint. In this case, the number of rows is increased by one, and we need to add a new variable to the basis. Since the constraint to be added is an inequality, it also introduces a new slack or surplus variable. We can augment the basis with this new variable: if the new constraint is ax + s = β, then the new basis is

    B̂ = [ B    0 ]
         [ a_B  1 ]

and its inverse is

    B̂^{-1} = [ B^{-1}       0 ]
              [ −a_B B^{-1}  1 ];

if the constraint is ax − s = β, then the new basis is

    B̂ = [ B    0  ]
         [ a_B  −1 ]

and its inverse is

    B̂^{-1} = [ B^{-1}      0  ]
              [ a_B B^{-1}  −1 ].

The shadow prices and basic variable values can be easily computed, and the dual simplex method applied if necessary. When applying this method in a 0-1 branch-and-bound context, we can branch by adding the constraint x_j ≤ 0 or x_j ≥ 1, without regard for the fact that x_j ≥ 0 and x_j ≤ 1 are already constraints in the problem.²
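The block formulas for B̂^{-1} can be verified mechanically. A minimal numpy sketch, using the basis from Example 6 and a hypothetical cut row a_B:

    # Sketch: verify the augmented-basis inverse for adding ax + s = beta.
    import numpy as np

    B = np.array([[1.0, -2.0],
                  [0.0, -1.0]])      # current basis (from Example 6)
    a_B = np.array([2.0, 5.0])       # hypothetical coefficients of the
                                     # basic variables in the new constraint
    B_inv = np.linalg.inv(B)

    # B_hat = [[B, 0], [a_B, 1]] and its claimed inverse.
    B_hat = np.block([[B, np.zeros((2, 1))],
                      [a_B.reshape(1, 2), np.ones((1, 1))]])
    B_hat_inv = np.block([[B_inv, np.zeros((2, 1))],
                          [-(a_B @ B_inv).reshape(1, 2), np.ones((1, 1))]])

    print(np.allclose(B_hat @ B_hat_inv, np.eye(3)))   # True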

July 2, 995 2 : 39 DRAFT 3 fact that x j andx j are already constraints in the problem. 2 References [] K. G. Murty, Operations Research: Deterministic Optimization Models, Prentice Hall, 995. 2 Commercial-quality codes that handle bounds implicitly don t have to worry about increasing the size of the basis when imposing simple bound constraints or fixing variables in branch-andbound.