New Directions in Linear Programming


New Directions in Linear Programming

Robert Vanderbei
November 5, 2001
INFORMS Miami Beach

NOTE: This is a talk mostly on pedagogy. There will be some new results. It is not a talk on state-of-the-art implementations of algorithms for LP.

Operations Research and Financial Engineering, Princeton University
http://www.princeton.edu/~rvdb

1 Outline

Themes: Symmetry & Parametric Methods & Segues

- Strong Duality Theorem: Negative Transpose Property
- Simplex Methods: Primal, Dual, Two-Phase
- Efficiency: A Parametric Klee-Minty Problem
- The Parametric Self-Dual Simplex Method: a Path-Following Method
- Average Performance
- Two Algorithms for L_1 Regression: Reweighted Least Squares
- Symmetry in Network Flow Problems: Geometric Dual, Algebraic Dual
- An Interior-Point Method (IPM): Path-Following
- Another IPM: Affine-Scaling = Reweighted Least Squares

2 LPs in Inequality Form A Primal-Dual Symmetric Setting

Primal Problem:
max c^T x
s.t. Ax ≤ b, x ≥ 0

Dual Problem:
min b^T y
s.t. A^T y ≥ c, y ≥ 0

Dual in Canonical Form:
max −b^T y
s.t. −A^T y ≤ −c, y ≥ 0
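As a quick sanity check of this primal-dual pair, here is a sketch (my own illustration, not from the talk) that solves a small made-up instance of both problems with scipy.optimize.linprog and confirms the objective values coincide; the data A, b, c are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

# A small instance of the primal-dual pair (data chosen arbitrarily):
#   primal: max c'x  s.t. Ax <= b, x >= 0
#   dual:   min b'y  s.t. A'y >= c, y >= 0
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 2.0])
c = np.array([3.0, 2.0])

# linprog minimizes, so negate the primal objective.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual in canonical form: max -b'y  s.t.  -A'y <= -c, y >= 0.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

# Strong duality: the optimal objective values agree (no duality gap).
print(-primal.fun, dual.fun)  # both 10.0
```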

3 Dictionary Notation & Basic Solutions Primal-Dual Symmetry

Primal dictionary:
ζ = 0 + c^T x
w = b − Ax
after pivoting:
ζ = ζ* + (c*_N)^T x_N
x_B = b* − B^{-1}N x_N     (b* = B^{-1}b)

Dual dictionary (the negative transpose of the primal):
−ξ = 0 − b^T y
z = −c + A^T y
after pivoting:
−ξ = −ζ* − (b*)^T z_B
z_N = −c*_N + (B^{-1}N)^T z_B

Theorem: dual = negative transpose.

Corollary (Strong Duality Theorem): When the primal is optimal, so is the dual, and the objective function values agree, i.e., there is no duality gap.

4 Proof of the Negative Transpose Property

A pivot on the entry a transforms a dictionary block as follows:

[ b   a ]    pivot    [ b/a         1/a  ]
[ d   c ]   ------>   [ d − bc/a   −c/a  ]

Now, if we start with a dual dictionary that is the negative transpose of the primal and apply the corresponding pivot operation (on −a), we get

[ −b  −d ]    pivot    [ −b/a   −d + bc/a ]
[ −a  −c ]   ------>   [ −1/a    c/a      ]

which is again the negative transpose of the pivoted primal dictionary.
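The pivot arithmetic is easy to verify numerically. The following sketch (my own illustration, not part of the talk) implements the dictionary pivot rule and checks the negative-transpose property on a random matrix:

```python
import numpy as np

def pivot(M, p, q):
    """One dictionary pivot on entry (p, q) of coefficient matrix M:
    a -> 1/a, pivot row -> b/a, pivot column -> -c/a, rest -> d - bc/a."""
    a = M[p, q]
    P = M - np.outer(M[:, q], M[p, :]) / a   # d - bc/a everywhere
    P[p, :] = M[p, :] / a                    # pivot row
    P[:, q] = -M[:, q] / a                   # pivot column
    P[p, q] = 1.0 / a                        # pivot element
    return P

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 4))

# Negative-transpose property: pivoting the dual (the negative transpose)
# at the transposed position reproduces the negative transpose of the
# pivoted primal.
lhs = pivot(-M.T, 1, 2)       # dual pivot at (q, p) = (1, 2)
rhs = -pivot(M, 2, 1).T       # primal pivot at (p, q) = (2, 1)
print(np.allclose(lhs, rhs))  # True
```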

5 Simplex Method and Duality A Primal Problem: Its Dual: Notes: The dual is the negative transpose of the primal. The primal is feasible, the dual is not. Use the primal to choose the pivot: x_2 enters, w_2 leaves. Make the analogous pivot in the dual: z_2 leaves, y_2 enters.

6 Second Iteration After First Pivot: Primal (still feasible): Dual (still not feasible): Note: negative transpose property intact. Again, use the primal to pick the pivot: x_3 enters, w_1 leaves. Make the analogous pivot in the dual: z_3 leaves, y_1 enters.

7 After Second Iteration Primal: Is optimal. Dual: Negative transpose property remains intact. Is optimal. Conclusion: The simplex method applied to the primal problem (two phases, if necessary) solves both the primal and the dual. The negative-transpose property implies a zero duality gap.

8 Dual Simplex Method When: dual feasible, primal infeasible (i.e., pinks on the left, not on top). An Example. Showing both primal and dual dictionaries: Looking at the dual dictionary: y_2 enters, z_2 leaves. On the primal dictionary: w_2 leaves, x_2 enters. After the pivot...

9 Dual Simplex Method: Second Pivot Going in, we have: Looking at the dual: y_1 enters, z_4 leaves. Looking at the primal: w_1 leaves, x_4 enters.

10 Dual Simplex Method Pivot Rule Referring to the primal dictionary: Pick the leaving variable from those rows that are infeasible. Pick the entering variable from a box with a negative value that can be increased the least (on the dual side). Next primal dictionary shown on next page...

11 Dual Simplex Method: Third Pivot Going in, we have: x_2 leaves, x_1 enters. The resulting dictionary is OPTIMAL:

12 A Two-Phase Method: Dual-Based Phase I An Example: Notes: Two objective functions: the true objective (on top), and a fake one (below it). For Phase I, use the fake objective: it's dual feasible. Two right-hand sides: the real one (on the left) and a fake one (on the right). Ignore the fake right-hand side; we'll use it in another algorithm later. Phase I First Pivot: w_3 leaves, x_1 enters. After first pivot...

13 Dual-Based Phase I Method Second Pivot Current dictionary: Dual pivot: w_2 leaves, x_2 enters. After the pivot:

14 Dual-Based Phase I Method Third Pivot Current dictionary: Dual pivot: w_1 leaves, w_2 enters. After the pivot: It's feasible!

15 Fourth Pivot Phase II Current dictionary: It's feasible. Ignore the fake objective; use the real thing (top row). Primal pivot: x_3 enters, w_4 leaves. After the pivot: Unbounded! Note: one could instead use a primal Phase I (with the fake right-hand side) and then a dual-method Phase II.

16 Efficiency: The Klee-Minty Problem via Parametric Methods

maximize    Σ_{j=1}^{n} 2^{n−j} x_j
subject to  2 Σ_{j=1}^{i−1} 2^{i−j} x_j + x_i ≤ 100^{i−1},   i = 1, 2, ..., n
            x_j ≥ 0,   j = 1, 2, ..., n.

Example, n = 3:

maximize    4x_1 + 2x_2 + x_3
subject to   x_1 ≤ 1
            4x_1 +  x_2 ≤ 100
            8x_1 + 4x_2 + x_3 ≤ 10000
            x_1, x_2, x_3 ≥ 0.
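For concreteness, here is a sketch (not from the talk; the helper name klee_minty is mine) that generates the Klee-Minty data for general n and checks the n = 3 optimum with scipy:

```python
import numpy as np
from scipy.optimize import linprog

def klee_minty(n):
    """Data for the Klee-Minty problem:
    max sum_j 2^(n-j) x_j  s.t.  2*sum_{j<i} 2^(i-j) x_j + x_i <= 100^(i-1)."""
    c = np.array([2.0 ** (n - j) for j in range(1, n + 1)])
    A = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, i):
            A[i - 1, j - 1] = 2.0 * 2.0 ** (i - j)
        A[i - 1, i - 1] = 1.0
    b = np.array([100.0 ** (i - 1) for i in range(1, n + 1)])
    return c, A, b

c, A, b = klee_minty(3)
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)
print(-res.fun)  # 10000.0: the optimum puts everything on x_3
```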

17 Efficiency: A Parametric Klee-Minty Problem

Replace the right-hand sides 1, 100, 10000, ... with parameters 1 = b_1 ≪ b_2 ≪ b_3 ≪ ··· . Then make the following replacements to the right-hand side:

b_1 → b_1
b_2 → 2b_1 + b_2
b_3 → 4b_1 + 2b_2 + b_3
b_4 → 8b_1 + 4b_2 + 2b_3 + b_4

Hardly a change! Make a similar constant adjustment to the objective function.

18 Parametric Klee-Minty Problem: Case n = 3 Now watch the pivots...

Observe: only sign changes; no other numerical changes. Put pink = 0, gray = 1. The pivots count from 0 to 2^n − 1 in base 2.

Degeneracy

The lexicographic method for resolving degeneracy can be thought of as a method with several parametric right-hand sides: ɛ_1 ≫ ɛ_2 ≫ ɛ_3 ≫ ··· . No time to consider this further.

19 The Parametric Self-Dual Simplex Method

A Symmetric Method. A Parametric Method. A Path-Following Method too.

In general, one expects both c and b to contain negative elements. Consider a perturbation by a real parameter µ:

ζ = ζ* + (c*_N + µe)^T x_N
x_B = (b* + µe) − B^{-1}N x_N

For µ large, this dictionary is optimal. Decrease µ to the brink of nonoptimality. Two cases:
1. an objective coefficient is going negative: do a primal pivot.
2. a right-hand side is going negative: do a dual pivot.
Continue pivoting, reducing the interval of optimal µ values, until µ = 0 is optimal.
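The bracketing step can be illustrated in a few lines. This sketch (illustrative numbers of my own, not from the talk) computes the smallest µ that keeps a perturbed dictionary optimal and reports which of the two cases triggers:

```python
# Optimality of the perturbed dictionary requires
#   c_j + mu >= 0 for every nonbasic j   and   b_i + mu >= 0 for every basic i.
# The smallest workable mu is driven by the most negative coefficient;
# which side attains it dictates a primal or a dual pivot.
# (Illustrative numbers; not from the talk.)

c_bar = [2.0, -3.0, -0.5]   # reduced objective coefficients
b_bar = [-1.0, 4.0]         # basic right-hand sides

mu_obj = max(-cj for cj in c_bar)       # threshold from objective row
mu_rhs = max(-bi for bi in b_bar)       # threshold from right-hand side
mu_star = max(mu_obj, mu_rhs, 0.0)

pivot_type = "primal" if mu_obj >= mu_rhs else "dual"
print(mu_star, pivot_type)  # 3.0 primal: x_2's coefficient hits zero first
```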

20 Parametric Self-Dual Simplex Method with the Pivot Tool It's easy to use the pivot tool to do the parametric self-dual simplex method. Just think of the artificial objective row and the artificial right-hand side column as being multiplied by µ and added to the true objective and right-hand side:

21 Remarks It suffices to perturb just the infeasible (i.e. negative) coefficients. It is not necessary to use e as the initial perturbation vector. It's better to randomize this vector. If one perturbs all coefficients randomly, then the probability of encountering a degenerate dictionary (with µ > 0) is zero. An example: on a randomly generated (600 × 90,000) assignment problem, this variant took 1655 iterations compared to 9951 iterations for a two-phase simplex method.

22 Average Performance An Unexpected Result.

Thought Experiment:
- Consider the parametric self-dual method with random initial perturbations.
- The parameter µ starts at ∞.
- Via pivoting, we reduce µ to zero.
- There are n + m barriers to reducing µ.
- With each pivot one barrier is passed; of course, all of the other barriers move.
- For a random problem there is no propensity for the jumping barriers to intervene.
- Hence, we expect: expected number of iterations = (m + n)/2.

Empirical Study:
- Using the netlib suite of test problems, we found iterations ≈ 0.517 (m + n)^{1.05}.
- Remarkably close!

23 Regression Speaking of regression, L_1 regression problems can be formulated as LPs:

Least Squares Regression (L_2):
x̂ = argmin_x ‖b − Ax‖_2^2
⇒ solve a system of equations: x̂ = (A^T A)^{-1} A^T b

Least Absolute Deviation Regression (L_1):
x̂ = argmin_x ‖b − Ax‖_1
⇒ solve an LP:
min Σ_i t_i
s.t. −t_i ≤ b_i − Σ_j a_ij x_j ≤ t_i
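The L_1 LP can be handed directly to a generic solver. A sketch (my own, using scipy.optimize.linprog; the helper name l1_regression is mine), checked on the intercept-only case, where the L_1 fit is the median of b:

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression(A, b):
    """Solve min ||b - Ax||_1 as an LP:
    min sum_i t_i  s.t.  -t_i <= b_i - sum_j a_ij x_j <= t_i."""
    m, n = A.shape
    # Variables: [x (free), t (>= 0)].
    cost = np.concatenate([np.zeros(n), np.ones(m)])
    I = np.eye(m)
    A_ub = np.block([[A, -I], [-A, -I]])   #  Ax - t <= b  and  -Ax - t <= -b
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.fun

# Intercept-only fit: the L1 solution is the median of b.
A = np.ones((3, 1))
b = np.array([1.0, 2.0, 10.0])
x, obj = l1_regression(A, b)
print(x[0], obj)  # 2.0 9.0
```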

24 Two Algorithms for L_1 Regression Simplex Method and Reweighted Least Squares

L_1 regressions can be computed by solving an LP. Here's another method: set the derivative of the L_1 objective function to zero:

∂f/∂x_k (x̂) = Σ_i [ (b_i − Σ_j a_ij x̂_j) / |b_i − Σ_j a_ij x̂_j| ] (−a_ik) = 0,   k = 1, 2, ..., n

After rearranging, we get:

x̂ = (A^T E(x̂) A)^{-1} A^T E(x̂) b

where E(x̂) = Diag(ɛ(x̂))^{-1} and ɛ_i(x̂) = |b_i − Σ_j a_ij x̂_j|.

This suggests an iteratively reweighted least squares algorithm:

x^{k+1} = (A^T E(x^k) A)^{-1} A^T E(x^k) b

This is the affine-scaling method applied to the LP (as first noted by M. Meketon).
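The iteration is a few lines of numpy. In this sketch (mine, not from the talk), a small eps floor on the residuals is added to avoid division by zero, an implementation detail the formula itself leaves open:

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-8):
    """Iteratively reweighted least squares for L1 regression:
    x_{k+1} = (A' E A)^{-1} A' E b with E = diag(1 / |residual|).
    eps floors the residuals to avoid division by zero (our addition)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # start from least squares
    for _ in range(iters):
        r = np.maximum(np.abs(b - A @ x), eps)
        E = np.diag(1.0 / r)
        x = np.linalg.solve(A.T @ E @ A, A.T @ E @ b)
    return x

# Intercept-only fit: the iterates converge to the median of b.
A = np.ones((3, 1))
b = np.array([1.0, 2.0, 10.0])
x = irls_l1(A, b)
print(round(x[0], 4))  # 2.0
```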

25 Network Flows: A Symmetric View Data and Decision Variables

Data: A network (N, A) with nodes N and arcs A.
b_i, i ∈ N: supply at node i (demands are recorded as negative supplies).
c_ij, (i,j) ∈ A: cost of shipping 1 unit along arc (i,j).

Decision Variables:
x_ij, (i,j) ∈ A: quantity to ship along arc (i,j).

26 Network Flow Problem No Apparent Primal-Dual Symmetry

Primal Problem:
minimize    Σ_{(i,j)∈A} c_ij x_ij
subject to  Σ_{i:(i,k)∈A} x_ik − Σ_{j:(k,j)∈A} x_kj = −b_k,   k ∈ N
            x_ij ≥ 0,   (i,j) ∈ A

Dual Problem:
maximize    Σ_{i∈N} b_i y_i
subject to  y_j − y_i + z_ij = c_ij,   (i,j) ∈ A
            z_ij ≥ 0,   (i,j) ∈ A
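The primal is an ordinary LP, so a tiny instance can be solved with a generic solver. A sketch (made-up three-node network, mine; I write flow balance as out-minus-in = supply, one of several equivalent sign conventions):

```python
import numpy as np
from scipy.optimize import linprog

# A toy network (made-up data): nodes a, b, c; arcs with costs.
nodes = ["a", "b", "c"]
arcs = [("a", "b"), ("b", "c"), ("a", "c")]
cost = np.array([1.0, 1.0, 3.0])
supply = {"a": 1.0, "b": 0.0, "c": -1.0}   # demands as negative supplies

# Flow-balance rows: (flow out) - (flow in) = supply at each node.
A_eq = np.zeros((len(nodes), len(arcs)))
for k, (i, j) in enumerate(arcs):
    A_eq[nodes.index(i), k] += 1.0   # arc leaves i
    A_eq[nodes.index(j), k] -= 1.0   # arc enters j
b_eq = np.array([supply[v] for v in nodes])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(arcs))
print(res.fun)  # 2.0: one unit routed a -> b -> c beats the direct arc
```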

27 Tree Solution = Basic Solution

Fix a root node, say a. Primal flows on tree arcs are calculated recursively from the leaves inward. Dual variables at the nodes are calculated recursively from the root node outward along tree arcs using:

y_j − y_i = c_ij

Dual slacks on nontree arcs are calculated using:

z_ij = y_i − y_j + c_ij.
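The two recursions can be sketched as follows (my own illustration on a hypothetical three-node tree, with the convention y_j − y_i = c_ij along tree arcs):

```python
from collections import defaultdict, deque

def tree_solution(root, tree_arcs, costs, supply):
    """Flows by stripping leaves inward; duals by walking from the root
    outward with y_j - y_i = c_ij on tree arcs."""
    b = dict(supply)
    remaining = set(tree_arcs)
    flows = {}
    while remaining:
        inc = defaultdict(list)
        for arc in remaining:
            inc[arc[0]].append(arc)
            inc[arc[1]].append(arc)
        # Pick a non-root leaf of the remaining tree.
        v = next(u for u in inc if u != root and len(inc[u]) == 1)
        (i, j) = arc = inc[v][0]
        flows[arc] = b[v] if v == i else -b[v]   # ship v's net supply
        other = j if v == i else i
        b[other] += b[v]
        b[v] = 0.0
        remaining.remove(arc)
    # Duals: y[root] = 0, then y_j = y_i + c_ij along tree arcs.
    y = {root: 0.0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for (i, j) in tree_arcs:
            if i == u and j not in y:
                y[j] = y[u] + costs[(i, j)]
                queue.append(j)
            elif j == u and i not in y:
                y[i] = y[u] - costs[(i, j)]
                queue.append(i)
    return flows, y

# Hypothetical 3-node tree: supplies at b and c, demand at root a.
flows, y = tree_solution(
    root="a",
    tree_arcs=[("b", "a"), ("c", "b")],
    costs={("b", "a"): 5.0, ("c", "b"): 2.0},
    supply={"a": -3.0, "b": 1.0, "c": 2.0},
)
print(flows, y)  # x_cb = 2, x_ba = 3; y = {a: 0, b: -5, c: -7}
```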

28 Primal Network Simplex Method Used when all primal flows are nonnegative (i.e., primal feasible).

Pivot Rules:
Entering arc: Pick a nontree arc having a negative (i.e. infeasible) dual slack.
Leaving arc: Add the entering arc to make a cycle. The leaving arc is an arc on the cycle, pointing in the opposite direction to the entering arc, and, of all such arcs, the one with the smallest primal flow.

In the example: entering arc (g,e); leaving arc (d,g).

29 Primal Method Second Pivot Entering arc: (d,e). Leaving arc: (d,a).

Explanation of the leaving arc rule: Increase the flow on (d,e). Each unit increase produces a unit increase on cycle arcs pointing in the same direction and a unit decrease on cycle arcs pointing in the opposite direction. The first to reach zero will be the one pointing in the opposite direction and having the smallest flow among all such arcs.

30 Primal Method Third Pivot Entering arc: (c,g) Leaving arc: (c,e) Optimal!

31 Dual Network Simplex Method Used when all dual slacks are nonnegative (i.e., dual feasible).

Pivot Rules:
Leaving arc: Pick a tree arc having a negative (i.e. infeasible) primal flow.
Entering arc: Remove the leaving arc to split the spanning tree into two subtrees. The entering arc is an arc reconnecting the two subtrees, pointing in the opposite direction, and, of all such arcs, the one with the smallest dual slack.

In the example: leaving arc (g,a); entering arc (d,e).

32 Dual Network Simplex Method Second Pivot Leaving arc: (d,a) Entering arc: (b,c) Optimal!

33 Two-Phase Network Simplex Method Example. Turn off display of dual slacks. Turn on display of artificial dual slacks.

34 Two-Phase Method First Pivot Use dual network simplex method. Leaving arc: (d,e) Entering arc: (e,f) Dual Feasible!

35 Two-Phase Method Phase II Turn off display of artificial dual slacks. Turn on display of dual slacks.

36 Two-Phase Method Second Pivot Entering arc: (g,b) Leaving arc: (g,f)

37 Two-Phase Method Third Pivot Entering arc: (f,c) Leaving arc: (a,f) Optimal!

38 Planar Networks Symmetry Returns

Definition. A network is called planar if it can be drawn in the plane without intersecting arcs.

Theorem. Every planar network has a dual; the dual nodes are the faces of the primal network.

Notes: The dual node A is the node at infinity. The primal spanning tree is shown in red; the dual spanning tree is shown in blue (don't forget node A).

Theorem. A dual pivot on the primal network is exactly a primal pivot on the dual network.

39 An Interior-Point Method (IPM) Another Path-Following Method

Central Path: for each µ > 0 there exists a unique solution to:

Ax + w = b
A^T y − z = c
XZe = µe
YWe = µe
x, y, w, z > 0.

We start with x, y, w, z > 0 and use Newton's method to find search directions to a new point:

A Δx + Δw = ρ := b − Ax − w
A^T Δy − Δz = σ := c − A^T y + z
Z Δx + X Δz = µe − XZe
W Δy + Y Δw = µe − YWe.

The algorithm is summarized on the next page...

initialize (x, w, y, z) > 0
while (not optimal) {
    ρ = b − Ax − w
    σ = c − A^T y + z
    γ = z^T x + y^T w
    µ = 0.1 γ / (n + m)
    solve:
        A Δx + Δw = ρ
        A^T Δy − Δz = σ
        Z Δx + X Δz = µe − XZe
        W Δy + Y Δw = µe − YWe
    θ = ( r / max_{ij} { −Δx_j/x_j, −Δw_i/w_i, −Δy_i/y_i, −Δz_j/z_j } ) ∧ 1
    x ← x + θ Δx,  w ← w + θ Δw,  y ← y + θ Δy,  z ← z + θ Δz
}
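This loop can be prototyped directly in numpy by solving the full Newton system densely, which is fine for toy sizes (a practical code would reduce the system first). A sketch of my own, not from the talk, on a made-up instance:

```python
import numpy as np

def ipm(A, b, c, iters=100, r=0.9):
    """Primal-dual path-following sketch for
    max c'x s.t. Ax + w = b, x, w >= 0 (dense Newton solve; toy sizes only)."""
    m, n = A.shape
    x = np.ones(n); w = np.ones(m); y = np.ones(m); z = np.ones(n)
    for _ in range(iters):
        rho = b - A @ x - w
        sigma = c - A.T @ y + z
        gamma = z @ x + y @ w
        if gamma < 1e-10 and np.allclose(rho, 0) and np.allclose(sigma, 0):
            break
        mu = 0.1 * gamma / (n + m)
        # Assemble the Newton system in (dx, dw, dy, dz).
        K = np.zeros((2 * (n + m), 2 * (n + m)))
        rhs = np.zeros(2 * (n + m))
        K[:m, :n] = A
        K[:m, n:n + m] = np.eye(m)
        rhs[:m] = rho
        K[m:m + n, n + m:n + 2 * m] = A.T
        K[m:m + n, n + 2 * m:] = -np.eye(n)
        rhs[m:m + n] = sigma
        K[m + n:m + 2 * n, :n] = np.diag(z)
        K[m + n:m + 2 * n, n + 2 * m:] = np.diag(x)
        rhs[m + n:m + 2 * n] = mu - x * z
        K[m + 2 * n:, n:n + m] = np.diag(y)
        K[m + 2 * n:, n + m:n + 2 * m] = np.diag(w)
        rhs[m + 2 * n:] = mu - y * w
        d = np.linalg.solve(K, rhs)
        dx, dw = d[:n], d[n:n + m]
        dy, dz = d[n + m:n + 2 * m], d[n + 2 * m:]
        # Step length: keep all variables strictly positive.
        ratios = np.concatenate([-dx / x, -dw / w, -dy / y, -dz / z])
        theta = min(1.0, r / ratios.max()) if ratios.max() > 0 else 1.0
        x += theta * dx; w += theta * dw; y += theta * dy; z += theta * dz
    return x, y

# Toy instance (made up): max 3x1 + 2x2 s.t. x1 + x2 <= 4, x1 <= 2.
A = np.array([[1.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 2.0])
c = np.array([3.0, 2.0])
x, y = ipm(A, b, c)
print(c @ x)  # close to the optimum 10
```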

40 Another IPM: Affine-Scaling = Reweighted Least Squares

The affine-scaling algorithm is obtained by putting µ = 0. For problems with x free, we can also set z = 0. In this case the equation for the search direction Δx reduces to:

x + Δx = (A^T E A)^{-1} (σ + A^T E b),   where E = W Y^{-1}.

For L_1 regression it is possible to initialize so that σ = 0. It then remains zero throughout, and the new x is related to the old x by a weighted least-squares formula:

x + Δx = (A^T E A)^{-1} A^T E b.