CS675: Convex and Combinatorial Optimization Spring 2018 The Simplex Algorithm. Instructor: Shaddin Dughmi


Algorithms for Convex Optimization
We will look at two algorithms in detail: Simplex and Ellipsoid. If there is time, we might also look at interior point methods, as well as first-order methods such as gradient descent and its variants; these are important in practice.

History and Basics of the Simplex Algorithm
First methodical procedure for solving linear programs.
Developed by George Dantzig in 1947.
Considered one of the most influential algorithms of the 20th century.
Really a family of algorithms, parametrized by a pivot rule.
Efficient in practice, leading to conjectures that it runs in polynomial time.
In 1972, Klee and Minty exhibited worst-case examples that take exponential time, at least for some of the most popular simplex pivot rules.
This spurred the development of the ellipsoid method, interior point methods, ...

Outline
1 Description of The Simplex Algorithm
2 Properties
3 Initialization

Linear Programming
We consider a standard form LP, written as follows for convenience:

maximize c^T x subject to Ax ≤ b

together with its dual

minimize y^T b subject to y^T A = c^T, y ≥ 0.

We use n to denote the number of variables and m to denote the number of constraints.
Recall: the optimum occurs at a vertex, and a vertex corresponds to n linearly independent tight inequalities.
We assume we are given a starting vertex x_0 as input, and want to compute an optimal vertex x*. This is Phase II.
Phase I, finding an initial vertex, involves solving another LP. We will come back to this at the end.
Degeneracy: a vertex with more than n tight inequalities. We will mostly assume this away to save ourselves a headache.
Incidentally, the algorithm will produce an optimal dual y* as well.
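To make the vertex characterization concrete, here is a small NumPy sketch (the instance is made up for illustration, not from the lecture) that tests whether a feasible point has n linearly independent tight constraints.

```python
import numpy as np

# Example data in the form max c^T x s.t. Ax <= b (made-up toy instance).
# Feasible region: the unit square in R^2.
A = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
c = np.array([1.0, 2.0])

def is_vertex(x, A, b, tol=1e-9):
    """A feasible x is a vertex iff its tight constraints have rank n."""
    if np.any(A @ x > b + tol):
        return False                      # not even feasible
    tight = np.isclose(A @ x, b, atol=tol)
    return np.linalg.matrix_rank(A[tight]) == A.shape[1]

print(is_vertex(np.array([1.0, 1.0]), A, b))  # True: corner of the square
print(is_vertex(np.array([0.5, 0.5]), A, b))  # False: interior point
```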

Recall: Physical Interpretation of LP
Apply a force field c to a ball inside the bounded polytope Ax ≤ b.
Eventually, the ball will come to rest against the walls of the polytope.
Wall a_i^T x ≤ b_i exerts a force -y_i a_i on the ball (along its inward normal), for some y_i ≥ 0.
Since the ball is still, the forces balance: c^T = Σ_i y_i a_i^T = y^T A.
At optimality, only the walls touching the ball push (complementary slackness).
This condition is necessary and sufficient for optimality, given a dual-feasible y.
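The resting condition is exactly the optimality certificate: a dual-feasible y, supported on the tight constraints, with y^T A = c^T. A minimal sketch of checking such a certificate numerically (the data and the candidate pair are made up for illustration):

```python
import numpy as np

# Toy instance: maximize x1 + 2*x2 over the unit square (made-up data).
A = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
b = np.array([1., 1., 0., 0.])
c = np.array([1., 2.])

x = np.array([1., 1.])          # candidate primal vertex
y = np.array([1., 2., 0., 0.])  # candidate dual: only walls 0 and 1 "push"

primal_feasible = np.all(A @ x <= b + 1e-9)
dual_feasible = np.all(y >= 0) and np.allclose(y @ A, c)
# Complementary slackness: y_i > 0 only on tight constraints a_i^T x = b_i.
comp_slack = np.all((y > 1e-9) <= np.isclose(A @ x, b))

print(primal_feasible and dual_feasible and comp_slack)  # True => both optimal
```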

Informal Description
Start at the initial vertex x = x_0.
While x is not optimal, move to a neighboring vertex x' with c^T x' > c^T x.
Either c is in the cone defined by the tight constraints at x, in which case x is optimal by complementary slackness,
or else we can improve c^T x by moving along an edge (a 1-dimensional face).

Simplex Method
Input: vertex x = x_0.
Output: optimal vertex x* and a complementary dual y*, or "unbounded".
Repeat the following:
1 Write c^T = y^T A, where y_i ≠ 0 only for the n tight constraints a_i^T x = b_i.
2 If y ≥ 0, stop and return (x, y); else
3 choose i with y_i < 0, and let d be such that A_{T \ {i}} d = 0 and a_i^T d = -1.
4 If x + λd is feasible for all λ ≥ 0, stop and return "unbounded"; else
5 set x ← x + λd, for the largest λ ≥ 0 maintaining feasibility.

Notes on the steps:
Step 1: Let T be the set of tight rows. Then y is obtained by solving y_T^T A_T = c^T (Gaussian elimination), with y_j = 0 for j not in T.
Step 2: Such a y is a dual solution satisfying complementary slackness with x; therefore both are optimal.
Step 3: d is chosen so that moving in direction d preserves tightness of T \ {i} and loosens constraint i. Since A_T has full rank, null(A_{T \ {i}}) is a 1-dimensional subspace that is not orthogonal to a_i, so we can choose d in null(A_{T \ {i}}) appropriately. Moving in direction d improves the objective: c^T d = y^T A d = y_i (a_i^T d) = -y_i > 0.
Step 4: "x + λd feasible for all λ ≥ 0" means Ad ≤ 0.
Step 5: λ = min { (b_j - a_j^T x) / (a_j^T d) : j ∈ [m], a_j^T d > 0 }. A row j achieving this minimum becomes a new tight constraint, replacing i. By the nondegeneracy assumption, λ > 0.
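Here is a minimal NumPy sketch of the loop above, assuming nondegeneracy and a given starting vertex; the function name, tolerances, and tie-breaking are my own choices, not from the lecture.

```python
import numpy as np

def simplex(A, b, c, x, tol=1e-9):
    """Simplex for max c^T x s.t. Ax <= b, from a nondegenerate starting vertex x.

    Returns (x*, y*) with y* a complementary dual solution, or None if unbounded.
    Illustrative only: no anti-cycling rule and no numerical safeguards.
    """
    m, n = A.shape
    while True:
        # The n tight constraints at the current vertex (the set T).
        T = np.flatnonzero(np.isclose(A @ x, b, atol=tol))
        # Step 1: write c^T = y^T A with y supported on T, by solving A_T^T y_T = c.
        y = np.zeros(m)
        y[T] = np.linalg.solve(A[T].T, c)
        # Step 2: if y >= 0, (x, y) is a complementary primal/dual pair, hence optimal.
        if np.all(y >= -tol):
            return x, y
        # Step 3: pick a violated multiplier and the direction d with
        # A_{T\{i}} d = 0 and a_i^T d = -1 (loosen constraint i only).
        k = int(np.argmax(y[T] < -tol))        # lowest such index, Bland-style
        e = np.zeros(len(T)); e[k] = -1.0
        d = np.linalg.solve(A[T], e)
        # Step 4: if Ad <= 0 we can move forever while improving: unbounded.
        blocking = A @ d > tol
        if not np.any(blocking):
            return None
        # Step 5: ratio test, then move to the neighboring vertex.
        lam = np.min((b[blocking] - A[blocking] @ x) / (A[blocking] @ d))
        x = x + lam * d
```

On the toy unit-square instance from earlier (maximize x_1 + 2 x_2 with 0 ≤ x ≤ 1), starting from x_0 = (0, 0), this walks (0, 0) → (1, 0) → (1, 1) and returns the optimal corner together with the dual y = (1, 2, 0, 0).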

Outline
1 Description of The Simplex Algorithm
2 Properties
3 Initialization

Correctness
Claim: If the simplex algorithm terminates, then it correctly outputs either an optimal primal/dual pair or "unbounded".
Primal feasibility of x is maintained throughout.
It returns (x, y) only if y is dual feasible and satisfies complementary slackness with x, in which case x and y are both optimal.
It returns "unbounded" only if there is a direction d with c^T d > 0 and Ad ≤ 0.

Termination in the Absence of Degeneracy
Claim: In the absence of degenerate vertices, the simplex algorithm terminates in a finite number of steps, at most (m choose n) ≤ 2^m.
There are at most (m choose n) distinct vertices in the polyhedron.
In the absence of degeneracy, the simplex algorithm does not repeat a vertex:
in each iteration it moves along an edge in direction d, by λd in total, and we saw that c^T d > 0 and λ > 0, so the objective strictly improves each iteration.

Pivot Rules
Note: the algorithm we presented was not fully specified.
When multiple neighboring vertices are improving, which one should we choose so as to terminate as quickly as possible?
In the presence of degeneracy, how should we identify the next (geometric) vertex so as to guarantee termination?
We maintain n tight and linearly independent constraints T, to be thought of as an algebraic representation of a vertex (a.k.a. a basic feasible solution, or BFS).
When many algebraic representations of a single geometric vertex are possible, it is unclear how to identify the next geometric vertex.

Pivot Rules
Both concerns are addressed by the use of a pivot rule, which determines the order in which we examine algebraic vertices.
Pivot rule: a rule for selecting which i leaves T and which j enters T, when multiple choices are possible either because of multiple improving neighbors or degeneracy.
Examples:
Bland's rule: choose the lowest-indexed i, then the lowest-indexed j.
Lexicographic: maintain an order over rows, and move from T to the lexicographically smallest possible T'.
Perturbation: perturb the entries of b by a small amount to remove degeneracy. This perturbation can be purely symbolic.
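For concreteness, here is one way Bland's tie-breaking could be written as helpers compatible with the earlier sketch (a sketch under my own naming, not the lecture's implementation):

```python
import numpy as np

def bland_leaving(T, y, tol=1e-9):
    """Bland's rule, leaving choice: lowest-indexed tight row with y_i < 0."""
    return min(j for j in T if y[j] < -tol)

def bland_entering(A, b, x, d, tol=1e-9):
    """Bland's rule, entering choice: among rows attaining the ratio-test
    minimum (candidates for the new tight constraint), take the lowest index."""
    ratios = {j: (b[j] - A[j] @ x) / (A[j] @ d)
              for j in range(A.shape[0]) if A[j] @ d > tol}
    lam = min(ratios.values())
    enter = min(j for j, r in ratios.items() if np.isclose(r, lam))
    return enter, lam
```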

Runtime and Termination
Many pivot rules, like the ones we mentioned, have been shown to never cycle over algebraic vertices.
This guarantees termination in general, even in the presence of degeneracies. See the book and notes for proofs.
However, no pivot rule has been shown to guarantee a polynomial number of pivots, even in the absence of degeneracies.
In 1972, Klee and Minty exhibited a family of examples that lead to exponential worst-case runtime for some widely-used pivot rules.

Runtime and Termination
Nevertheless, one explanation for the efficiency of the simplex algorithm in practice is smoothed complexity.
Theorem (Spielman & Teng '01): The simplex algorithm has polynomial smoothed complexity.
Model of input: A, b, c are chosen arbitrarily (worst case), then subjected to small Gaussian noise with standard deviation σ (relative to the largest entry of A, b, c).
Interpretation: measurement error. More optimistic than worst case, but not quite as optimistic as average case.
Expected runtime is polynomial in n, m, and 1/σ.

Runtime and Termination
Open question: Is there a pivot rule which guarantees a polynomial number of pivots of the simplex algorithm in the worst case?
Why is this important?
It would yield a strongly polynomial algorithm for LP.
If true, it resolves in the affirmative a classic open question in polyhedral combinatorics, the polynomial Hirsch conjecture: is the diameter of the edge-vertex graph of an m-facet polytope in n-dimensional space bounded by a polynomial in n and m?

Outline
1 Description of The Simplex Algorithm
2 Properties
3 Initialization

Initialization
Solving a Linear Program via the Simplex Method
Phase I: find a vertex x_0.
Phase II: run the simplex algorithm starting from x_0.
So far, we have looked only at Phase II.
For Phase I, we pose a different LP whose optimal solution is a vertex of the original LP, if one exists.

Phase I
Original LP: maximize c^T x subject to Ax ≤ b, x ≥ 0.
New LP: minimize z subject to Ax - z·1 ≤ b, x ≥ 0, z ≥ 0 (here 1 is the all-ones vector).

If x = 0 is feasible, then it is a vertex and we are done; otherwise b_min < 0.
We write a new LP with a variable z measuring how far we are from feasibility.
If the original LP is feasible, then an optimal solution of the new LP will have z = 0 and yield a feasible solution for the original LP.
An optimal vertex of the new LP (with z = 0) will correspond to some vertex x_0 of the original LP.
We need a starting vertex for the new LP; this is easier! Take x = 0 and z = -b_min.
Running simplex on the new LP from this starting vertex, we obtain an optimal vertex with z = 0, which gives a starting vertex x_0 for the original LP.
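A minimal sketch of the Phase I construction, in the same "A u ≤ b" vertex form used by the earlier simplex sketch (all names are mine; the stacking below folds the x ≥ 0 and z ≥ 0 constraints into the constraint matrix):

```python
import numpy as np

def phase_one_data(A, b):
    """Build the Phase I LP  min z  s.t.  Ax - z*1 <= b, x >= 0, z >= 0,
    written in pure 'A_new u <= b_new' form over u = (x, z), together with
    the easy starting vertex u0 = (0, -b_min) described above."""
    m, n = A.shape
    ones = np.ones((m, 1))
    A_new = np.vstack([
        np.hstack([A, -ones]),                       # Ax - z <= b
        np.hstack([-np.eye(n), np.zeros((n, 1))]),   # -x <= 0  (x >= 0)
        np.append(np.zeros(n), -1.0),                # -z <= 0  (z >= 0)
    ])
    b_new = np.concatenate([b, np.zeros(n + 1)])
    c_new = np.append(np.zeros(n), -1.0)             # maximize -z  <=>  minimize z
    u0 = np.append(np.zeros(n), max(0.0, -b.min()))  # z0 = -b_min (or 0 if b >= 0)
    return A_new, b_new, c_new, u0
```

Running the earlier simplex sketch on (A_new, b_new, c_new) from u0 and checking that the optimal z is (near) zero then yields, per the slides, a starting vertex x_0 = u[:n] for the original LP.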