Lecture Notes 2: The Simplex Algorithm


Algorithmic Methods, 25/10/2010
Lecture Notes 2: The Simplex Algorithm
Professor: Yossi Azar    Scribe: Kiril Solovey

1 Introduction

In this lecture we present the Simplex algorithm, finish some unresolved linear-programming issues from the previous lecture, and introduce the concept of the Dual.

2 Linear Programming

Some basic definitions and theorems we covered in the previous lecture:

Polyhedron: an intersection of half-spaces.

Vertex: a feasible point that cannot be written as a convex combination of two other feasible points.

Theorem A: Let $P$ be a polyhedron defined as $P = \{x : Ax \le b\}$. Then $x$ is a vertex of $P$ iff $x$ is a feasible solution that satisfies tightly $n$ linearly independent constraints.

Theorem B: Let $P$ be a polytope (a non-empty bounded polyhedron). Then for every $x \in P$ there exists a vertex $x'$ such that $c^T x' \le c^T x$.

Basic solution: Let $A$ be an $m \times n$ matrix with $m$ linearly independent columns; together with the nonnegativity constraints, the total number of constraints is $m + n$. Choose indices $j_1 < j_2 < \dots < j_m$ of $m$ linearly independent columns; every other variable is set to $0$. Let $B$ be the matrix consisting of columns $j_1, \dots, j_m$ of $A$, and let $x_B = (x_{j_1}, \dots, x_{j_m})$. The basic solution is defined by $x_B = B^{-1}b$. If $x_B \ge 0$ then $x$ is a basic feasible solution (bfs).

Example 1:
$$x_1 + x_2 + x_3 = 8$$
$$x_2 + 2x_3 = 6$$
$(2, 6, 0)$ and $(5, 0, 3)$ are both bfs, whereas the basic solution $(0, 10, -2)$ is not feasible.

Example 2:
$$x_1 + x_2 + x_3 = 6$$
$$x_2 + 2x_3 = 6$$
$(0, 6, 0)$ is a bfs shared by the two bases $\{x_1, x_2\}$ and $\{x_2, x_3\}$. The other bfs is $(3, 0, 3)$.

Theorem C: $x$ is a vertex iff $x$ is a bfs.

Proof. If $x$ is a vertex, then by Theorem A $x$ satisfies tightly $n$ linearly independent constraints. Let $j_x$ be the set of indices of the non-zero variables of $x$, and let $A_x$ be the induced matrix.

Denote by $k$ the number of non-zero variables. Then $n - k$ of the variables are $0$ and therefore define $n - k$ tight constraints. We must have $\operatorname{rank}(A_x) = k$, since otherwise we would not get $n$ linearly independent tight constraints. Hence $k \le m$. If $k = m$ then $x$ is a bfs for the columns $j_x$ of $A$ and we are done. If $k < m$ then by adding $m - k$ independent columns we form a basis: $x$ satisfies all the constraints defined by the $m \times m$ sub-matrix $C$ of $A$, and the rest of the variables of $x$ are set to $0$. The matrix $C$ is regular, and therefore $x$ is the bfs of the columns of $C$.

For the other direction we include an alternative proof to the one given in class. Let $x$ be a bfs of $Ax = b$, $x \ge 0$, with $\operatorname{rank}(A) = m$. Suppose that $x = \lambda u + (1 - \lambda)v$ where $0 < \lambda < 1$ and $u, v$ are feasible points with $u \ne x$; in other words, assume that $x$ is not a vertex. Let $J \subseteq \{1, \dots, n\}$ be the basis corresponding to $x$, so that $x_j = 0$ for $j \notin J$; indeed, $x$ is the unique feasible vector satisfying $x_j = 0$ for all $j \notin J$. Since $u$ is feasible and $u \ne x$, there exists $j \notin J$ such that $u_j > 0$. Also $v_j \ge 0$, since $v$ is feasible. Therefore $\lambda u_j + (1 - \lambda)v_j > 0$, in contradiction to the fact that $x_j = 0$.

2.1 Yet another application of linear programming

Let $l_1, \dots, l_m$ be $m$ half-spaces in $\mathbb{R}^n$. Find a ball of maximal radius that does not intersect any $l_i$. The half-spaces are given in the following form:
$$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n \le b_1$$
$$\vdots$$
$$a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n \le b_m$$
We would like the center of such a ball to be as far as possible from each half-space. We get the following linear program, whose variables are $x_1, \dots, x_n$ and $R$:
$$\max R \quad \text{s.t.} \quad \frac{a_{i1}x_1 + a_{i2}x_2 + \dots + a_{in}x_n - b_i}{\sqrt{a_{i1}^2 + a_{i2}^2 + \dots + a_{in}^2}} \ge R \quad \forall i.$$

3 The Simplex Algorithm [Dantzig47]

In a nutshell, the algorithm traverses the vertices of the polytope, moving from one vertex to a neighbor by modifying the basis of the bfs of the current vertex.

Example:
$$\begin{pmatrix} 1 & 0 & 2 & 1 & 0 \\ 0 & 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} 7 \\ 3 \\ 6 \end{pmatrix}; \quad x_1, \dots, x_5 \ge 0;$$
$$\min 3x_1 + x_2 + x_3 + x_4 + 2x_5.$$
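Before reducing the system by hand, it is worth seeing how small the basic-solution computation really is. The following is a minimal numpy sketch (an illustration, not part of the original notes) that computes the basic solution for the basis $\{x_1, x_3, x_5\}$ used below:

```python
import numpy as np

# Data of the example LP above.
A = np.array([[1, 0, 2, 1, 0],
              [0, 1, 1, 1, 0],
              [0, 1, 1, 1, 1]], dtype=float)
b = np.array([7.0, 3.0, 6.0])

basis = [0, 2, 4]            # columns of x1, x3, x5 (0-indexed)
B = A[:, basis]              # the basis matrix A_B
x_B = np.linalg.solve(B, b)  # x_B = A_B^{-1} b

x = np.zeros(5)
x[basis] = x_B               # non-basic variables stay 0
print(x)                     # (1, 0, 3, 0, 3): a bfs, since x_B >= 0
```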

Using elementary row operations we get
$$\begin{pmatrix} 1 & 2 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 & 0 \\ 0 & 2 & 0 & 2 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \\ 3 \end{pmatrix}.$$
We will use $x_1, x_3, x_5$ as the basis and set the non-basic variables $x_2$ and $x_4$ (columns 2 and 4) to $0$. Note that columns 1, 3, 5 form the identity matrix (or, in general, a permutation matrix). The corresponding bfs is $(1, 0, 3, 0, 3)$, with target-function value $12$.

Let us increase the value of $x_4$ from $0$ to $\varphi$. The values of $x_1, x_3, x_5$ change accordingly:
$$\begin{pmatrix} x_1 \\ x_3 \\ x_5 \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \\ 3 \end{pmatrix} - \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} \varphi.$$
We get the following change in the target function:
$$\varphi - 3\varphi - \varphi - 4\varphi = -7\varphi.$$
So the target function diminishes as $x_4$ grows bigger, but we must keep in mind that we also have to satisfy the constraints $x_1, \dots, x_5 \ge 0$; this forces the constraint $\varphi \le 1$. If, on the other hand, we set $x_2 = \delta$, then $x_1 = 1 - 2\delta$, $x_3 = 3 - \delta$, $x_5 = 3 - 2\delta$, so $\delta \le 1/2$, and the change in the target function is $-10\delta$.

Let us extend the above example to a general LP.

3.1 Preliminaries

$$\min c^T x; \quad Ax = b; \quad x \ge 0.$$

Let $B$ be the basis, $x_B$ the basic variables, $x_N$ the non-basic variables, $A_B$ the matrix of basis columns (a regular matrix), and $A_N$ the matrix of non-basis columns. We get the following equivalent linear programming problem:
$$\min (c_B^T x_B + c_N^T x_N); \quad A_B x_B + A_N x_N = b; \quad x_B, x_N \ge 0.$$
Moreover, $x_B = A_B^{-1} b - A_B^{-1} A_N x_N$, and the target function can be written as
$$\min\left(c_B^T (A_B^{-1} b - A_B^{-1} A_N x_N) + c_N^T x_N\right) = \min\left(c_B^T A_B^{-1} b + (c_N^T - c_B^T A_B^{-1} A_N) x_N\right).$$
If we set $x_N = 0$ then $x_B = A_B^{-1} b$, and the value of the target function is $c_B^T A_B^{-1} b$. We define a weight vector $\bar c^T = c_N^T - c_B^T A_B^{-1} A_N$. If there exists $\bar c_j < 0$, then increasing $x_j$ (as long as $x_B \ge 0$) results in a decrease of the goal function value. If there is no limit on increasing $x_j$, the target function is unbounded; otherwise we stop once some variable in the basis reaches $0$. This process is called a pivot step.

Theorem: If $\bar c \ge 0$ then the current bfs $x$ is optimal.

Proof. For any feasible point $y$ we have
$$c^T y = c_B^T A_B^{-1} b + (c_N^T - c_B^T A_B^{-1} A_N) y_N = c_B^T A_B^{-1} b + \bar c^T y_N \ge c_B^T A_B^{-1} b,$$
since $\bar c \ge 0$ and $y_N \ge 0$. Hence $x$ is optimal.
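In code, the weight-vector computation is a few lines of linear algebra. Here is a minimal numpy sketch (the function name and interface are illustrative):

```python
import numpy as np

def reduced_costs(A, c, basis):
    """Weight vector c_bar = c_N - c_B A_B^{-1} A_N for a given basis
    (a list of m column indices of A)."""
    n = A.shape[1]
    nonbasis = [j for j in range(n) if j not in basis]
    A_B, A_N = A[:, basis], A[:, nonbasis]
    c_B, c_N = c[basis], c[nonbasis]
    # Solve A_B^T y = c_B instead of forming the inverse explicitly;
    # then c_bar_j = c_j - y^T A_j for each non-basic j.
    y = np.linalg.solve(A_B.T, c_B)
    return nonbasis, c_N - y @ A_N
```

A negative entry of the returned vector identifies a candidate entering variable; if all entries are non-negative, the theorem above certifies that the current bfs is optimal.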

3.2 Algorithm

1. Select a basis $B$ that defines a bfs.
2. Calculate the appropriate matrices ($A_B^{-1}$, $A_N$, ...).
3. If $\bar c \ge 0$ then the solution is optimal; STOP.
4. Otherwise, find an index $j$ such that $\bar c_j < 0$ and increase $x_j$ until one of the basis variables is set to $0$. If there is no such variable, the target function is unbounded. Otherwise include $j$ in the basis and remove the variable that has become $0$.

Steps 2, 3, 4 constitute the pivot step. A few questions arise: How do we select $j$ (among all $j$ such that $\bar c_j < 0$), and which variable is removed from the basis (among all variables that reach $0$)? What happens when we cannot increase $x_j$ at all? Can the algorithm get stuck in a loop?

Lemma: The algorithm terminates if the target function strictly decreases in every pivot step.

Proof. The algorithm never returns to the same solution, and the number of vertices is finite. If, on the other hand, the target function remains the same, we stay at the same vertex and might return to the same basis.

Definition (Bland's Rule): Add to the basis the variable with minimal index $j$ such that $\bar c_j < 0$. Remove from the basis the zero variable with minimal index.

The following theorem is given without proof.

Theorem: The algorithm terminates if Bland's Rule is used.

Observation: In the worst case, the algorithm traverses all the vertices, whose number might be exponential (in $m$, $n$). Typically, however, the number of iterations is small, often even linear.

Complexity (per step of the algorithm above):
1. Initialization requires special handling: deciding whether any feasible solution exists is as hard as finding an optimal solution (see Section 3.3).
2. Gaussian elimination is used in the first iteration, $O(n^3)$; subsequent iterations cost $O(n^2)$.
3. $O(n)$ (the values are already computed by the Gaussian elimination).
4. $O(n)$.
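Putting the pieces together, here is a compact, self-contained sketch of the method with Bland's rule. It is an illustration rather than a production implementation: it assumes a starting bfs is already known (finding one is exactly the initialization problem of Section 3.3 below), and the numerical tolerances are ad hoc.

```python
import numpy as np

def simplex_bland(A, b, c, basis, eps=1e-9):
    """Minimize c^T x subject to Ax = b, x >= 0, starting from the bfs
    defined by `basis` (m column indices with A_B^{-1} b >= 0).
    Returns (x, value), or None if the problem is unbounded."""
    m, n = A.shape
    basis = list(basis)
    while True:
        A_B = A[:, basis]
        x_B = np.linalg.solve(A_B, b)               # current basic values
        y = np.linalg.solve(A_B.T, c[basis])        # simplex multipliers
        # Bland's rule: entering variable = smallest j with c_bar_j < 0.
        entering = next((j for j in range(n)
                         if j not in basis and c[j] - y @ A[:, j] < -eps),
                        None)
        if entering is None:                        # c_bar >= 0: optimal
            x = np.zeros(n)
            x[basis] = x_B
            return x, float(c @ x)
        d = np.linalg.solve(A_B, A[:, entering])    # x_B changes by -d * phi
        if np.all(d <= eps):                        # phi can grow forever
            return None                             # unbounded
        # Ratio test: largest phi keeping x_B >= 0.
        phi = min(x_B[i] / d[i] for i in range(m) if d[i] > eps)
        # Bland's rule again: among the rows attaining the minimum ratio,
        # the leaving variable is the one with the smallest index.
        leave = min((i for i in range(m)
                     if d[i] > eps and x_B[i] - phi * d[i] <= eps),
                    key=lambda i: basis[i])
        basis[leave] = entering
```

For instance, on Example 1 above (`A = [[1, 1, 1], [0, 1, 2]]`, `b = (8, 6)`) with any cost vector and starting basis `[0, 1]`, the routine starts from the bfs $(2, 6, 0)$ we listed there.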

3.3 Algorithm Initialization

$$\min c^T x; \quad Ax = b \ge 0; \quad x \ge 0$$
(if there is some $b_i < 0$, we multiply the $i$-th row of $A$ by $-1$). We define a new system:
$$\begin{pmatrix} A & I \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = b; \quad \min \sum_i y_i; \quad x \ge 0, \; y \ge 0.$$

Lemma: The original system is feasible $\iff$ the new system has optimum value $0$.

Proof. Let $x$ be a feasible solution to the original system. Set $y = 0$ and observe that $(x, y)$ satisfies the new system, and the goal function attains its minimum value $0$. The other direction of the proof is similar.

Hence we can find an initial solution to the original system by solving the new system using the simplex algorithm. However, we also need to initialize the new system. Notice that $x = 0$, $y_i = b_i \ge 0$ is a bfs of the new system.

4 The Dual

We start with an example. Observe the following linear program:
$$\min 2x_1 + 4x_2 - x_3 \quad \text{s.t.}$$
(1) $x_1 + x_2 + x_3 \ge 10$
(2) $x_1 + 3x_2 - 2x_3 \ge 4$
(3) $2x_2 - 3x_3 \ge -8$

Notice that
$2\cdot(1) + 1\cdot(3)$: $\quad 2x_1 + 4x_2 - x_3 \ge 12$
$1\cdot(1) + 1\cdot(2)$: $\quad 2x_1 + 4x_2 - x_3 \ge 14$.

Thus we found lower bounds on the goal function. To find the strongest such lower bound, denote by $y_1, y_2, y_3$ the multipliers of the constraints (in the last combination $y_1 = 1$, $y_2 = 1$, $y_3 = 0$). Now we want to solve the system
$$y^T A = c^T; \quad \max y^T b; \quad y \ge 0.$$
The value of each feasible solution of the new system is at most the value of any feasible solution of the original system; it is clear that $\min c^T x \ge \max y^T b$. Constraints $x \ge 0$ in the original system transform into the constraints $y^T A \le c^T$ in the dual.

Yet another example:
$$\min 3x_1 + 2x_2 + 3x_3 = G;$$
(1) $x_1 + x_2 - x_3 = 4$
(2) $2x_1 + 4x_3 = 7$

$1\cdot(1) + 1\cdot(2)$: $\quad 3x_1 + x_2 + 3x_3 = 11 \;\Rightarrow\; G \ge 11$ (using $x \ge 0$)
$2\cdot(1) + \frac{1}{2}\cdot(2)$: $\quad 3x_1 + 2x_2 + 0\cdot x_3 = 11.5 \;\Rightarrow\; G \ge 11.5$.

Again, to get a lower bound we need to solve
$$y^T A \le c^T; \quad y \text{ unrestricted in sign}; \quad \max y^T b.$$
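To close the loop, the bound from the first example can be checked numerically: weak duality (here, in fact, strong duality) says the two optima coincide. A small sketch using scipy's linprog as an off-the-shelf solver (the solver choice is just for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# First example above: min c^T x  s.t.  Ax >= b  (x unrestricted).
A = np.array([[1, 1, 1], [1, 3, -2], [0, 2, -3]], dtype=float)
b = np.array([10.0, 4.0, -8.0])
c = np.array([2.0, 4.0, -1.0])
free = [(None, None)] * 3

# Primal: linprog expects <= constraints, so negate the rows of Ax >= b.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=free)

# Dual: max y^T b  s.t.  y^T A = c^T, y >= 0  (minimize -b^T y).
dual = linprog(-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * 3)

print(primal.fun, -dual.fun)  # both 14: the bound 1*(1) + 1*(2) is tight
```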