Recap, and outline of Lecture 13

Previously: Developed and justified all the steps in a typical iteration (a pivot) of the Simplex Method (see next page).

Today:

Simplex Method
  Initialization: Start with a basis B and a corresponding BFS x.
  Iteration: Perform a pivot starting at x. If a termination condition is met during the pivot, stop the algorithm, either with an optimal solution, or with proof of problem unboundedness. Otherwise, let B̄ and y be as computed at the end of the pivot.
  Update: Update B ← B̄ and x ← y, and repeat the iteration.

Remaining questions:
  Will the algorithm always terminate with a solution/claim of unboundedness? How long will it take?
  How do we find a starting BFS? And what if the problem is infeasible?

IOE 510: Linear Programming I, Fall 2010

An iteration of the simplex method (a pivot)

1. The iteration starts with a basis of columns A_B(1), ..., A_B(m), and an associated BFS x.
2. Compute the reduced costs c̄_j = c_j − c_B' B^{-1} A_j for all nonbasic indices j. If c̄_j ≥ 0 for all j, terminate: x is an optimal solution. Otherwise, pick some j with c̄_j < 0.
3. Compute u = B^{-1} A_j. If u ≤ 0, we have θ* = ∞. Terminate the algorithm and declare the problem unbounded.
4. If at least one component of u is positive, let θ* = min_{i : u_i > 0} x_B(i) / u_i.
5. Let l be such that θ* = x_B(l) / u_l. Form a new basis by replacing A_B(l) with A_j. If y is the new basic feasible solution, the values of the new basic variables are y_j = θ* and y_B(i) = x_B(i) − θ* u_i for i ≠ l.
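The five steps above can be sketched in code. This is a minimal dense-algebra sketch, not the course's reference implementation: the name `simplex_pivot`, the numpy representation, and the tolerance are assumptions, and B is refactorized from scratch at every call instead of being updated.

```python
import numpy as np

def simplex_pivot(A, b, c, basis, tol=1e-9):
    """One pivot: returns ('optimal', x), ('unbounded', None),
    or ('pivot', new_basis)."""
    m, n = A.shape
    B = A[:, basis]
    x_B = np.linalg.solve(B, b)                # values of the basic variables
    # Step 2: reduced costs c̄ = c - A' p, where p solves B' p = c_B
    p = np.linalg.solve(B.T, c[basis])
    reduced = c - A.T @ p
    entering = [j for j in range(n) if j not in basis and reduced[j] < -tol]
    if not entering:
        x = np.zeros(n)
        x[basis] = x_B
        return 'optimal', x
    j = entering[0]                            # smallest eligible subscript
    # Step 3: u = B^{-1} A_j
    u = np.linalg.solve(B, A[:, j])
    if np.all(u <= tol):
        return 'unbounded', None
    # Steps 4-5: ratio test, then exchange A_B(l) for A_j
    theta, l = min((x_B[i] / u[i], i) for i in range(m) if u[i] > tol)
    new_basis = list(basis)
    new_basis[l] = j
    return 'pivot', new_basis
```

For example, on min −x1 − x2 subject to x1 + x2 + x3 = 1, x ≥ 0, starting from the slack basis {3}, one pivot brings x1 into the basis and the next call reports optimality.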
Results of the Simplex Method

Theorem 3.3, Part I: Assume that the feasible set is nonempty. Then, if the simplex method terminates, at termination there are the following two possibilities:
(a) We have an optimal basis B and an associated basic feasible solution which is optimal.
(b) We have found a vector d satisfying Ad = 0, d ≥ 0, and c'd < 0, and the optimal cost is −∞.

Proof: If the algorithm stops, do we have a (correct) solution?
  If the algorithm terminates in Step 2 of some pivot, B is an optimal basis, and hence the current BFS is optimal.
  If the algorithm terminates in Step 3 of some pivot, x + θd ∈ P for all θ ≥ 0, and since c'd = c̄_j < 0, the cost of x + θd can be made arbitrarily negative by taking θ sufficiently large.

Convergence of the Simplex Method on non-degenerate problems

Theorem 3.3: Assume that the feasible set is nonempty and that every basic feasible solution is nondegenerate. Then the simplex method terminates after a finite number of iterations. At termination, there are the following two possibilities:
(a) We have an optimal basis B and an associated basic feasible solution which is optimal.
(b) We have found a vector d satisfying Ad = 0, d ≥ 0, and c'd < 0, and the optimal cost is −∞.

Proof: Remaining question: will the algorithm stop?
  In each completed pivot, θ* > 0 (since x is non-degenerate); thus
    c'y = c'x + θ* c'd = c'x + θ* c̄_j < c'x,
  so no basic feasible solution is visited twice, and therefore no basis is visited twice.
  Since there is a finite number of bases, the algorithm has to stop after a finite number of iterations!
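Part (b) can be checked numerically: any d with Ad = 0, d ≥ 0, c'd < 0 lets us walk from a feasible x along d, staying feasible while the cost drops without bound. A tiny illustration on assumed toy data (the problem below is not from the lecture):

```python
import numpy as np

# Feasible set P = {x >= 0 : x_1 - x_2 = 0}; cost c'x = -x_1.
A = np.array([[1.0, -1.0]])
c = np.array([-1.0, 0.0])
x = np.zeros(2)                       # a feasible point
d = np.array([1.0, 1.0])              # certificate: Ad = 0, d >= 0, c'd = -1 < 0

assert np.allclose(A @ d, 0.0) and np.all(d >= 0) and c @ d < 0

# x + theta*d stays feasible, and its cost theta * c'd decreases without bound.
costs = [float(c @ (x + theta * d)) for theta in (1.0, 10.0, 100.0)]
print(costs)  # -> [-1.0, -10.0, -100.0]
```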
Degenerate BFSs and the Simplex method

If, during a pivot, more than one of the original basic variables becomes 0 at the point x + θ* d, we have a tie for which basic variable should exit the basis.
  Only one of these (zero-valued) variables exits the basis. The others remain in the basis at zero level; hence the new basis is degenerate.
If the current BFS is degenerate, θ* may be equal to zero (when x_B(i) = 0 and u_i > 0 for some i). In this case we can still perform the change of basis, and Theorem 3.2 is still valid.
  The new basic variable enters the basis at value 0.
  We changed the basis, but haven't changed the corresponding BFS, i.e., haven't improved the cost.
  Unlike in Theorem 3.3, the algorithm might cycle, i.e., repeatedly visit the same basis.
See Example 3.6 for a situation where the simplex method cycles.
See Example 3.5 for a situation where the simplex method does not cycle, even though it visits a degenerate BFS.

Pivoting rules (choosing entering and leaving variables)

Entering: any x_j with c̄_j < 0 is eligible to enter.
  Choose j with the smallest (most negative) c̄_j.
  Choose j with the smallest resulting cost change θ* c̄_j. Careful: the value of θ* depends on the choice of j!
  Smallest subscript rule: choose the smallest j with c̄_j < 0.
  ... and many others.
Leaving: any x_B(i) with x_B(i) / u_i = θ* is eligible to leave.
  Smallest subscript rule: of all variables eligible to leave, choose the one with the smallest subscript (the smallest value of B(i), not the smallest value of i).
  Lexicographic rule (see Section 3.4).
Bland's rule: choose both the entering and the leaving variable according to the appropriate smallest subscript rule.
  Robert Bland, 1977: the simplex method implemented with Bland's pivoting rule never cycles; hence it is finite.
  It may stay at the same degenerate BFS for several iterations, but always with a different basis.
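The two smallest-subscript choices in Bland's rule can be sketched as two small helpers. This is an illustrative sketch; the names `bland_entering` / `bland_leaving` and the tolerance are assumptions, and the inputs (`reduced`, `x_B`, `u`, `basis`) are presumed to come from the current pivot.

```python
def bland_entering(reduced, basis, tol=1e-9):
    """Smallest subscript j with reduced cost c̄_j < 0 among nonbasic variables."""
    candidates = [j for j in range(len(reduced))
                  if j not in basis and reduced[j] < -tol]
    return min(candidates) if candidates else None   # None means optimal

def bland_leaving(x_B, u, basis, tol=1e-9):
    """Among the rows attaining the minimum ratio theta*, pick the row whose
    basic variable B(i) has the smallest subscript (smallest B(i), not smallest i)."""
    ratios = [(x_B[i] / u[i], i) for i in range(len(u)) if u[i] > tol]
    theta = min(r for r, _ in ratios)
    tied = [i for r, i in ratios if abs(r - theta) <= tol]
    return min(tied, key=lambda i: basis[i])
```

Note how the leaving rule breaks the degenerate tie by the variable's subscript B(i): with x_B = (0, 0), u = (1, 1), and basic variables (x5, x2), it picks row 1, because B(1) = 2 < B(0) = 5.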
Number of iterations needed by the simplex method

Computational effort/running time of an algorithm: work per iteration × number of iterations.
In practice the simplex method is extremely fast.
  Conventional wisdom suggests that the number of iterations is usually proportional to the number of constraints,
  unless one is solving special "bad" LPs.
The number of iterations in the worst case is exponential (Klee and Minty, 1972).
  Theorem 3.5 considers an LP over a perturbed cube in R^n and the simplex method initialized at a BFS adjacent to the optimal one. For a particular pivoting rule the simplex method requires 2^n − 1 pivots on this example.
  But under a different pivoting rule on this example, we only need one pivot!
  Is there an exponential example for any pivoting rule?
Hirsch Conjecture: Δ(n, m) ≤ m − n, where Δ(n, m) is the maximal diameter of all bounded polyhedra in R^n that can be represented by m inequalities.
  The known result: Δ(n, m) ≤ (2n)^{log_2 m}.

Finding an initial BFS/proving infeasibility

Consider the problem
  (LP)  min c'x  s.t.  Ax = b,  x ≥ 0.
WLOG, b ≥ 0.
We are not given a starting BFS. We don't even know if the problem is feasible.
Introduce y ∈ R^m (artificial variables) and consider the auxiliary problem:
  (AUX)  min y_1 + ... + y_m  s.t.  Ax + Iy = b,  x ≥ 0,  y ≥ 0.
There is an obvious BFS for the auxiliary problem (x = 0, y = b), so it can be solved with the simplex method.
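Building (AUX) from (LP) is mechanical: flip any rows with negative b, append an identity block for the artificial variables, and start from the basis of those artificial columns. A sketch under assumed names (`auxiliary_problem` is not from the lecture):

```python
import numpy as np

def auxiliary_problem(A, b):
    """Build (AUX): min sum(y) s.t. Ax + Iy = b, x >= 0, y >= 0, with b >= 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    sign = np.where(b < 0, -1.0, 1.0)     # multiply rows by -1 so that b >= 0
    A, b = sign[:, None] * A, sign * b
    m, n = A.shape
    A_aux = np.hstack([A, np.eye(m)])     # constraint matrix [A | I]
    c_aux = np.concatenate([np.zeros(n), np.ones(m)])
    basis = list(range(n, n + m))         # obvious BFS: x = 0, y = b
    return A_aux, b, c_aux, basis
```

The returned basis B = I makes (x, y) = (0, b) a BFS of (AUX) by construction, so the simplex method can start immediately.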
Solving the auxiliary problem

  (AUX)  min y_1 + ... + y_m  s.t.  Ax + Iy = b,  x ≥ 0,  y ≥ 0.

Possible outcomes of applying the simplex method to (AUX):
  (AUX) cannot be unbounded.
  If the optimal cost of (AUX) is > 0, the original LP is infeasible.
  If the optimal cost of (AUX) is 0, the original LP is feasible.
    If after termination all artificial variables are non-basic, we have found a BFS of the original LP.
    If after termination some artificial variables are basic (at zero level), we need to drive them out of the basis to obtain a BFS of the original LP.

Driving an artificial variable out of the basis

Suppose solving (AUX) leads to a basis B consisting of k < m columns of the original matrix A, namely A_B(i_1), ..., A_B(i_k), and m − k columns corresponding to artificial variables (at zero level).
If A has rank m, it is possible to select m − k additional columns of A to form a basis corresponding to a (degenerate) BFS of (LP). How? (Recall: we need linear independence of the basic columns!)
  Suppose the lth basic variable is an artificial variable. Among the vectors B^{-1} A_j for the original columns A_j not already in the basis (j ∉ {B(i_1), ..., B(i_k)}), find one that has a non-zero lth component. If found, have the corresponding (original) variable replace the artificial one in the basis.
  Claim: the new column and the columns of A already in the basis are linearly independent.
  Update B; now there is one fewer artificial basic variable.
What if (B^{-1} A_j)_l = 0 for all such original columns A_j? Then the lth constraint was redundant, i.e., rank(A) < m.
  Remove the lth constraint from (LP) and (AUX); m ← m − 1.
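The driving-out step above can be sketched directly: scan the original columns, test the lth component of B^{-1}A_j, and either swap or report the row redundant. The function name `drive_out` and its interface are illustrative assumptions, not the course's code.

```python
import numpy as np

def drive_out(A_aux, basis, l, n_orig, tol=1e-9):
    """Try to replace the artificial variable in basic position l with one of
    the first n_orig (original) columns; otherwise report row l redundant."""
    B = A_aux[:, basis]
    for j in range(n_orig):                   # original columns only
        if j in basis:
            continue
        u = np.linalg.solve(B, A_aux[:, j])   # u = B^{-1} A_j
        if abs(u[l]) > tol:                   # nonzero lth component: swap
            new_basis = list(basis)
            new_basis[l] = j
            return 'swapped', new_basis
    return 'redundant_row', l                 # rank(A) < m; drop constraint l
```

Since the artificial variable being replaced sits at zero level, the swap does not change the BFS itself, only the basis describing it.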
Two-phase simplex method: Phase I

Given an LP in standard form,
1. Ensure that b ≥ 0 (multiply some constraints by −1).
2. Introduce artificial variables y ∈ R^m and apply the simplex method to the auxiliary problem of minimizing Σ_{i=1}^m y_i.
3. If the optimal cost of the auxiliary problem is > 0, the original problem is infeasible; terminate.
4. If the optimal cost of the auxiliary problem is 0, the original problem is feasible. If no artificial variables are in the final basis, a BFS for the original problem is available.
5. If the lth basic variable is an artificial one, examine the lth entry of the vectors B^{-1} A_j, j = 1, ..., n. If all these entries are 0, the lth row represents a redundant constraint, and is eliminated. If the lth entry of some such vector is non-zero, change the basis, with A_j becoming the lth basic column. Repeat until no artificial variables remain in the basis.

Two-phase simplex method: Phase II

1. Let the final basis obtained from Phase I be the initial basis for Phase II. By construction, the corresponding basic solution is a BFS.
2. Compute the reduced costs of all variables for this initial basis, using the cost coefficients of the original problem.
3. Apply the simplex method to the original problem.

Now we have a complete method, assuming that we use a pivoting rule that avoids cycling. In particular:
  If the problem is infeasible, it is detected in Phase I.
  If the problem is feasible but rank(A) < m, this is detected and corrected at the end of Phase I, by identifying and eliminating redundant equality constraints.
  If the optimal cost is equal to −∞, this is detected while running Phase II.
  Else, Phase II terminates with an optimal solution.
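The two phases can be wired together in a compact sketch. This is a toy illustration, not production code: it uses a bare simplex loop with Bland's rule, the names `_simplex` / `two_phase` are assumptions, and Step 5 (driving artificials out / removing redundant rows) is deliberately omitted and guarded by an assertion.

```python
import numpy as np

def _simplex(A, b, c, basis, tol=1e-9):
    """Bare-bones simplex with Bland's rule; returns (status, x, basis)."""
    m, n = A.shape
    while True:
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)
        p = np.linalg.solve(B.T, c[basis])
        reduced = c - A.T @ p
        enter = [j for j in range(n) if j not in basis and reduced[j] < -tol]
        if not enter:
            x = np.zeros(n)
            x[basis] = x_B
            return 'optimal', x, basis
        j = min(enter)                        # Bland: smallest subscript
        u = np.linalg.solve(B, A[:, j])
        if np.all(u <= tol):
            return 'unbounded', None, basis
        theta, l = min((x_B[i] / u[i], i) for i in range(m) if u[i] > tol)
        basis = list(basis)
        basis[l] = j

def two_phase(A, b, c, tol=1e-9):
    A = np.asarray(A, float); b = np.asarray(b, float); c = np.asarray(c, float)
    sign = np.where(b < 0, -1.0, 1.0)         # Step 1: make b >= 0
    A, b = sign[:, None] * A, sign * b
    m, n = A.shape
    # Phase I: auxiliary problem; (AUX) is never unbounded.
    A1 = np.hstack([A, np.eye(m)])
    c1 = np.concatenate([np.zeros(n), np.ones(m)])
    _, x1, basis = _simplex(A1, b, c1, list(range(n, n + m)), tol)
    if c1 @ x1 > tol:
        return 'infeasible', None             # Step 3
    # Step 5 (driving artificials out) is omitted in this sketch.
    assert all(j < n for j in basis), "artificials left in basis (not handled)"
    # Phase II: original costs, starting from the Phase-I basis.
    status, x, _ = _simplex(A, b, c, basis, tol)
    return status, x
```

For example, `two_phase([[1, 1, 1]], [1], [-1, -1, 0])` finds the optimum of the earlier toy LP, while `two_phase([[1], [1]], [1, 2], [0])` correctly reports infeasibility of x = 1, x = 2.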