Math Models of OR: The Simplex Algorithm: Practical Considerations
John E. Mitchell
Department of Mathematical Sciences, RPI, Troy, NY 12180 USA
September 2018
Outline
1 Initialization and termination
2 Tolerances
3 Pivoting rules
4 Preprocessing
5 Free variables
Initialization and termination

The Phases of the Simplex Algorithm

As presented, the simplex algorithm solves a linear optimization problem by:
- converting it into standard form,
- Phase I: finding an equivalent canonical form, usually through the method of artificial variables, and
- Phase II: pivoting from basic feasible solution to neighboring basic feasible solution until reaching either optimal form or unbounded form.

When you formulate a problem to feed to a solver, you shouldn't convert it to standard form. Solvers can efficiently exploit simple bounds, free variables, and inequality constraints; see later.
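As an illustration of the last point, here is a small sketch. Python and SciPy's linprog are an assumption for illustration only (the slides do not name a solver or a language): simple bounds and the inequality constraint are handed to the solver as-is, rather than being converted to standard form first.

```python
# Sketch: pass bounds and inequalities to a solver directly instead of
# converting to standard form. SciPy's linprog is used for illustration.
from scipy.optimize import linprog

c = [-1.0, -2.0]              # minimize -x1 - 2 x2
A_ub = [[1.0, 1.0]]           # x1 + x2 <= 4, kept as an inequality row
b_ub = [4.0]
bounds = [(0, 3), (0, None)]  # simple bounds stay as bounds, not as rows

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)         # optimum at x = (0, 4), objective -8
```

The solver's presolve and bounded-variable handling exploit this structure; adding slack rows and sign-split variables by hand would only make the problem larger.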
Terminating the algorithm

We have seen that the simplex algorithm may cycle between multiple basic sequences that give the same extreme point. Modern LP solvers have built-in mechanisms to help escape such cycling, using perturbation techniques involving the variable bounds.

Given an m x n constraint matrix A of rank m, any basic feasible solution has m basic variables. So the number of possible basic feasible solutions is no larger than the binomial coefficient C(n, m) = n! / (m! (n-m)!). Thus, provided cycling is prevented, the simplex algorithm terminates in a finite number of iterations. Modern solvers routinely solve problems with millions of variables, even on a laptop.
2 Tolerances
Roundoff errors

Numbers are represented to finite precision on a computer, and combining finite-precision representations of numbers may lead to additional roundoff errors. Typically, 10^-7 is regarded as machine single precision, while 10^-16 is double precision. A computer cannot readily return solutions that are more accurate than these precision values.
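A two-line experiment (Python, illustrative only) shows both the double-precision machine epsilon and a familiar roundoff effect:

```python
# Finite precision: IEEE doubles carry about 16 significant decimal digits,
# so arithmetic introduces roundoff at roughly that scale.
import sys

eps = sys.float_info.epsilon   # about 2.2e-16 for IEEE double precision
print(eps)

x = 0.1 + 0.2
print(x == 0.3)                # False: both sides carry roundoff
print(abs(x - 0.3) < 1e-12)    # True: equal within a tolerance
```

This is why solvers compare quantities against tolerances rather than testing exact equality, as described next.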
Tolerances

Modern optimization solvers have several tolerances for declaring that a solution is optimal. The principal tolerances are:

- Feasibility: Do the variables obey their bounds? Are the constraints satisfied? The default tolerance for CPLEX is 10^-6. Note that the tolerance might be an absolute value, or it might be a relative tolerance. A relative tolerance scales the error by the given bound or value. For example, if we require sum_{j=1}^n a_ij x_j = b_i, the relative tolerance test is

      | sum_{j=1}^n a_ij x_j - b_i | / max{1, |b_i|} <= tolerance.

- Optimality: the reduced costs need to be nonnegative to conclude a solution is optimal. This is relaxed to requiring that all the reduced costs are greater than some (slightly negative) tolerance. The default in CPLEX is to require that all reduced costs be no smaller than -10^-9.
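The relative feasibility test can be written as a small helper. This is an illustrative Python sketch; the 10^-6 default mirrors the CPLEX value quoted above:

```python
# Relative feasibility check for a single equality constraint
# sum_j a_ij x_j = b_i, using the scaled residual from the slide.
def relative_residual(a_row, x, b):
    """|a_row . x - b| divided by max(1, |b|)."""
    lhs = sum(a * xj for a, xj in zip(a_row, x))
    return abs(lhs - b) / max(1.0, abs(b))

def is_feasible(a_row, x, b, tol=1e-6):   # 1e-6 mirrors the CPLEX default
    return relative_residual(a_row, x, b) <= tol

# A point violating x1 + 3 x4 - 2 x6 = 5 by 4e-7 still passes the test:
print(is_feasible([1.0, 3.0, -2.0], [5.0000004, 1.0, 1.5], 5.0))   # True
```

Scaling by max{1, |b_i|} keeps the test meaningful for both tiny and huge right-hand sides.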
Pivot elements

Internally, the algorithm needs to ensure that it does not choose a pivot element that is too close to zero, which would lead to an accumulation of roundoff errors.
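One place this shows up is the ratio test. The sketch below (illustrative Python; the tolerance value is an assumption, not a solver default) simply skips candidate rows whose pivot entry is too small:

```python
# Sketch of a ratio test that rejects candidate pivots too close to zero.
PIVOT_TOL = 1e-7   # illustrative; real solvers tune such tolerances

def leaving_row(column, rhs):
    """Min-ratio test over rows with a safely positive pivot entry."""
    best_row, best_ratio = None, float("inf")
    for i, (a, b) in enumerate(zip(column, rhs)):
        if a > PIVOT_TOL:              # tiny pivots amplify roundoff
            ratio = b / a
            if ratio < best_ratio:
                best_row, best_ratio = i, ratio
    return best_row                    # None signals an unbounded direction

# The 1e-12 entry is treated as zero and never chosen as a pivot:
print(leaving_row([2.0, 1e-12, 0.5], [4.0, 1.0, 3.0]))   # row 0
```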
3 Pivoting rules
Pivoting rules

The original pivot rule for choosing the entering variable is to choose the most negative reduced cost. Other rules require more work per iteration but typically reduce the number of iterations. These include:

- Best improvement: choose the incoming variable that leads to the best improvement in the objective function value.
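The original (Dantzig) rule can be sketched in a few lines (illustrative Python; best improvement would additionally compute the achievable step length for each candidate, hence the extra work per iteration):

```python
# Dantzig rule: enter the variable with the most negative reduced cost.
def entering_dantzig(reduced_costs, tol=1e-9):
    j = min(range(len(reduced_costs)), key=lambda k: reduced_costs[k])
    return j if reduced_costs[j] < -tol else None   # None means optimal

print(entering_dantzig([0.0, -3.0, -1.0, 2.0]))   # index 1
```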
Steepest edge

- Steepest edge: choose the incoming variable whose simplex direction makes the most acute angle with the objective function c.

[Figure: the feasible region of the two-variable problem min 3x_1 - x_2 subject to x_1 - 3x_2 <= 2, 3x_1 + 3x_2 <= 10, x_1, x_2 >= 0, with the objective direction c and the steepest edge leaving the vertex (0, 0) marked.]
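In its plainest form, the rule compares the normalized inner products of the candidate edge directions with c, and picks the most negative one. This is an illustrative Python sketch; practical steepest-edge codes maintain approximate reference weights rather than recomputing the norms of the directions each iteration:

```python
# Steepest edge, sketched: among candidate edge directions d_j, pick the one
# whose normalized inner product with c is most negative, i.e. the direction
# making the most acute angle with the improving direction -c.
from math import sqrt

def steepest_edge(c, directions):
    def score(d):
        norm = sqrt(sum(di * di for di in d))
        return sum(ci * di for ci, di in zip(c, d)) / norm
    return min(range(len(directions)), key=lambda j: score(directions[j]))

c = [3.0, -1.0]
edges = [[1.0, 0.0], [0.0, 1.0]]   # the two edges leaving the origin
print(steepest_edge(c, edges))     # edge along x2: score -1 beats +3
```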
Dual simplex

- Dual simplex: Later, we will see the dual simplex algorithm, which can work very well, especially with a steepest edge pivot rule.
Partial pricing

One method to reduce computational cost is partial pricing: instead of examining all the reduced costs, we examine a subset and choose the incoming variable from this subset. If all the reduced costs in the subset are nonnegative, then we examine some of the remaining reduced costs.
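A minimal sketch of this idea (illustrative Python; real implementations rotate through the blocks and cache pricing information) scans the reduced costs one block at a time:

```python
# Partial pricing, sketched: examine reduced costs a block at a time and
# take the best candidate from the first block that contains one.
def entering_partial(reduced_costs, block=100, tol=1e-9):
    n = len(reduced_costs)
    for start in range(0, n, block):
        chunk = reduced_costs[start:start + block]
        j = min(range(len(chunk)), key=lambda k: chunk[k])
        if chunk[j] < -tol:
            return start + j
    return None   # no negative reduced cost anywhere: optimal

# Returns index 1 from the first block, never pricing the -4.0 at index 3:
print(entering_partial([0.0, -0.5, 1.0, -4.0], block=2))
```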
4 Preprocessing
Preprocessing

Commercial solvers preprocess linear optimization problems before solving them, looking for logical implications that allow them to shrink the size of the problem. For example, they look for variables that can be fixed or constraints that are redundant. These steps are especially useful for integer optimization problems.

Also useful sometimes is rescaling the problem, so that the numbers in different columns of the constraint matrix are not too widely divergent from one another. For example, if all the numbers in one column are expressed in terms of 10^3 and in another column in terms of 10^4, then the columns can be rescaled.
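A simplified column-scaling sketch (illustrative Python; commercial solvers use more refined schemes such as geometric-mean scaling, and this max-based version is only an assumption for illustration):

```python
# Column rescaling, sketched: divide each column by its largest absolute
# entry and substitute x_j' = s_j * x_j, so all columns have comparable
# magnitudes afterwards.
def scale_columns(A):
    n = len(A[0])
    scales = [max(abs(A[i][j]) for i in range(len(A))) or 1.0
              for j in range(n)]
    scaled = [[A[i][j] / scales[j] for j in range(n)]
              for i in range(len(A))]
    return scaled, scales

A = [[2000.0, 0.03], [4000.0, 0.01]]   # columns differ by ~5 orders
scaled, scales = scale_columns(A)
print(scaled)                           # all entries now lie in [-1, 1]
```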
5 Free variables
Handling upper bounds

We've previously seen that the simplex algorithm can handle upper bounds on variables without needing to introduce explicit slack variables.
Free variables

Free variables are variables that are unrestricted in sign. Such a variable can be eliminated from a linear optimization problem.

The free variable appears in an equality constraint: For example, suppose we have a constraint x_1 + 3x_4 - 2x_6 = 5, where x_1 is a free variable. Then, for any values of x_4 and x_6, we can set x_1 = 5 - 3x_4 + 2x_6, and we don't have to worry about the sign of x_1. So we can obtain an equivalent linear optimization problem by replacing x_1 by 5 - 3x_4 + 2x_6 in all the other constraints and in the objective function. The original constraint x_1 + 3x_4 - 2x_6 = 5 can then be deleted.
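The substitution is a simple row operation. In this illustrative Python sketch, the second constraint (2x_1 + x_4 + x_6 <= 7) is hypothetical, made up to show the mechanics; only the equality x_1 + 3x_4 - 2x_6 = 5 comes from the slide:

```python
# Eliminating a free variable: from x1 + 3 x4 - 2 x6 = 5 we get
# x1 = 5 - 3 x4 + 2 x6, and substitute this wherever x1 appears.
def substitute(row, rhs, coef_x1, expr_coefs, expr_const):
    """Replace x1 in (row, rhs), using x1 = expr_const + expr_coefs.(x4, x6)."""
    new_row = [a + coef_x1 * e for a, e in zip(row, expr_coefs)]
    new_rhs = rhs - coef_x1 * expr_const
    return new_row, new_rhs

# x1 = 5 - 3 x4 + 2 x6: constant 5, coefficients (-3, +2) on (x4, x6).
# Hypothetical second constraint: 2 x1 + x4 + x6 <= 7.
row, rhs = substitute([1.0, 1.0], 7.0, 2.0, [-3.0, 2.0], 5.0)
print(row, rhs)   # constraint in (x4, x6) only: -5 x4 + 5 x6 <= -3
```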
Free variables, part 2

The free variable appears only in inequality constraints: Assume we write all the inequality constraints as <= constraints, so they all have the form

    sum_{j=1}^n a_ij x_j <= b_i.

Assume x_1 is a free variable. There are some situations where we can make dramatic simplifications to the problem:

- a_i1 >= 0 for all constraints and c_1 = 0: In this case, all the constraints with a nonzero a_i1 coefficient are redundant, since these constraints can all be satisfied by taking x_1 sufficiently negative.
- a_i1 >= 0 for all constraints and c_1 > 0: In this case, the problem has an unbounded optimal value, provided it is feasible: we can drive x_1 -> -infinity.
- a_i1 <= 0 for all constraints and c_1 = 0 or c_1 < 0: similar to the two previous cases, with now x_1 -> +infinity.
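These sign checks are mechanical, as the following illustrative Python sketch shows (the function name and return strings are made up for this example; a real presolve would act on the model rather than report):

```python
# Classifying the free-variable cases above: inspect the column of
# coefficients a_i1 and the objective coefficient c1 (minimization,
# all constraints written as <=).
def classify_free_column(a_col, c1):
    if all(a >= 0 for a in a_col) and c1 == 0:
        return "rows with a_i1 != 0 are redundant"   # take x1 very negative
    if all(a >= 0 for a in a_col) and c1 > 0:
        return "unbounded if feasible"               # drive x1 -> -infinity
    if all(a <= 0 for a in a_col) and c1 <= 0:
        return "redundant rows / unbounded via x1 -> +infinity"
    return "no simplification from sign pattern alone"

print(classify_free_column([1.0, 0.0, 2.0], 0.0))
```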
Free variables, part 3

If we are not in one of these simpler subcases, then we may well need to introduce slack variables into the inequality constraints and then eliminate the free variable.