Math Models of OR: The Simplex Algorithm: Practical Considerations


Math Models of OR: The Simplex Algorithm: Practical Considerations
John E. Mitchell
Department of Mathematical Sciences, RPI, Troy, NY 12180 USA
September 2018
Mitchell Simplex Algorithm: Practical Considerations 1 / 20

Outline
1 Initialization and termination
2 Tolerances
3 Pivoting rules
4 Preprocessing
5 Free variables

Initialization and termination: The Phases of the Simplex Algorithm

As presented, the simplex algorithm solves a linear optimization problem by
converting it into standard form;
Phase I: finding an equivalent canonical form, usually through the method of artificial variables; and
Phase II: pivoting from basic feasible solution to neighboring basic feasible solution until reaching either optimal form or unbounded form.

When you formulate a problem to feed to a solver, you shouldn't convert it to standard form. Solvers can efficiently exploit simple bounds, free variables, and inequality constraints; see later.

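Although the slide advises against converting a model to standard form yourself, the Phase I setup is mechanical and worth seeing once. The following is a minimal sketch of assembling the artificial-variable Phase I data for a system Ax = b, x >= 0 (the function name `build_phase1` is illustrative, not from the slides):

```python
# Sketch: constructing the Phase I problem for Ax = b, x >= 0.
# One artificial variable is appended per row; Phase I minimizes their sum.

def build_phase1(A, b):
    m, n = len(A), len(A[0])
    # Flip rows with negative right-hand side so that b >= 0; then the
    # artificial variables give an obvious starting basic feasible solution.
    A2, b2 = [], []
    for i in range(m):
        if b[i] < 0:
            A2.append([-a for a in A[i]])
            b2.append(-b[i])
        else:
            A2.append(list(A[i]))
            b2.append(b[i])
    # Append the m x m identity for the artificial variables.
    for i in range(m):
        A2[i] += [1.0 if j == i else 0.0 for j in range(m)]
    # Phase I cost: 0 on the original variables, 1 on the artificials.
    c = [0.0] * n + [1.0] * m
    return A2, b2, c
```

At the Phase I optimum, objective value zero means the original problem is feasible and Phase II can start from the resulting basis.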

Initialization and termination: Terminating the algorithm

We have seen that the simplex algorithm may cycle between multiple basic sequences that give the same extreme point. Modern LP solvers have built-in mechanisms to help escape such cycling, using perturbation techniques involving the variable bounds.

Given an m x n constraint matrix A of rank m, any basic feasible solution has m basic variables, so the number of possible basic feasible solutions is no larger than (n choose m). Thus, with an anti-cycling safeguard, the simplex algorithm converges in a finite number of iterations. Modern solvers routinely solve problems with millions of variables, even on a laptop.
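The (n choose m) bound on the number of basic feasible solutions is easy to evaluate; a quick sketch using Python's standard library:

```python
import math

# Upper bound on the number of basic feasible solutions of an m x n
# system of rank m: the number of ways to choose m basic variables.
def bfs_bound(n, m):
    return math.comb(n, m)

# Even modest sizes show why simplex must do better than enumeration:
# a 50 x 100 system already has more than 10^29 candidate bases.
```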


Tolerances: Roundoff errors

Numbers are represented to a finite precision on a computer, and combining finite-precision representations of numbers may lead to additional roundoff errors. Typically, 10^-7 is regarded as machine single precision, while 10^-16 is double precision. A computer cannot readily return solutions that are more accurate than these precision values.
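These precision limits are easy to observe directly. A small illustration in Python, whose floats are IEEE double precision:

```python
import sys

# Machine epsilon for double precision is about 2.2e-16,
# matching the 10^-16 figure quoted above.
eps = sys.float_info.epsilon

# Classic roundoff example: 0.1, 0.2, and 0.3 are not exactly
# representable in binary, so the "obvious" equality fails...
exact = (0.1 + 0.2 == 0.3)            # False
# ...but the two values agree to within a tolerance, which is why
# solvers test feasibility and optimality with tolerances rather
# than exact comparisons.
close = abs((0.1 + 0.2) - 0.3) < 1e-15  # True
```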

Tolerances: Tolerances

Modern optimization solvers have several tolerances for declaring that a solution is optimal. The principal tolerances are:

feasibility: Do the variables obey their bounds? Are the constraints satisfied? The default tolerance for CPLEX is 10^-6. Note that the tolerance might be an absolute value, or it might be a relative tolerance. A relative tolerance scales the error by the given bound or value. For example, if we require sum_{j=1}^n a_ij x_j = b_i, the relative violation is
|sum_{j=1}^n a_ij x_j - b_i| / max{1, |b_i|}.

optimality: the reduced costs need to be nonnegative to conclude a solution is optimal. This is relaxed to requiring that all the reduced costs exceed a small negative tolerance; the default in CPLEX is to require that all reduced costs be no smaller than -10^-9.

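The relative feasibility test for an equality constraint can be sketched as a small helper (the names are illustrative; the 1e-6 default mirrors the CPLEX value quoted on the slide):

```python
# Sketch: relative violation of the constraint sum_j a_ij x_j = b_i,
# scaled by max(1, |b_i|) as described on the slide.

def relative_violation(a_row, x, b_i):
    lhs = sum(a * xv for a, xv in zip(a_row, x))
    return abs(lhs - b_i) / max(1.0, abs(b_i))

def feasible(a_row, x, b_i, tol=1e-6):
    """True if the point x satisfies the constraint within tolerance."""
    return relative_violation(a_row, x, b_i) <= tol
```

A point violating the constraint by a few parts in 10^7 passes, while a violation of a few percent fails, which matches how a solver's feasibility report should be read.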

Tolerances: Pivot elements

Internally, the algorithm needs to ensure that it does not choose a pivot element that is too close to zero, which would lead to accumulation of roundoff errors.
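A hedged sketch of such a safeguard: among the rows that pass the ratio test, prefer a large pivot magnitude and skip candidates below a threshold (`PIVOT_TOL` is an illustrative value, not any particular solver's default):

```python
# Sketch: reject pivot candidates smaller in magnitude than a tolerance.
PIVOT_TOL = 1e-7

def choose_pivot_row(column, eligible):
    """column: entries of the entering column; eligible: flags marking
    rows that pass the ratio test. Returns the eligible row with the
    largest pivot magnitude above PIVOT_TOL, or None if none qualifies."""
    best, best_row = 0.0, None
    for i, (a, ok) in enumerate(zip(column, eligible)):
        if ok and abs(a) > max(PIVOT_TOL, best):
            best, best_row = abs(a), i
    return best_row
```

If every eligible pivot is tiny, a production code would refactorize the basis or perturb the problem rather than divide by a near-zero number.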


Pivoting rules: Pivoting rules

The original pivot rule for choosing the entering variable is to choose the variable with the most negative reduced cost. Other rules require more work per iteration but typically reduce the number of iterations; these include:

best improvement: choose the incoming variable that leads to the best improvement in the objective function value.
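The two rules can be sketched on raw reduced-cost lists (a minimal illustration for a minimization problem; for best improvement we assume the ratio-test step lengths are already known):

```python
# Dantzig rule: pick the most negative reduced cost.
def dantzig(reduced_costs, tol=1e-9):
    j_best, best = None, -tol
    for j, rc in enumerate(reduced_costs):
        if rc < best:
            j_best, best = j, rc
    return j_best  # None means optimal within tolerance

# Best improvement: pick the variable whose pivot most decreases the
# objective; the decrease is |reduced cost| times the step length
# determined by the ratio test, so it costs a ratio test per candidate.
def best_improvement(reduced_costs, step_lengths, tol=1e-9):
    j_best, best = None, 0.0
    for j, (rc, step) in enumerate(zip(reduced_costs, step_lengths)):
        if rc < -tol and -rc * step > best:
            j_best, best = j, -rc * step
    return j_best
```

Note how the two rules can disagree: a mildly negative reduced cost paired with a long step can beat the most negative reduced cost.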

Pivoting rules: Steepest edge

steepest edge: choose the incoming variable whose simplex direction makes the most acute angle with the objective function vector c.

[Figure: the feasible region of min -3x_1 - x_2 subject to x_1 - 3x_2 <= 2, 3x_1 + 3x_2 <= 10, x_1, x_2 >= 0, with the steepest edge out of (0, 0) drawn against the objective direction c.]
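The angle criterion can be sketched as scoring each candidate by its reduced cost divided by the norm of its simplex direction; this is an illustrative simplification, since production implementations maintain the direction norms incrementally rather than recomputing them:

```python
import math

# Sketch: steepest-edge selection for a minimization problem.
# directions[j] is the simplex direction obtained by increasing
# candidate variable j; a larger score means a more acute angle with c.
def steepest_edge(reduced_costs, directions, tol=1e-9):
    j_best, best = None, 0.0
    for j, (rc, d) in enumerate(zip(reduced_costs, directions)):
        if rc < -tol:
            score = -rc / math.sqrt(sum(v * v for v in d))
            if score > best:
                j_best, best = j, score
    return j_best
```

With equal reduced costs, the candidate whose direction has the smaller norm wins, exactly the normalization that distinguishes steepest edge from the Dantzig rule.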

Pivoting rules: Dual simplex

dual simplex: Later, we will see the dual simplex algorithm, which can work very well, especially with a steepest edge pivot rule.

Pivoting rules: Partial pricing

One method to reduce computational cost is partial pricing: instead of examining all the reduced costs, we examine a subset and choose the incoming variable from this subset. If all the reduced costs in the subset are nonnegative, then we examine some of the remaining reduced costs.
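A minimal sketch of partial pricing over fixed-size blocks of columns (real solvers use more elaborate candidate-list schemes, but the stop-at-first-promising-block idea is the same):

```python
# Sketch: scan the reduced costs one block at a time and stop at the
# first block containing a candidate, instead of pricing every column.
def partial_price(reduced_costs, block_size=2, tol=1e-9):
    n = len(reduced_costs)
    for start in range(0, n, block_size):
        block = reduced_costs[start:start + block_size]
        candidates = [start + j for j, rc in enumerate(block) if rc < -tol]
        if candidates:
            # Dantzig rule applied within the current block only.
            return min(candidates, key=lambda j: reduced_costs[j])
    return None  # no negative reduced cost anywhere: optimal
```

Only when every block so far has been priced out does the scan move on, so most iterations touch a small fraction of the columns.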


Preprocessing: Preprocessing

Commercial solvers preprocess linear optimization problems before solving them, looking for logical implications that allow them to shrink the size of the problem. For example, they look for variables that can be fixed or constraints that are redundant. These steps are especially useful for integer optimization problems.

Also useful sometimes is rescaling the problem, so that the numbers in different columns of the constraint matrix are not too widely divergent from one another. For example, if all the numbers in one column are on the order of 10^-3 and those in another column are on the order of 10^4, then the columns can be rescaled.
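Geometric-mean scaling is one common rescaling heuristic: divide each column by the geometric mean of its largest and smallest nonzero magnitudes. A sketch, assuming a dense list-of-lists matrix (solvers work on sparse structures and typically also scale rows):

```python
import math

# Sketch: geometric-mean column scaling, so entries in different
# columns of A end up with comparable magnitudes.
def scale_columns(A):
    m, n = len(A), len(A[0])
    scales = []
    for j in range(n):
        col = [abs(A[i][j]) for i in range(m) if A[i][j] != 0]
        s = math.sqrt(max(col) * min(col)) if col else 1.0
        scales.append(s)
    scaled = [[A[i][j] / scales[j] for j in range(n)] for i in range(m)]
    return scaled, scales
```

The scale factors must be kept so that the solution of the scaled problem can be mapped back to the original variables.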


Free variables: Handling upper bounds

We've previously seen that the simplex algorithm can handle upper bounds on variables without needing to introduce explicit slack variables.

Free variables: Free variables

Free variables are variables that are unrestricted in sign. Such a variable can be eliminated from a linear optimization problem.

The free variable appears in an equality constraint: For example, suppose we have a constraint x_1 + 3x_4 - 2x_6 = 5, where x_1 is a free variable. Then for any values of x_4 and x_6, we can set x_1 = 5 - 3x_4 + 2x_6, and we don't have to worry about the sign of x_1. So we can obtain an equivalent linear optimization problem by replacing x_1 by 5 - 3x_4 + 2x_6 in all the other constraints and in the objective function. The original constraint x_1 + 3x_4 - 2x_6 = 5 can then be deleted.
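The substitution described above can be carried out mechanically. In this sketch, constraints are represented as {variable: coefficient} dicts with a separate right-hand side; the function name `eliminate` is illustrative, and note that it modifies the input rows in place:

```python
# Sketch: eliminate a free variable using an equality constraint in
# which it has coefficient 1, e.g. x1 + 3*x4 - 2*x6 = 5, so that
# x1 = 5 - 3*x4 + 2*x6 is substituted into every other constraint.

def eliminate(free_var, eq, eq_rhs, cons, rhs):
    """eq maps variables to their coefficients in the equality
    (free_var must have coefficient 1). Returns the rewritten
    constraints and right-hand sides with free_var removed."""
    new_cons, new_rhs = [], []
    for row, b in zip(cons, rhs):
        a = row.pop(free_var, 0.0)  # coefficient of the free variable
        # substitute free_var = eq_rhs - sum(eq[v] * v for v != free_var)
        for v, coeff in eq.items():
            if v != free_var:
                row[v] = row.get(v, 0.0) - a * coeff
        new_cons.append(row)
        new_rhs.append(b - a * eq_rhs)
    return new_cons, new_rhs
```

Applied to the constraint 2x_1 + x_4 <= 7 with the slide's equality, this yields -5x_4 + 4x_6 <= -3, which is what direct substitution of x_1 = 5 - 3x_4 + 2x_6 gives.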

Free variables, part 2: Free variables

The free variable appears only in inequality constraints: Assume we write all the inequality constraints as <= constraints, so they all have the form
sum_{j=1}^n a_ij x_j <= b_i.
Assume x_1 is a free variable. There are some situations where we can make dramatic simplifications to the problem:

a_i1 >= 0 for all constraints and c_1 = 0: in this case, all the constraints with a nonzero a_i1 coefficient are redundant, since these constraints can all be satisfied by taking x_1 sufficiently negative.

a_i1 >= 0 for all constraints and c_1 > 0: in this case, the problem has an unbounded optimal value, provided it is feasible: we can drive x_1 -> -infinity.

a_i1 <= 0 for all constraints and c_1 = 0 or c_1 < 0: similar to the two previous cases, now driving x_1 -> +infinity.

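The three sign-pattern cases can be sketched as a small classifier for a minimization problem (the return strings and the function name are illustrative):

```python
# Sketch: classify a free variable x1 that appears only in <= constraints.
# a_col holds the coefficients a_i1 over all constraints; c1 is the
# objective coefficient of x1 in the minimization objective.
def classify_free_var(a_col, c1):
    if all(a >= 0 for a in a_col):
        if c1 == 0:
            # x1 sufficiently negative satisfies every such constraint.
            return "constraints with a_i1 != 0 are redundant"
        if c1 > 0:
            return "unbounded if feasible: drive x1 to -infinity"
    if all(a <= 0 for a in a_col):
        if c1 == 0:
            return "constraints with a_i1 != 0 are redundant"
        if c1 < 0:
            return "unbounded if feasible: drive x1 to +infinity"
    return "no simplification: introduce slacks and eliminate x1"
```

The fall-through case is exactly the situation of part 3 below: mixed signs (or a cost pushing against the feasible direction) leave no shortcut.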

Free variables, part 3: Free variables

If we are not in one of these simpler subcases, then we may well need to introduce slack variables into the inequality constraints and then eliminate the free variable.