LP-Modelling
dr.ir. C.A.J. Hurkens, Technische Universiteit Eindhoven
January 30, 2008


1 Linear and Integer Programming

After a brief check of the participants' backgrounds it seems that the following topics should yield a lot of additional knowledge. In three meetings I will discuss:

1. Linear and Integer Programming models;
2. graph problems and algorithms;
3. column generation (builds on LP-models, has lots of applications).

The topics will further include the use of a modelling language, which is a tool to build various mathematical programming models in a structured way. The tool we will use is AIMMS, which we downloaded from //www.win.tue.nl/bs/aimms. It has a student version with a limited number of identifiers and a restricted size of problems to solve. It can also be used with the unlimited network license which resides on the departmental license server winlicenses.campus.tue.nl, as long as the user is not too far away from the license server. We will use AIMMS to build and evaluate models. AIMMS can handle various kinds of models, and allows for programming in a C-style language, building procedures, functions, etcetera. A very specific advantage of AIMMS is the ease with which one can build a user interface. This enables us to easily play around with input and output. When dealing with practical problems, these interaction aspects are mathematically uninteresting, but often absorb a lot of time.

Part of the course material is taken from the Optimization Modeling Guide that comes with AIMMS. This material is contained in the AIMMS distribution, as well as a User and a Language Reference. Furthermore there is a Student Tutorial which gives the bare minimum of hints on how to work with AIMMS; it can be worked through in more or less an hour. The Modeling Guide does not cover the theory. Therefore the theory will be dealt with briefly in class, and is described in the following sections.
This treatment will not be in-depth, because it concerns mainly well-known mathematical principles.

2 Optimization models

Below we will deal with three very general types of mathematical programming models. Here the term "programming" may be misleading: it was invented before the computer age, and its original meaning is planning. The three areas are

- Linear programming
- Integer programming
- Non-linear programming

Examples of these three types are described in Chapter 2 of the Optimization Modeling Guide, along with a worked-out example of a production plan for potato chips. A more formal discussion is given below.

2.1 Linear programming

In a linear programming model all restrictions and the objective function are linear expressions in the modelling variables. The variables x_j, j = 1, ..., n, generally have R as their domain, and are restricted by two types of constraints, namely the technological constraints, which are of the general form

∑_j a_ij x_j ≤ b_i,

and the so-called bounds

l_j ≤ x_j ≤ u_j.

The coefficients a_ij and b_i are rational numbers, and are derived either from the data for the instance or from the model. The sign between the left-hand side ∑_j a_ij x_j and the right-hand side b_i can be ≤, ≥, or =. The objective function is a linear function as well, ∑_j c_j x_j, with rational cost coefficients c_j. The objective may be to maximize or minimize this function over all vectors x ∈ R^n that satisfy the constraints.

It is not so difficult to see that the problems

min cx subject to Ax ≥ b; l ≤ x ≤ u

and

max (−c)x subject to Ax ≥ b; l ≤ x ≤ u

are equivalent and deliver the same optimal solutions, with objective values of opposite sign. Similarly, constraints Ax ≤ b are equivalent with −Ax ≥ −b, and the equality system Ax = b is equivalent with the pair Ax ≤ b, Ax ≥ b. Finally, Ax ≤ b can be seen to be equivalent with the extended system Ax + s = b; s ≥ 0. By such manipulations a linear programming problem can always be restated in one of the following three standard representations:

- minimization in canonical form: min cx subject to Ax ≥ b; x ≥ 0
- maximization in canonical form: max cx subject to Ax ≤ b; x ≥ 0
- minimization in standard form: min cx subject to Ax = b; l ≤ x ≤ u
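To make these equivalences concrete, here is a small sketch (in Python with SciPy rather than the course's AIMMS; the instance numbers are invented) that solves the same problem both in maximization canonical form and in minimization standard form with explicit slack variables:

```python
# Sketch: checking that adding slacks changes the representation, not the problem.
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0])
A = np.array([[1.0, 2.0],
              [4.0, 1.0]])
b = np.array([8.0, 10.0])

# Maximization in canonical form: max cx s.t. Ax <= b, x >= 0
# (linprog minimizes, so we negate the objective).
res1 = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# Minimization in standard form: min (-c)x + 0s s.t. Ax + Is = b, x, s >= 0.
A_eq = np.hstack([A, np.eye(2)])
res2 = linprog(np.concatenate([-c, np.zeros(2)]),
               A_eq=A_eq, b_eq=b, bounds=[(0, None)] * 4, method="highs")

# Both formulations deliver the same optimal value.
assert abs(res1.fun - res2.fun) < 1e-9
print(-res1.fun)
```

Both calls report the same optimum, illustrating that the slack transformation Ax + s = b, s ≥ 0 is only a change of representation.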

The first two formulations convey the intuition that when you try to maximize something, there are some restrictions preventing you from getting too far away. Note that this intuition is misleading, because the coefficients in A, b, and c may of course be negative. The first two formulations also come close to the geometrical picture one has in mind when thinking of linear programming problems. The third formulation comes close to the algebraic notion of the linear programming problem.

2.1.1 Simplex method

Any study of mathematics mentions the simplex method as the algorithm to solve linear programming problems. Often this is illustrated by showing how the simplified minimization problem

min cx subject to Ax = b; x ≥ 0

is solved. Assume that the m × n matrix A has full row rank m. The first observation is that if an optimal solution exists, there is also one that has no more than m non-zeroes. Let B denote a set of m linearly independent columns of A, forming a square, invertible matrix B, and let N denote the set of remaining columns. By setting all variables with index in N to zero, the remaining equation system B x_B = b has a unique solution. The vector (x_B, x_N) = (B^{-1} b, 0) is called a basic solution. If x_B = B^{-1} b ≥ 0, then it is a feasible basic solution.

The simplex method can be seen as a black box, delivering a basic feasible solution of which it claims it is also optimal. To check whether it is feasible is easy, but to know for sure that it is optimal requires either good faith in the computer or a compact proof, also delivered by the algorithm. Let c_B denote that part of the cost vector c that is associated with the basis B of the final basic solution delivered by simplex. Consider the (row) vector w = c_B B^{-1}. This vector has the property that wA ≤ c, which can easily be verified once w is given. Now consider any feasible solution to the system Ax = b, x ≥ 0. We have

cx ≥ wAx = wb = c_B B^{-1} b = c_B x_B = c_B x_B + c_N x_N,

which means that every feasible solution has a cost at least the cost of the final solution given by the simplex method. So the solution (x_B, x_N) is optimal.

2.1.2 Column generation

The above description of the simplex method provides a mechanism to solve LARGE linear programs, with n much bigger than m, by a sophisticated scheme of alternately solving small problems and testing for optimality. In order to solve

min c_1 x_1 + c_2 x_2 subject to A_1 x_1 + A_2 x_2 = b; x_1, x_2 ≥ 0,

we may first solve the smaller system

min c_1 x_1 subject to A_1 x_1 = b; x_1 ≥ 0.

The simplex method gives us a feasible basic solution x̂_1 and a proof of optimality ŵ = c_{B_1} B^{-1}. This proof of optimality only says that ŵ A_1 ≤ c_1. Of course we hope that the solution (x̂_1, 0) is an optimal solution for the whole system. This is provably true if ŵ also has the property that ŵ A_2 ≤ c_2. Hence, we only have to check whether ŵ a_j ≤ c_j for ALL columns j in the A_2-part. Suppose we can do this effectively. If it turns out to be true, we are done, and we found the solution with little effort. If there is a column j for which the desired property does not hold, we add column j to the first part A_1, and restart. It is impossible to give a priori a low upper bound on the number of iterations, but from a PRACTICAL point of view the observation is that the number of iterations of alternately solving the mini LP and checking for optimality is relatively small, certainly when the number of columns of A_2 is of a much higher order of magnitude than m, the number of rows. A good example of this approach is found in the AIMMS demo of a cutting stock problem.

2.1.3 Shadow prices

The vector w of the previous sections is often referred to as the shadow price vector. To interpret w it is more convenient to consider a maximization problem of the form

max cx subject to Ax ≤ b; x ≥ 0,

with b a positive vector. Note that the equivalent equation system reads Ax + Iy = b, y ≥ 0, x ≥ 0, and the corresponding cost/profit function is max cx + 0y or, equivalently, min (−c)x − 0y. With the basic feasible solution a proof vector w = γ B^{-1} is returned, with the property that

w (A I) ≤ (−c 0).

In other words: w ≤ 0, and wA ≤ −c. Note that γ is taken from (−c, 0). Rephrasing this in terms of the original cost row, by taking ŵ = −γ B^{-1} = −w, we obtain a vector ŵ with the property

ŵ ≥ 0, ŵA ≥ c.

Here ŵ_i is a non-negative multiplier for row i and is to be interpreted as the potential increase in the optimal solution value, if the right-hand side of constraint i is increased by one unit.
If ŵ_i > 0, then in the current solution constraint i is effectively blocking the solution from becoming bigger, whereas if the value is zero, it usually means that the constraint is not tight, and we may even decrease b_i. The term c_j − ŵ a_j is called the reduced profit (in case of a maximization problem) or reduced cost (in case of minimization). An optimal solution is characterized by its nonbasic variables having non-negative reduced cost, or non-positive reduced profit.
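Modern LP solvers return the vector ŵ alongside the primal solution. The following sketch (using SciPy, not part of the original notes; the instance is invented) recovers the shadow prices and checks the certificate properties:

```python
# Sketch: shadow prices for max 3x1 + 5x2
#   s.t. x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# linprog minimizes, so maximize cx by minimizing (-c)x.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

x = res.x                    # optimal primal solution
w = -res.ineqlin.marginals   # shadow prices w_hat >= 0 for the max problem

# Certificate: w >= 0, wA >= c, and wb equals the optimal value cx.
assert np.all(w >= -1e-9)
assert np.all(w @ A >= c - 1e-9)
assert abs(w @ b - c @ x) < 1e-9
print(x, w)
```

The first constraint is not tight at the optimum, so its shadow price is zero; the other two multipliers tell how much the optimal profit would rise per unit increase of the corresponding right-hand side.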

2.2 Integer Linear Programming problems

Many combinatorial problems can be stated as linear programming problems with the ADDITIONAL requirement that some or all of the variables are only allowed integral values:

max cx subject to Ax ≤ b; x ≥ 0; x_j ∈ Z, j ∈ J.

Since the simplex method relies on elementary row operations on systems of equations, it cannot take integrality of variables into account.

2.2.1 Solving ILPs

In order to solve an Integer Linear Programming problem (ILP), we hope for the best: we forget about the integrality requirements, solve the remaining LP (which is called the LP-relaxation of the ILP-description) and count our blessings. It may turn out that the outcome of simplex actually does satisfy the integrality property. Another possible outcome is that the LP-problem is infeasible, that is, there does not exist any solution. But then again our ILP-problem is solved, because we can tell that this problem has no solution either. What remains is the case that the LP-relaxation does have an optimal solution x* with value z*, but x*_j ∉ Z for at least some j ∈ J. From this point onwards there are several approaches one can take.

1. A solution was already available, with value ẑ. If z* = ẑ, then the outcome of the LP-formulation has proven that the known solution was actually optimal. This is of course nice to know. If ẑ < z*, this does not say that the known solution is not optimal; we simply do not know. If z* and ẑ are very close, one might be satisfied with the known result, because there does not exist a true solution with value strictly higher than z*.

2. Maybe it is possible to round the LP-solution to an integral one that still satisfies all the constraints. If so, the previous approach applies.

3. Maybe it is possible to formulate a constraint that is satisfied by every integral solution to our problem, but not by the LP-solution x*.
For instance, if we are looking in a graph for a maximal subset of edges such that no two edges in the subset intersect, we might model this as

max ∑_e x_e subject to ∑_{e: v ∈ e} x_e ≤ 1, v ∈ V; x_e ≥ 0; x_e ∈ Z, e ∈ E.

If the graph G = (V, E) contains a triangle {u, v, w}, the LP-solution may contain components x_uv = x_vw = x_wu = 0.5, thus satisfying the degree constraints for the vertices {u, v, w}. Evidently, in any integral solution at most one of the three edges of a triangle appears in the solution, and hence the constraint

x_uv + x_vw + x_wu ≤ 1

is satisfied by any integral solution. Adding this constraint to the LP-description of the problem restricts the space of feasible solutions, and in particular forbids the solution x* that we got from the first LP-relaxation. This method is called cutting planes. It involves methods to describe classes of valid inequalities for integral solutions, and methods for finding a violated inequality, given an LP-solution. In principle, the cutting planes method can also be used when solving plain LP-models with, say, exponentially many, implicitly described constraints. It resembles very much the column generation approach mentioned in the LP-section. Again, it is hard to predict the number of rounds one has to go through before an integral solution is reached. Notice that if the set of integral solution vectors is bounded, then the set consisting of all its convex combinations (which is formally called a polytope) can be described as the intersection of a finite number of half-spaces defined by supporting hyperplanes. Hence there exists an LP-description of the problem such that the extreme points correspond to integral solutions. Therefore, by solving that LP-problem one solves the ILP. The only problem is that we do not have the explicit description of this LP-model, and we cannot even know the number of constraints involved.

4. The standard approach taken when an optimal solution is required is called branch-and-bound. The term branching refers to splitting the problem into two or more sub-problems, in such a way that the optimal solution to the ILP is retained in (at least) one of these sub-cases, and such that the last LP-solution x̂ is excluded from all of the sub-cases. A straightforward way of splitting the original problem into two cases is the following. Let x̂_{j0} ∉ Z for some j0 ∈ J; then the set of all solutions x to the ILP can be split into those with x_{j0} ≤ ⌊x̂_{j0}⌋ and those with x_{j0} ≥ ⌈x̂_{j0}⌉. Obviously there are no integral solutions in between. Moreover, the last LP-solution x̂ occurs in neither of the two cases. One now has to recursively solve both

max cx subject to Ax ≤ b; x ≥ 0; x_{j0} ≤ ⌊x̂_{j0}⌋; x_j ∈ Z, j ∈ J

and

max cx subject to Ax ≤ b; x ≥ 0; x_{j0} ≥ ⌈x̂_{j0}⌉; x_j ∈ Z, j ∈ J,

and take the best of the two optimal solutions.
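The splitting scheme just described, with the LP-bound used to discard hopeless sub-cases, can be sketched in a few lines (a hedged illustration in Python/SciPy, not the notes' implementation; the instance, variable bounds, and tolerances are invented):

```python
# Sketch of branch-and-bound for max cx s.t. Ax <= b, bounded integer x.
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A, b, bounds, best=(-np.inf, None)):
    # Solve the LP-relaxation of the current sub-case (linprog minimizes).
    res = linprog(-c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    if not res.success:              # sub-case infeasible: prune
        return best
    z, x = -res.fun, res.x
    if z <= best[0] + 1e-9:          # LP bound no better than incumbent: prune
        return best
    frac = [j for j, v in enumerate(x) if abs(v - round(v)) > 1e-6]
    if not frac:                     # integral LP optimum: new incumbent
        return (z, np.round(x))
    j0 = frac[0]                     # branch on the first fractional variable
    f = math.floor(x[j0])
    lo, hi = bounds[j0]
    # Split: x_{j0} <= floor(x_{j0})  and  x_{j0} >= floor(x_{j0}) + 1.
    left = bounds[:j0] + [(lo, f)] + bounds[j0 + 1:]
    right = bounds[:j0] + [(f + 1, hi)] + bounds[j0 + 1:]
    best = branch_and_bound(c, A, b, left, best)
    best = branch_and_bound(c, A, b, right, best)
    return best

# Invented instance: max 5x1 + 4x2 s.t. 6x1 + 4x2 <= 24, x1 + 2x2 <= 6.
c = np.array([5.0, 4.0])
A = np.array([[6.0, 4.0], [1.0, 2.0]])
b = np.array([24.0, 6.0])
z, x = branch_and_bound(c, A, b, [(0, 10), (0, 10)])
print(z, x)  # optimum z = 20 at x = (4, 0)
```

The root relaxation here has the fractional optimum (3, 1.5) with value 21; branching on x2 and comparing LP bounds against the incumbent yields the integral optimum.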
Notice that this can get way out of control. At each split the number of sub-problems is increased by at least one. This may lead to an exponentially large search tree. This tree of sub-cases can be generated and searched through in various ways: on which variable to split? which sub-case to deal with first? etc. In the context of the tree search it is not always necessary to completely solve each sub-case. Notice that if a sub-case is dealt with by solving its LP-relaxation, and if at that time a solution to the ILP-problem is known with value ẑ, and if the LP-relaxation of the sub-case yields a value z < ẑ (in case of maximization), then we know that the sub-case cannot contain a solution that improves the current one. Hence we need not solve this sub-case to optimality, and therefore need not generate the sub-cases below it. Here we prune the search tree. That we may do so is implied by the LP-bound of the sub-case, hence the term bound.

It is important to realize that many combinatorial problems can be stated as equivalent integer linear programming models. Since the behavior of the branch-and-bound method is based on the LP-relaxation, it is necessary to have an LP-description that is as close as possible to the convex hull of all integral solution vectors. Different ILP-models may lead to different branch-and-bound behavior.

2.2.2 Modeling with integers

The additional requirement x_j ∈ Z, j ∈ J, leads to strong modeling power. One can model all kinds of discontinuities, disjunctions, etcetera. We will see this in the AIMMS chapters on Linear and Integer Programming Tricks, chapters 6 and 7 of the AIMMS Optimization Modeling Guide.

2.3 Non-Linear Programming problems

These are a generalization of LPs, since they allow for non-linear objective functions to be optimized over non-linearly constrained solution spaces. This is such a general theme that it would need a course of its own. The simplest approach in NLP is to determine the gradient of the objective function and follow it to improve on the solution, while taking care of the boundaries of the feasible region. For special cases this can actually be implemented in an efficient way (Interior Point Methods). These apply also to LP-problems, and such methods are sometimes preferred over the simplex method.
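As a minimal illustration of the gradient-following idea (my own sketch, not from the notes; the objective function and step size are invented), one way to take care of the boundaries of the feasible region is to project each gradient step back onto it, here a simple box:

```python
# Sketch: projected gradient descent over box constraints.
import numpy as np

def project(x, lo, hi):
    # Projection onto the box lo <= x <= hi keeps every iterate feasible.
    return np.clip(x, lo, hi)

def projected_gradient(grad, x0, lo, hi, step=0.1, iters=500):
    x = x0.astype(float)
    for _ in range(iters):
        x = project(x - step * grad(x), lo, hi)
    return x

# Example: minimize (x1 - 3)^2 + (x2 + 1)^2 over the box [0, 2] x [0, 2];
# the unconstrained minimum (3, -1) lies outside, so the boundary is active.
grad = lambda x: 2 * (x - np.array([3.0, -1.0]))
x_star = projected_gradient(grad, np.array([1.0, 1.0]), 0.0, 2.0)
print(x_star)  # converges to [2. 0.]
```

With a fixed step size this is only a toy; practical NLP codes choose step lengths by line search and, as mentioned above, often use interior point methods instead.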