Lecture 4: Linear Programming


COMP36111: Advanced Algorithms I, Lecture 4: Linear Programming. Ian Pratt-Hartmann, Room KB2.38, email: ipratt@cs.man.ac.uk, 2017–18

Outline The Linear Programming Problem Geometrical analysis The Simplex Method

The originators of linear programming: Leonid Kantorovich (1912–1986) and George Dantzig (1914–2005).

The military situation around Leningrad, 1942

Linear programming is an optimization problem in which we seek values for non-negative real variables that maximize a linear objective function, subject to a set of linear constraints. Expressed in matrix form, we must maximize c^T x subject to Ax ≤ b and x ≥ 0, where x is the variable vector (x_1, ..., x_n)^T. The feasible region is the region of n-dimensional space R^n satisfying the constraints; this may be empty. Any point within this region is a feasible solution. The function x ↦ c^T x is called the objective function.

A Simple Example. Suppose Scatalogical Devices PLC (a chip company) makes a profit of £10 on its α chip and £12 on its β chip. The α chip costs 25 pence to produce and takes 5 hours of fabrication; the β chip costs 40 pence to produce and takes only 4 hours. The company has just £20 to spend and 320 hours of fabrication time.

We express this problem as a set of inequalities. Let the number of α chips produced be x, and the number of β chips produced be y. Then the profit is f = 10x + 12y, and the constraints are given by

0.25x + 0.4y ≤ 20
5x + 4y ≤ 320

Or, if you like:

5x + 8y ≤ 400
5x + 4y ≤ 320
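Because this instance has only two variables, we can check the optimum by brute force: the maximum of a linear objective over a bounded feasible region is attained at a vertex, so it suffices to enumerate pairwise intersections of the constraint boundaries and keep the feasible ones. A minimal sketch in Python (not part of the lecture; the helper names are illustrative):

```python
from itertools import combinations

# Constraints in the form a1*x + a2*y <= b, including x >= 0, y >= 0.
constraints = [
    (5, 8, 400),   # budget:      0.25x + 0.4y <= 20, scaled by 20
    (5, 4, 320),   # fabrication: 5x + 4y <= 320
    (-1, 0, 0),    # x >= 0
    (0, -1, 0),    # y >= 0
]

def intersect(c1, c2):
    """Intersection point of the two boundary lines, or None if parallel."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    """A point is feasible if it satisfies every constraint (with tolerance)."""
    return all(a * p[0] + b * p[1] <= r + 1e-9 for a, b, r in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: 10 * p[0] + 12 * p[1])
print(best, 10 * best[0] + 12 * best[1])   # (48.0, 20.0) 720.0
```

Enumerating all pairs of constraints is exponential in general, so this is only a check for tiny instances; the point of the simplex method later in the lecture is to visit only a promising subset of the vertices.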

Pictorially, the feasible region R is bounded by the axes and the lines 5x + 8y = 400 (intercepts x = 80, y = 50) and 5x + 4y = 320 (intercepts x = 64, y = 80).

We shall always take linear programming problems to have the form

maximize: c^T x
subject to: Ax ≤ b
x ≥ 0

This is sometimes called standard form. Standard form is less restrictive than it looks: An equality is the same as two inequalities. The inequality a^T x ≥ b can be re-written −a^T x ≤ −b. Any real variable can be written as the difference of two non-negative real variables: x = x′ − x″. Strict inequalities cannot be eliminated in this way, but they are not really interesting in the context of maximization.
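These rewritings are purely mechanical. A small sketch of the three transformations on constraint rows (illustrative Python, not from the lecture; a row is a coefficient list plus a right-hand side):

```python
def eq_to_ineqs(a, b):
    """a.x = b  becomes the pair  a.x <= b  and  -a.x <= -b."""
    return [(a, b), ([-ai for ai in a], -b)]

def ge_to_le(a, b):
    """a.x >= b  becomes  -a.x <= -b."""
    return ([-ai for ai in a], -b)

def split_free_var(a, j):
    """Replace free variable x_j by x_j' - x_j'' (both >= 0):
    duplicate coefficient a_j as the pair (a_j, -a_j)."""
    return a[:j] + [a[j], -a[j]] + a[j+1:]

print(eq_to_ineqs([1, 2], 3))        # [([1, 2], 3), ([-1, -2], -3)]
print(ge_to_le([1, 2], 3))           # ([-1, -2], -3)
print(split_free_var([1, 2, 3], 1))  # [1, 2, -2, 3]
```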

Outline The Linear Programming Problem Geometrical analysis The Simplex Method

A half-plane is the region consisting of a plane and one of its residual domains, i.e. the set of points satisfying an inequality such as a_1 x_1 + a_2 x_2 + a_3 x_3 ≤ b. A polyhedron is a non-empty intersection of half-planes (i.e. the solution set of Ax ≤ b).

All polyhedra have a simple characterization. A polytope is a bounded polyhedron. A cone is a polyhedron defined by planes passing through the origin (i.e. Ax ≤ 0).

The following facts are not hard to prove: The polytopes are exactly the convex combinations of finite non-empty sets of vectors: λ_1 a_1 + ... + λ_p a_p (λ_k all non-negative, λ_1 + ... + λ_p = 1). The cones are exactly the non-negative combinations of finite non-empty sets of vectors: λ_1 a_1 + ... + λ_p a_p (λ_k all non-negative). Every polyhedron P defined by Ax ≤ b is the Minkowski sum of a polytope U and a cone V (where we allow V = {0}): P = U + V = {u + v : u ∈ U, v ∈ V}.

Let's just think about polytopes for the moment. (This is the usual case in practice.) The objective function always has a maximum in this case. (Why?) The value of the objective function at any point is a convex combination of its values at the vertices: if x = λ_1 a_1 + ... + λ_p a_p, then f(x) = λ_1 f(a_1) + ... + λ_p f(a_p). It follows that the maximum must be achieved at a vertex. (Why?)
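One way to see the second claim: since the λ_k are non-negative and sum to 1, linearity of f bounds its value everywhere by its value at the best vertex. In LaTeX notation:

```latex
f(x) \;=\; \sum_{k=1}^{p} \lambda_k f(a_k)
      \;\le\; \sum_{k=1}^{p} \lambda_k \max_{j} f(a_j)
      \;=\; \max_{j} f(a_j)
```

so no point of the polytope exceeds the best vertex, and the maximum is achieved there.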

Not only must the maximum be achieved at a vertex: any vertex a at which f is greater than or equal to its value at every neighbour is a global maximum. This suggests an algorithm based on moving from one vertex to another in such a way as to increase the objective function.

The simplex algorithm is: 1. select a vertex of the feasible region as starting point; 2. choose an edge on the surface of the feasible region through this point such that the objective function increases; 3. proceed along this edge to the next vertex; 4. repeat steps 2 and 3 until the objective function cannot be increased.
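The four steps above can be sketched in code. The following is a minimal tableau-based implementation in Python (an illustrative sketch, not the lecture's dictionary notation): it assumes b ≥ 0, so that x = 0 is a feasible starting vertex, and it ignores degeneracy and cycling.

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, assuming b >= 0
    (so the origin is a basic feasible solution to start from)."""
    n, m = len(c), len(b)
    # One tableau row per slack variable: columns are the n original
    # variables, then the m slacks (an identity block), then the constant.
    rows = [A[i] + [1 if k == i else 0 for k in range(m)] + [b[i]]
            for i in range(m)]
    obj = [-ci for ci in c] + [0] * m + [0]   # minimize -c.x
    basis = list(range(n, n + m))             # the slacks start in the basis
    while True:
        # Step 2: pick an entering column along which f increases.
        j = next((k for k in range(n + m) if obj[k] < -1e-9), None)
        if j is None:
            break                             # step 4: no improvement possible
        # Step 3: ratio test -- how far can we move before a constraint bites?
        ratios = [(rows[i][-1] / rows[i][j], i)
                  for i in range(m) if rows[i][j] > 1e-9]
        if not ratios:
            raise ValueError("objective is unbounded")
        _, i = min(ratios)
        # Pivot: scale row i, then eliminate column j from the other rows.
        piv = rows[i][j]
        rows[i] = [v / piv for v in rows[i]]
        for r in range(m):
            if r != i:
                f = rows[r][j]
                rows[r] = [v - f * w for v, w in zip(rows[r], rows[i])]
        f = obj[j]
        obj = [v - f * w for v, w in zip(obj, rows[i])]
        basis[i] = j
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = rows[i][-1]
    return x, sum(ci * xi for ci, xi in zip(c, x))

# The worked example from later in the lecture: maximize 4x - 2y - z.
x, val = simplex([4, -2, -1],
                 [[1, 1, 1], [2, 2, 1], [1, -1, 0]],
                 [3, 4, 1])
print(x, val)   # [1.5, 0.5, 0.0] 5.0
```

This reproduces the answer derived by hand below: the optimum f = 5 at (x, y, z) = (3/2, 1/2, 0).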

Consider a system of m inequalities Ax ≤ b in n variables, and assume the system is in general position: any n of the m rows of A are linearly independent (generally, m > n). We can take k ≤ n of these inequalities and replace them with the corresponding equations (keeping the remaining inequalities). This defines a face of the feasible region of dimension n − k. If k = n, we have a vertex.

Note that any inequality a_1 x_1 + ... + a_n x_n ≤ b can be re-written as an equation a_1 x_1 + ... + a_n x_n + w = b, where w is a new variable subject to the constraint w ≥ 0. Such a variable is called a slack variable. Obviously, we can rewrite a whole system of inequalities in this way: we need a new slack variable for every inequality.

Now think about faces of the feasible region in terms of the slack variables. A vertex is a point lying on n bounding planes; in particular, it is a solution of the equational system in which n variables are 0. We call such a solution, in which n variables are chosen to be zero, a basic feasible solution (bfs). These n variables are called the basic variables. The remainder are called the non-basic variables.

Now imagine taking a bfs in which n of the variables are 0, and allowing one of them to become positive. (But stay in the polytope: don't violate any of the remaining inequalities!) This corresponds to moving away from the vertex corresponding to the bfs, to a point lying on just n − 1 of the n bounding planes involved: in other words, on an edge of the polytope adjacent to that vertex.

So that is what it means to move from one vertex to another along an edge: take a bfs, and allow one of the basic variables to become positive, until you hit some other constraint, at which point another variable will become zero, and you have another bfs.

Outline The Linear Programming Problem Geometrical analysis The Simplex Method

Consider the following example:

maximize: f = 4x − 2y − z
subject to: x + y + z ≤ 3
2x + 2y + z ≤ 4
x − y ≤ 1
x, y, z ≥ 0.

First, we introduce slack variables to turn the inequalities into equalities. Thus,

x + y + z ≤ 3
2x + 2y + z ≤ 4
x − y ≤ 1

becomes

3 − x − y − z = u
4 − 2x − 2y − z = v
1 − x + y = w

with u, v, w ≥ 0.

So now we work with the equations

3 − x − y − z = u
4 − 2x − 2y − z = v
1 − x + y = w

and the objective function f = 4x − 2y − z, and we take all variables to be non-negative.

Equations:

3 − x − y − z = u
4 − 2x − 2y − z = v
1 − x + y = w

Objective function: f = 4x − 2y − z.

We note that x = y = z = 0 is a solution. (This is not always so, but in real examples it quite often is.) This solution is a bfs, and the variables x, y, z are its basic variables. The value of f at this bfs is 0.

Equations:

3 − x − y − z = u
4 − 2x − 2y − z = v
1 − x + y = w

Objective function: f = 4x − 2y − z.

We start off with the bfs x = 0, y = 0, z = 0. We can increase f by increasing x. By how much? Well, until one of the equations becomes insoluble. (Remember: all variables are non-negative.) We can increase x by 1, which zeroes w.


Now that w is 0, it becomes a basic variable, and x ceases to be one. So we pivot on x, rewriting

1 − x + y = w   as   1 − w + y = x.

Substituting the left-hand side for x in the above problem, we have the equations

2 + w − 2y − z = u
2 + 2w − 4y − z = v
1 − w + y = x

and objective function f = 4 − 4w + 2y − z. Now we have the bfs w = 0, y = 0, z = 0. We next increase y by 1/2, which zeroes v.

Now that v is 0, it becomes a basic variable, and y ceases to be one. So we pivot on y, rewriting

2 + 2w − 4y − z = v   as   1/2 + (1/2)w − (1/4)v − (1/4)z = y.

Substituting the left-hand side for y in the above problem, we have the objective function

f = 5 − 3w − (1/2)v − (3/2)z

and the equations... Just a minute: we can't increase the objective function by increasing any variable!

We had the equations

2 + w − 2y − z = u
2 + 2w − 4y − z = v
1 − w + y = x

and objective function f = 5 − 3w − (1/2)v − (3/2)z; and we agreed to set y = 1/2 with w = z = 0. We have arrived at the basic solution

(x, y, z, u, v, w) = (3/2, 1/2, 0, 1, 0, 0)

and f = 5.
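As a sanity check (not part of the lecture), the claimed optimum can be substituted back into the original slack equations:

```python
x, y, z = 1.5, 0.5, 0.0
u = 3 - x - y - z        # slack of x + y + z <= 3
v = 4 - 2*x - 2*y - z    # slack of 2x + 2y + z <= 4
w = 1 - x + y            # slack of x - y <= 1
f = 4*x - 2*y - z
print(u, v, w, f)        # 1.0 0.0 0.0 5.0
assert min(x, y, z, u, v, w) >= 0   # all variables non-negative: feasible
```

All slacks are non-negative, two of them (v and w) are tight, and the objective value agrees with the pivoting above.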

Reading: G+T, Ch. 26.1–26.2, pp. 731–743.