Linear Programming. Course review MS-E2140. v. 1.1


Course structure
- Modeling techniques
- Linear programming theory and the Simplex method
- Duality theory
- Dual Simplex algorithm and sensitivity analysis
- Integer programming methods
- Network flow problems and special cases

Linear programming problems
Any linear programming problem can be transformed into an equivalent one in standard form:

    min  c′x
    s.t. Ax = b
         x ≥ 0

If its optimal cost is finite, then at least one optimal solution corresponds to an extreme point of its feasible region.
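As an illustration (not part of the course's hand computations), a small standard-form LP can be solved numerically. The sketch below assumes numpy and scipy are available and uses made-up data; note how the solver returns an extreme point of the feasible region.

```python
# Solve a small standard-form LP  min c'x  s.t. Ax = b, x >= 0
# (illustrative sketch; data are made up).
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])        # objective coefficients
A = np.array([[1.0, 1.0]])      # equality constraint matrix
b = np.array([4.0])             # right-hand side

# bounds default to (0, None), i.e. x >= 0
res = linprog(c, A_eq=A, b_eq=b)
print(res.x, res.fun)           # optimum at the extreme point (4, 0) with cost 4
```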

Basic solutions
To obtain a basic solution of a standard form problem with an m × n constraint matrix A:
- Choose m linearly independent columns of A that form a basis matrix B = (A_B(1), …, A_B(m))
- Define the vector of basic variables x_B = (x_B(1), …, x_B(m)), where x_B(i) is associated with the i-th column A_B(i) of B
- Define the vector x_N of non-basic variables, formed by all variables x_j associated with the columns not in B
The basic solution associated with B is x_B = B⁻¹b, x_N = 0. The associated basic cost vector is c_B = (c_B(1), …, c_B(m)).
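The recipe above can be carried out directly in numpy. A sketch with made-up data (the matrix and basis choice are illustrative only):

```python
# Compute the basic solution of Ax = b, x >= 0, for one choice of basis,
# following the recipe above (numpy sketch with made-up data).
import numpy as np

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

basis = [0, 1]                 # choose columns A_1, A_2 as the basis matrix B
B = A[:, basis]
x_B = np.linalg.solve(B, b)    # x_B = B^{-1} b ; non-basic variables are 0

x = np.zeros(A.shape[1])
x[basis] = x_B
print(x)     # x = (1.6, 1.2, 0, 0): feasible since x_B >= 0, hence a BFS
assert np.allclose(A @ x, b)   # sanity check: x satisfies Ax = b
```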

Basic feasible solutions
A basic solution x of a standard form problem associated with a basis matrix B is called:
- Feasible if x_B = B⁻¹b ≥ 0
- Degenerate if some component(s) of x_B equal 0
- Optimal if x_B = B⁻¹b ≥ 0 and c̄′ = c′ − c_B′B⁻¹A ≥ 0
Basic feasible solutions correspond to extreme points of the feasible region. The Simplex method solves an LP in standard form by looking for an optimal basic feasible solution.

Simplex method: basic directions
- Works by iteratively moving from a basic feasible solution to an adjacent one with better cost
- Adjacent basic solutions have all basic variables in common except for one: the corresponding bases differ by one column
- The movement from a basic solution x associated with basis B to an adjacent one is along a basic direction
- Moving along the j-th basic direction d increases the value of the non-basic variable x_j. The j-th basic direction d is defined as:
    d_j = 1
    d_i = 0 for all other i ≠ j such that x_i is non-basic
    d_B = −B⁻¹A_j, where d_B = (d_B(1), …, d_B(m))
- The solutions obtained by moving along this direction are x + θd

Simplex method: change of basis
A movement of θ along the j-th basic direction d:
- Changes the solution cost from z to z + θc̄_j, where c̄_j = c_j − c_B′B⁻¹A_j is the reduced cost of variable x_j
- Changes the value of the non-basic variable x_j from 0 to θ
If all reduced costs are non-negative, then none of the adjacent basic solutions has better cost: the current solution is optimal. If a variable x_j has negative reduced cost c̄_j, a better adjacent basic feasible solution is obtained by moving as far as possible along the corresponding basic direction. The maximum movement possible along a basic direction d is

    θ* = min { −x_B(i) / d_B(i) : i = 1, …, m, d_B(i) < 0 }
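One reduced-cost computation and one ratio test can be done in a few lines of numpy. The LP below is made up for illustration; the starting basis is the slack basis, so the computation is easy to check by hand.

```python
# Reduced costs and the ratio test for one Simplex step (numpy sketch,
# made-up data: min -3*x1 - 2*x2 with x3, x4 as slacks).
import numpy as np

c = np.array([-3.0, -2.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

basis = [2, 3]                                 # start from the slack basis
B = A[:, basis]
x_B = np.linalg.solve(B, b)                    # current basic variable values
c_bar = c - c[basis] @ np.linalg.solve(B, A)   # c_bar_j = c_j - c_B' B^{-1} A_j
print(c_bar)                                   # [-3. -2.  0.  0.]

j = int(np.argmin(c_bar))                      # entering variable: most negative
u = np.linalg.solve(B, A[:, j])                # u = B^{-1} A_j = -d_B
ratios = [x_B[i] / u[i] for i in range(len(u)) if u[i] > 0]
theta = min(ratios)                            # maximum step along the direction
print(j, theta)                                # x_1 enters, theta* = 3
```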

Full tableau Simplex
The tableau stores, in its zeroth row, the negative of the current cost −c_B′B⁻¹b together with the reduced cost vector c̄′ = c′ − c_B′B⁻¹A; its zeroth column holds the basic variable values B⁻¹b, and the remaining entries form the matrix B⁻¹A. Each column j of B⁻¹A is the vector u = −d_B, where d_B is the vector of basic components of the j-th basic direction.
One iteration:
- Choose an entering variable x_j with reduced cost c̄_j < 0
- Increase the value of x_j by moving along the corresponding basic direction until a basic variable x_B(i) becomes 0; the leaving variable x_B(i) is such that
    x_B(i) / u_i = min { x_B(k) / u_k : k = 1, …, m, u_k > 0 }, where u = B⁻¹A_j (so u_i = a_ij)
- Update the tableau to reflect this change: perform a pivoting operation on row i and column j, where a_ij is the element in row i and column j

Two-phase Simplex method

Original LP:
    minimize z = Σ_{j=1..n} c_j x_j
    s.t. Σ_{j=1..n} a_ij x_j = b_i,  i = 1, …, m
         x_j ≥ 0,  j = 1, …, n

Auxiliary problem (auxiliary LP):
    minimize z = Σ_{i=1..m} y_i
    s.t. Σ_{j=1..n} a_ij x_j + y_i = b_i,  i = 1, …, m
         x_j ≥ 0, j = 1, …, n;  y_i ≥ 0, i = 1, …, m

Phase 1: Solve the auxiliary problem
- The initial basis matrix is formed by the columns of the artificial variables and corresponds to the identity matrix
- If the auxiliary problem has optimal cost > 0, then the original problem is infeasible

Phase 2: Solve the original problem
- Start from the optimal tableau obtained at the end of Phase 1
- For each artificial variable y_B(i) which is still basic:
  - If row i has all zero entries in the columns of the original variables x_j, j = 1, …, n, then remove row i
  - Otherwise, find a column j corresponding to an original variable x_j with element a_ij ≠ 0 in row i, and perform a pivoting operation on a_ij
- Recompute the reduced cost of each variable x_j and the value of the objective function using the original costs c_j
- Update the tableau (zeroth row) and continue with the Simplex method
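Phase 1 in miniature: if the auxiliary problem has positive optimal cost, the original LP is infeasible. The sketch below assumes scipy is available and uses a contrived system (x1 + x2 = 1 and x1 + x2 = 3 clearly cannot hold together).

```python
# Phase 1 infeasibility detection on a contrived example (scipy sketch).
# Original system: x1 + x2 = 1, x1 + x2 = 3, x >= 0 -- clearly infeasible.
import numpy as np
from scipy.optimize import linprog

# auxiliary problem: minimize y1 + y2 over (x1, x2, y1, y2) >= 0
c_aux = np.array([0.0, 0.0, 1.0, 1.0])
A_aux = np.array([[1.0, 1.0, 1.0, 0.0],
                  [1.0, 1.0, 0.0, 1.0]])
b_aux = np.array([1.0, 3.0])

res = linprog(c_aux, A_eq=A_aux, b_eq=b_aux)
print(res.fun)   # 2.0 > 0, so the original problem is infeasible
```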

Duality
Every LP has a dual problem.

Primal problem (in standard form):
    min  c′x
    s.t. Ax = b
         x ≥ 0
Dual problem:
    max  p′b
    s.t. p′A ≤ c′
         p unrestricted

where A ∈ R^(m×n), c ∈ R^n, b ∈ R^m, x ∈ R^n, p ∈ R^m. Any feasible solution of the dual has a cost lower than or equal to that of any feasible solution of the primal (assuming the primal is a minimization problem).

Construction of the dual problem
Each constraint of the primal (except for non-negativity constraints) corresponds to a variable of its dual; each variable of the primal corresponds to a constraint of its dual.

    PRIMAL (min)                 DUAL (max)
    Constraints:                 Variables (p):
      a_i′x ≥ b_i                  p_i ≥ 0
      a_i′x ≤ b_i                  p_i ≤ 0
      a_i′x = b_i                  p_i unrestricted
    Variables (x):               Constraints:
      x_j ≥ 0                      p′A_j ≤ c_j
      x_j ≤ 0                      p′A_j ≥ c_j
      x_j unrestricted             p′A_j = c_j

Weak duality
Weak duality theorem: If x is a feasible solution to the primal and p is a feasible solution to the dual, then p′b ≤ c′x.
Corollaries (assuming the primal is a minimization problem):
a) If the optimal cost of the primal is −∞, then its dual is infeasible
b) If the optimal cost of the dual is +∞, then its primal is infeasible
c) Let x and p be feasible solutions to the primal and the dual, respectively. If p′b = c′x, then x and p are optimal

Strong duality
Strong duality theorem: If an LP has an optimal solution x*, then its dual also has an optimal solution p*, and their costs are equal, i.e., c′x* = (p*)′b.

Possibilities for a pair of primal and dual problems:

    Primal \ Dual       Optimal solution   Unbounded    Infeasible
    Optimal solution    Possible           Impossible   Impossible
    Unbounded           Impossible         Impossible   Possible
    Infeasible          Impossible         Possible     Possible
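Strong duality can be observed numerically by solving a primal and its dual side by side. A sketch assuming scipy is available (the LP is made up; scipy's linprog minimizes, so the dual max p′b becomes min −b′p):

```python
# Strong duality check: solve a small primal and its dual and compare costs
# (illustrative sketch; data are made up).
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([4.0])

# primal: min c'x  s.t. Ax = b, x >= 0
primal = linprog(c, A_eq=A, b_eq=b)

# dual: max p'b  s.t. A'p <= c, p unrestricted  (minimize -b'p for scipy)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)])

print(primal.fun, -dual.fun)   # both 4.0, as strong duality promises
```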

Complementary slackness
Complementary slackness theorem: Let x and p be feasible solutions to the primal and to its dual, respectively. Then x and p are optimal if and only if they satisfy the complementary slackness conditions

    p_i (a_i′x − b_i) = 0,  i = 1, …, m    (C1)
    (c_j − p′A_j) x_j = 0,  j = 1, …, n    (C2)

Given a pair of optimal primal and dual solutions x* and p*:
- If a primal/dual constraint is not satisfied with equality, then the corresponding dual/primal variable must be 0
- If a primal/dual variable is not 0, then the corresponding dual/primal constraint must be satisfied with equality
In the absence of degeneracy, complementary slackness can be used to obtain an optimal x* from an optimal p* and vice versa.
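Conditions (C1) and (C2) are easy to verify for a known optimal pair. A hand-checkable pure-Python sketch (the primal-dual pair is made up): primal min x1 + 2x2 s.t. x1 + x2 = 4, x ≥ 0, with x* = (4, 0); dual max 4p s.t. p ≤ 1, p ≤ 2, with p* = 1.

```python
# Verify complementary slackness for a known optimal primal-dual pair
# (pure-Python sketch; problem data are made up).
A = [[1, 1]]
b = [4]
c = [1, 2]
x = [4, 0]   # optimal primal solution
p = [1]      # optimal dual solution

# (C1): p_i * (a_i'x - b_i) = 0 for every constraint i
for i, row in enumerate(A):
    slack = sum(aij * xj for aij, xj in zip(row, x)) - b[i]
    assert p[i] * slack == 0

# (C2): (c_j - p'A_j) * x_j = 0 for every variable j
for j in range(len(c)):
    reduced = c[j] - sum(p[i] * A[i][j] for i in range(len(b)))
    assert reduced * x[j] == 0

print("complementary slackness holds, so x and p are optimal")
```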

Primal and dual basic solutions
A basis matrix B of a standard form problem P corresponds to
- a basic primal solution x with x_B = B⁻¹b, x_N = 0
- a basic dual solution p′ = c_B′B⁻¹
Dual constraint j is satisfied by p ⇔ variable x_j has non-negative reduced cost:
    p′A_j ≤ c_j  ⇔  c̄_j = c_j − c_B′B⁻¹A_j ≥ 0
Dual constraint j is binding at p ⇔ variable x_j has reduced cost 0:
    p′A_j = c_j  ⇔  c̄_j = c_j − c_B′B⁻¹A_j = 0
If these two basic primal and dual solutions are both feasible, then they are both optimal.

Dual Simplex
We call a basis matrix B
- dual-feasible if the corresponding basic dual solution p′ = c_B′B⁻¹ is feasible
- primal-feasible if the corresponding basic primal solution is feasible, i.e., x_B = B⁻¹b ≥ 0
The (primal) Simplex algorithm only considers primal-feasible basis matrices B and looks for one which is also dual-feasible. The Dual Simplex algorithm only considers dual-feasible basis matrices B and looks for one which is also primal-feasible.

Dual Simplex:
Starts with a dual-feasible basis B: all reduced costs must be non-negative. At each iteration it moves to an adjacent basic solution corresponding to a new basis matrix B̄ such that:
1. B̄ is dual-feasible, i.e., c̄_1, …, c̄_n are again non-negative
2. c_B̄′B̄⁻¹b ≥ c_B′B⁻¹b (the solution cost does not decrease)
If B̄ is also primal-feasible, the corresponding solution is optimal.
One iteration on the tableau:
- Find a basic variable x_B(i) with negative value b̄_i
- If all elements of row i (except b̄_i) are non-negative, then the optimal dual cost is +∞ and the primal problem is infeasible
- Otherwise, find a pivot column j such that
    c̄_j / |a_ij| = min { c̄_k / |a_ik| : 1 ≤ k ≤ n, a_ik < 0 }
- Perform a pivoting operation on row i and column j

Sensitivity analysis
How does the optimal cost change if the problem data (coefficients of A, b, or c) is subject to small variations?
Main idea: the optimal basis does not change if the coefficient change is small enough (in the absence of degeneracy).
A basis matrix B is optimal if it satisfies
1. B⁻¹b ≥ 0 (B is primal-feasible)
2. c̄′ = c′ − c_B′B⁻¹A ≥ 0 (B is dual-feasible)
If something changes, we can check whether the optimal basis remains optimal by verifying that it still satisfies 1 and 2 after the change. If it does not remain optimal, we can find a new optimal solution without re-solving from scratch.

A coefficient change can affect feasibility/optimality:
- Feasibility (primal feasibility): B⁻¹b ≥ 0
- Optimality conditions (dual feasibility): c′ − c_B′B⁻¹A ≥ 0
A change in a right-hand side b_i can only affect primal feasibility: check whether B remains primal-feasible.
A change in a cost c_j can only affect dual feasibility:
- If x_j is not basic, check the reduced cost of x_j
- If x_j is basic, check all the reduced costs
A change in a coefficient a_ij of a non-basic column j can only affect dual feasibility: check the reduced cost of x_j.
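The two optimality checks above can be run mechanically after a change. A numpy sketch on a made-up LP whose optimal basis is known: perturb one right-hand side and test whether the old basis stays primal-feasible.

```python
# Sensitivity check: after changing b, does the old optimal basis stay
# primal-feasible?  (numpy sketch; LP data are made up)
import numpy as np

c = np.array([-3.0, -2.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])

basis = [0, 1]                 # optimal basis of this LP: x = (2, 2)
B = A[:, basis]
Binv = np.linalg.inv(B)
assert np.all(Binv @ b >= 0)                        # 1. primal feasibility
assert np.all(c - c[basis] @ Binv @ A >= -1e-9)     # 2. dual feasibility

# perturb the right-hand side: b_2 goes from 6 to 7
b_new = np.array([4.0, 7.0])
x_B_new = Binv @ b_new
print(x_B_new)   # [3. 1.] >= 0: the same basis remains optimal
```

Dual feasibility is untouched by a change in b, so checking condition 1 alone is enough here; had some component of x_B_new gone negative, the dual Simplex would restart from this basis.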

The addition of a new variable x_(n+1) to the problem can affect dual feasibility but not primal feasibility:
- Setting x_(n+1) = 0 leaves the previous optimal solution feasible
- Check the reduced cost of x_(n+1) to verify optimality
The addition of a new constraint to the problem can affect primal feasibility.
If primal feasibility is lost after a change in the problem data but dual feasibility is not affected: continue with the dual Simplex to find a new optimal solution.
If dual feasibility is lost after a change in the problem data but primal feasibility is not affected: continue with the primal Simplex to find a new optimal solution.

Network flow problems

    minimize z = Σ_{(i,j)∈A} c_ij f_ij
    s.t. Σ_{j:(i,j)∈A} f_ij − Σ_{j:(j,i)∈A} f_ji = b_i,  i ∈ N    (1)
         0 ≤ f_ij ≤ u_ij,  (i,j) ∈ A                              (2)

In constraint (1), the first sum is the flow exiting node i and the second is the flow entering node i.
Basic solutions correspond to tree solutions:
- Defined by a set T of |N| − 1 arcs forming a tree
- Flows f_ij on all arcs in A \ T are equal to 0
- Flows f_ij on arcs in T are uniquely determined by constraints (1)
A tree solution f is feasible if f ≥ 0.
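A tiny min-cost flow instance can be written directly in the LP form above and handed to a generic LP solver. A sketch assuming scipy is available (the network is made up): send 2 units from node 1 to node 3 over arcs (1,2), (1,3), (2,3) with costs 1, 3, 1 and capacity 2 on every arc.

```python
# A tiny min-cost flow written as the LP above and solved with scipy
# (sketch; node/arc data are made up).
import numpy as np
from scipy.optimize import linprog

arcs = [(1, 2), (1, 3), (2, 3)]
cost = np.array([1.0, 3.0, 1.0])
supply = {1: 2.0, 2: 0.0, 3: -2.0}   # b_i: 2 units from node 1 to node 3

# flow conservation (1): out(i) - in(i) = b_i for every node i
nodes = [1, 2, 3]
A_eq = np.zeros((len(nodes), len(arcs)))
for k, (i, j) in enumerate(arcs):
    A_eq[nodes.index(i), k] = 1.0    # arc (i, j) leaves node i
    A_eq[nodes.index(j), k] = -1.0   # arc (i, j) enters node j
b_eq = np.array([supply[n] for n in nodes])

# capacity constraints (2): 0 <= f_ij <= 2
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 2)] * len(arcs))
print(res.x, res.fun)   # route both units via 1->2->3: total cost 4
```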

Network simplex (for uncapacitated problems)
1. Start with a feasible tree solution associated with a set of arcs T that form a tree
2. Compute the dual variables p_i, i = 1, …, n, by setting p_n = 0 and solving the system c_ij − p_i + p_j = 0, (i, j) ∈ T
3. Compute the reduced costs c̄_ij = c_ij − p_i + p_j, (i, j) ∈ A \ T
4. If c̄_ij < 0 for some (i, j), add (i, j) to T to form a cycle C; otherwise the current solution is optimal
5. Push a flow θ around C until the flow on a backward arc (k, l) of C becomes 0 (if C has no backward arcs, the optimal cost is −∞). Remove (k, l) from T and replace it by (i, j). Return to step 2

Special cases of network flow problems
Efficient tailored algorithms exist for some special cases.
Shortest path problem: label correcting method
- The label p_i of node i represents the cost of the best (shortest) path from i to n found so far
- Labels are iteratively corrected as p_i = min { p_i, min_{j:(i,j)∈A} (c_ij + p_j) }
Maximum flow problem: Ford-Fulkerson algorithm
- Iteratively checks whether there is an augmenting path
- If an augmenting path is found, it is used to increase the flow from the source to the sink; if not, the solution is optimal
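The label-correcting update fits in a few lines of pure Python. A sketch on a made-up graph (p[i] is the cost of the best path from i to the sink found so far):

```python
# Label correcting method for shortest paths to a sink (sketch; graph made up).
def label_correcting(n_nodes, arcs, sink):
    INF = float("inf")
    p = {i: INF for i in range(1, n_nodes + 1)}
    p[sink] = 0.0
    changed = True
    while changed:                   # repeat until no label improves
        changed = False
        for (i, j, c_ij) in arcs:
            if c_ij + p[j] < p[i]:   # p_i = min(p_i, c_ij + p_j)
                p[i] = c_ij + p[j]
                changed = True
    return p

arcs = [(1, 2, 1.0), (1, 3, 4.0), (2, 3, 1.0), (2, 4, 5.0), (3, 4, 1.0)]
print(label_correcting(4, arcs, sink=4))   # shortest 1->4 path costs 3.0
```

With non-negative arc costs the loop terminates; the loop order does not matter for correctness, only for speed (specific scan orders give Bellman-Ford-style bounds).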

Integer programming problems
Integer programming problems (IPs) are more difficult to solve than LPs with all continuous variables: no efficient algorithm is known for solving a general IP.
Different formulations are possible for the same IP. If we remove the integrality constraints from an IP formulation, we obtain its LP relaxation, which is an LP.
An IP formulation is the strongest possible if the feasible region of its LP relaxation is the convex hull of the integer solutions: in this case, solving the LP relaxation solves the IP.

Integer programming methods
Gomory cutting plane method:
- Iteratively solves the LP relaxation and then adds a cutting plane that cuts off its optimal fractional solution
- The cutting plane is derived from the optimal Simplex tableau
Branch and bound (for a minimization problem):
- Splits the problem into subproblems
- Solves the LP relaxation of each subproblem to obtain a lower bound (lower estimate) on its optimal cost
- Subproblems whose lower bound is greater than or equal to the cost of the best feasible integer solution found so far are discarded
Branch and cut: extends branch and bound by adding cutting planes to the subproblems.
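The branch and bound scheme above can be sketched in a few dozen lines, using an LP solver for the relaxations. This is a bare-bones illustration assuming scipy is available; the two-variable IP and the tolerances are made up, not course material.

```python
# Bare-bones branch and bound for a tiny IP, using scipy for LP relaxations
# (sketch; data made up):
#   max 5*x1 + 4*x2  s.t.  6*x1 + 4*x2 <= 24,  x1 + 2*x2 <= 6,  x integer >= 0
import math
import numpy as np
from scipy.optimize import linprog

c = np.array([-5.0, -4.0])                 # minimize -z
A = np.array([[6.0, 4.0], [1.0, 2.0]])
b = np.array([24.0, 6.0])

best = (math.inf, None)                    # (best cost, best integer solution)
stack = [[(0, None), (0, None)]]           # per-variable (lower, upper) bounds
while stack:
    bounds = stack.pop()
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
    if not res.success or res.fun >= best[0]:   # infeasible, or bound no better
        continue                                # than incumbent: discard
    frac = [j for j, v in enumerate(res.x) if abs(v - round(v)) > 1e-6]
    if not frac:                                # integral: update incumbent
        best = (res.fun, res.x.round())
        continue
    j, v = frac[0], res.x[frac[0]]              # branch on a fractional variable
    lo, hi = bounds[j]
    left = [list(t) for t in bounds]
    left[j] = (lo, math.floor(v))               # subproblem with x_j <= floor(v)
    right = [list(t) for t in bounds]
    right[j] = (math.ceil(v), hi)               # subproblem with x_j >= ceil(v)
    stack += [left, right]

print(best)   # optimum z = 20 at x = (4, 0)
```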

Exams
- Thursday 22.10.2015, 13:00-16:00
- Monday 14.12.2015, 9:00-12:00
- Tuesday 24.05.2016, 9:00-12:00
Old exams can be found in MyCourses within the section Additional Reading. Calculators are allowed (not graphical), but no other material. Remember to sign up for the exam via WebOodi. The registration deadline for the first exam is this Thursday (15.10.2015).