Marginal and Sensitivity Analyses


8.1 Marginal and Sensitivity Analyses
Katta G. Murty, IOE 510, LP, U. of Michigan, Ann Arbor, Winter 1997.

Consider the LP in standard form:

min z = cx, subject to Ax = b, x ≥ 0

where A is m × n with rank m.

Theorem: If this LP has an optimal nondegenerate BFS, then its dual optimal solution is unique, and it is the marginal value vector for this LP.

Theorem: If this LP has an optimal solution, but no optimal nondegenerate BFS, then the dual optimal solution may not be unique. In this case a marginal value vector may not exist, but positive and negative marginal values exist for each b_i. They are:

Positive MV w.r.t. b_i = max{π_i : over dual optimal solutions π}
Negative MV w.r.t. b_i = min{π_i : over dual optimal solutions π}

Let f(b) denote the optimum objective value as a function of the RHS constants vector b; f(b) is defined only over b ∈ Pos(A).
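The first theorem can be checked numerically. The sketch below uses a small assumed LP in standard form (illustrative data, not the example tableau from these notes): it solves the LP with SciPy, recovers the dual solution π = c_B B^{-1} from the optimal basis, and verifies that π is the marginal value vector for b.

```python
import numpy as np
from scipy.optimize import linprog

# Small assumed LP in standard form: min cx, Ax = b, x >= 0 (illustrative data).
c = np.array([-3.0, -5, 0, 0, 0])
A = np.array([[1.0, 0, 1, 0, 0],
              [0.0, 2, 0, 1, 0],
              [3.0, 2, 0, 0, 1]])
b = np.array([4.0, 12, 18])

res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
basic = np.flatnonzero(res.x > 1e-9)      # nondegenerate BFS: basic vars are the positive ones
B = A[:, basic]
pi = c[basic] @ np.linalg.inv(B)          # dual optimal solution pi = c_B B^{-1}

# pi is the marginal value vector: f(b + eps e_i) - f(b) == eps * pi_i
eps = 0.1
for i in range(len(b)):
    bp = b.copy()
    bp[i] += eps
    f_new = linprog(c, A_eq=A, b_eq=bp, bounds=(0, None), method="highs").fun
    assert abs((f_new - res.fun) - eps * pi[i]) < 1e-7
print("marginal values:", pi)
```

The perturbations stay small enough that the optimal basis does not change, which is what makes the finite differences match π exactly.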

1. If f(b) = -∞ for some b ∈ Pos(A), then f(b) = -∞ for all b ∈ Pos(A).

2. Positive homogeneity: If f(b) is finite for some b, then f(0) = 0 and f(λb) = λf(b) for all λ ≥ 0.

3. Convexity: f(b) is a piecewise linear convex function defined over Pos(A).

4. Subgradient property: Let π¹ be a dual optimal solution when b = b¹. Then f(b) ≥ π¹b for all b ∈ Pos(A), i.e., π¹ is a subgradient of f(b) at b¹.
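Properties 2 and 3 can be illustrated numerically on a small assumed LP (illustrative data), treating the optimum objective value as a function f of the RHS vector:

```python
import numpy as np
from scipy.optimize import linprog

# Assumed LP data (illustrative): min cx, Ax = b, x >= 0.
c = np.array([-3.0, -5, 0, 0, 0])
A = np.array([[1.0, 0, 1, 0, 0],
              [0.0, 2, 0, 1, 0],
              [3.0, 2, 0, 0, 1]])

def f(b):
    """Optimum objective value as a function of the RHS vector b."""
    return linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs").fun

b1 = np.array([4.0, 12, 18])
b2 = np.array([6.0, 10, 20])

assert abs(f(np.zeros(3))) < 1e-9                    # f(0) = 0
lam = 2.5
assert abs(f(lam * b1) - lam * f(b1)) < 1e-6         # positive homogeneity
t = 0.3
assert f(t * b1 + (1 - t) * b2) <= t * f(b1) + (1 - t) * f(b2) + 1e-7   # convexity
```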

8.2 Sensitivity Analysis

Also called post-optimality analysis. It deals with efficient techniques for finding the new optimum when small changes occur in the data.

Consider the LP in standard form: min z = cx, subject to Ax = b, x ≥ 0, where A is m × n with rank m.

Example: Original tableau

x1  x2  x3  x4  x5  x6  -z |  b
 1   2   0   1   0   6   0 | 11
 0   1   1   3   2   1   0 |  6
 1   2   1   3   1   5   0 | 13
 3   2  -3   6  10   5   1 |  0

x_j ≥ 0 for all j; minimize z.

Optimum inverse tableau

Basic var.    Inverse tableau       Basic values
x1            -1  -2   2   0             3
x2             1   1  -1   0             4
x3            -1   0   1   0             2
-z            -2   4  -1   1           -11

The last row is (-π, 1 | -z̄): the dual solution is π = (2, -4, 1), and the optimal objective value is z̄ = 11.

We assume that an optimal inverse tableau for the problem is available. Let x_B be the present optimal basic vector, and x̄, π the primal and dual optimal solutions.

Cost Coefficient Ranging

This finds the optimality interval for each original cost coefficient.

OPTIMALITY INTERVAL OF A DATA ELEMENT = the set of all values of that data element for which the present optimal basic vector (or present optimal solution) remains optimal, as the element varies while all other data remains fixed at current values.
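The inverse tableau can be recomputed directly from the example's data. A minimal sketch, assuming the basic vector (x1, x2, x3) with basic cost row (3, 2, -3):

```python
import numpy as np

# Basis of the example problem: columns of x1, x2, x3 in A.
B = np.array([[1.0, 2, 0],
              [0.0, 1, 1],
              [1.0, 2, 1]])
c_B = np.array([3.0, 2, -3])
b = np.array([11.0, 6, 13])

B_inv = np.linalg.inv(B)
x_B = B_inv @ b                 # basic variable values: (3, 4, 2)
pi = c_B @ B_inv                # dual solution: (2, -4, 1)
z = pi @ b                      # optimal objective value: 11
```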

1. Nonbasic Cost Coefficient Ranging

Let x_j be a present nonbasic variable, and let c_j, its cost coefficient, change from its present value while all other data remains fixed at current values. With this change, the only thing that changes is the relative cost coefficient of x_j:

c̄_j = c_j - πA.j

c̄_j remains ≥ 0 as long as c_j ≥ πA.j. So the optimality interval for c_j is [πA.j, +∞). If c_j becomes < πA.j, enter x_j into x_B and continue simplex iterations until termination again.

Example: Find the range for c_5. Find the new optimal solution if c_5 changes from its present value 10 to 6.

2. Ranging a Basic Cost Coefficient

Let x_p be a basic variable in the present optimal basic vector x_B. If its cost coefficient c_p changes, the dual solution changes. So, to compute the optimality interval for c_p, do the following: in c_B, replace the cost coefficient of the basic variable x_p by the parameter c_p,
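A sketch of nonbasic cost ranging on a small assumed LP (illustrative data, not the notes' example): for each nonbasic x_j, the interval's lower end is πA.j.

```python
import numpy as np

# Assumed LP data and optimal basic vector (illustrative).
A = np.array([[1.0, 0, 1, 0, 0],
              [0.0, 2, 0, 1, 0],
              [3.0, 2, 0, 0, 1]])
c = np.array([-3.0, -5, 0, 0, 0])
basic, nonbasic = [0, 1, 2], [3, 4]
pi = c[basic] @ np.linalg.inv(A[:, basic])

for j in nonbasic:
    lower = pi @ A[:, j]                   # optimality interval is [pi A_.j, +inf)
    print(f"c_{j + 1} can vary in [{lower}, +inf)")
    assert c[j] >= lower                   # current c_j lies inside its interval
```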

and denote the result by c_B(c_p). Then

π(c_p) = dual basic solution as a function of c_p = c_B(c_p)B^{-1}

Each component of π(c_p) is an affine function of c_p, i.e., has the form π_i^0 + c_p π_i^1, where π_i^0, π_i^1 are constants. Now compute each nonbasic relative cost coefficient c̄_j as a function of c_p; it is given by

c̄_j(c_p) = c_j - π(c_p)A.j

Again, each of these is an affine function of c_p. Express the condition that all these relative cost coefficients must be ≥ 0. This yields the optimality interval for c_p.

To get the new optimal solution when c_p changes to a value outside its optimality interval, compute all nonbasic c̄_j(c_p); if some of them are < 0, select one of the corresponding variables as the entering variable, and continue the application of the simplex algorithm until termination.

Example: Find the optimality range for c_1, and the new optimum if c_1 changes.
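The affine-function bookkeeping above can be sketched in a few lines on a small assumed LP (illustrative data, not the notes' example), ranging the cost coefficient of a basic variable x_1:

```python
import numpy as np

# Assumed LP data and optimal basic vector (illustrative).
A = np.array([[1.0, 0, 1, 0, 0],
              [0.0, 2, 0, 1, 0],
              [3.0, 2, 0, 0, 1]])
c = np.array([-3.0, -5, 0, 0, 0])
basic, nonbasic, p = [0, 1, 2], [3, 4], 0      # p = position of x_1 in the basic vector
B_inv = np.linalg.inv(A[:, basic])

c0 = c[basic].copy()
c0[p] = 0.0
pi0 = c0 @ B_inv                               # pi(c_p) = pi0 + c_p * pi1
pi1 = B_inv[p]

lo, hi = -np.inf, np.inf
for j in nonbasic:
    alpha = c[j] - pi0 @ A[:, j]               # cbar_j(c_p) = alpha - c_p * beta
    beta = pi1 @ A[:, j]
    if beta > 1e-12:                           # cbar_j >= 0 requires c_p <= alpha / beta
        hi = min(hi, alpha / beta)
    elif beta < -1e-12:                        # cbar_j >= 0 requires c_p >= alpha / beta
        lo = max(lo, alpha / beta)
print("optimality interval for c_1:", lo, hi)
```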

Ranging RHS Constants

The optimality interval for an RHS constant b_i = the set of all values of b_i for which the present optimal basic vector x_B remains optimal, as b_i varies while all other data remains fixed at current values.

To find the optimality interval for b_i, replace its present value in the RHS constants vector b by the parameter b_i, and denote the result by b(b_i). The vector of basic values as a function of b_i is B^{-1}b(b_i). Each component of this vector is an affine function of b_i. Express the condition that each of them must be ≥ 0; this yields the optimality interval for b_i.

As b_i varies in its optimality interval, the dual optimal solution remains unchanged, but the primal optimal solution is given by:

Nonbasic variables = 0
Basic variables vector = B^{-1}b(b_i)
Optimal objective value = πb(b_i)

If the new value of b_i is outside its optimality interval, to find the new optimal solution
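A sketch of RHS ranging on a small assumed LP (illustrative data, not the notes' example): only column i of B^{-1} matters, since it is the rate of change of the basic values with respect to b_i.

```python
import numpy as np

# Assumed LP data and optimal basic vector (illustrative); vary b_i.
A = np.array([[1.0, 0, 1, 0, 0],
              [0.0, 2, 0, 1, 0],
              [3.0, 2, 0, 0, 1]])
b = np.array([4.0, 12, 18])
basic, i = [0, 1, 2], 0
B_inv = np.linalg.inv(A[:, basic])

x_B = B_inv @ b                                # current basic values
d = B_inv[:, i]                                # derivative of basic values w.r.t. b_i
# Require x_B + delta * d >= 0, where delta = (new b_i) - b[i]:
lo = max((-x_B[k] / d[k] for k in range(len(d)) if d[k] > 1e-12), default=-np.inf)
hi = min((-x_B[k] / d[k] for k in range(len(d)) if d[k] < -1e-12), default=np.inf)
print("optimality interval for b_1:", b[i] + lo, b[i] + hi)
```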

use dual simplex iterations.

Example: Find the optimality range for b_1, and the new optimal solution when b_1 changes to 15.

Ranging Input-Output Coefficients in a Nonbasic Column

Let x_j be a present nonbasic variable, and a_ij an input-output coefficient in its original column A.j. To find the optimality range for a_ij, replace its present value in the column A.j by the parameter a_ij, and call the result A.j(a_ij). Then express the condition that the relative cost coefficient of x_j remain nonnegative:

c̄_j(a_ij) = c_j - πA.j(a_ij) ≥ 0

This yields the optimality interval for a_ij. If a_ij changes to a value outside this interval, to get the new optimal solution, enter x_j into the present basic vector x_B and continue the application of the simplex algorithm.

To Introduce a New Nonnegative Variable

Find the relative cost coefficient of the new variable with respect to the present optimal basic vector x_B.
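Since c̄_j(a_ij) is affine in a_ij with slope -π_i, the interval comes from a single inequality. A sketch on a small assumed LP (illustrative data, not the notes' example):

```python
import numpy as np

# Assumed LP data and optimal basic vector (illustrative).
A = np.array([[1.0, 0, 1, 0, 0],
              [0.0, 2, 0, 1, 0],
              [3.0, 2, 0, 0, 1]])
c = np.array([-3.0, -5, 0, 0, 0])
basic = [0, 1, 2]
pi = c[basic] @ np.linalg.inv(A[:, basic])

j, i = 3, 2                                    # vary a_ij = A[i, j]; x_j is nonbasic
# cbar_j(a) = c_j - pi @ A_.j(a) = rest - pi[i] * a, where:
rest = c[j] - pi @ A[:, j] + pi[i] * A[i, j]
# Optimality requires cbar_j(a) >= 0:
if pi[i] > 1e-12:
    print("interval: a_ij <=", rest / pi[i])
elif pi[i] < -1e-12:
    print("interval: a_ij >=", rest / pi[i])
else:
    print("cbar_j does not depend on a_ij")
```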

If it is ≥ 0, extend the present optimal solution by including the new variable at value 0; this is the new optimal solution. If the relative cost coefficient of the new variable is < 0, bring it into x_B and continue the application of the simplex algorithm.

Example: Include x_7 ≥ 0 with column vector (1, 2, 3)^T and cost coefficient 7. What happens if the cost coefficient of the new variable is 10 instead of 7?

Introducing a New Inequality Constraint

Let the new constraint be A_{m+1}.x ≤ b_{m+1}. If the present optimal solution x̄ satisfies it, it remains optimal; terminate. If x̄ violates it, let x_{n+1} be the nonnegative slack variable corresponding to the new inequality. Construct the inverse tableau corresponding to the basic vector (x_B, x_{n+1}). This basic vector is primal infeasible for the augmented problem, but dual feasible. Apply dual simplex iterations to get the new optimum. The new basis inverse is obtained from

[ B  0 ]^{-1}   [  B^{-1}   0 ]
[ a  1 ]      = [ -aB^{-1}  1 ]

where a denotes the row of coefficients of the present basic variables in the new constraint.
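The basis-inverse update formula can be verified numerically; here a is an assumed row of coefficients of the present basic variables in a hypothetical new constraint.

```python
import numpy as np

# Basis of the example problem (columns of x1, x2, x3 in A).
B = np.array([[1.0, 2, 0],
              [0.0, 1, 1],
              [1.0, 2, 1]])
a = np.array([1.0, 1, 2])       # assumed new-constraint coefficients (hypothetical)

B_inv = np.linalg.inv(B)
M = np.block([[B, np.zeros((3, 1))],
              [a.reshape(1, 3), np.ones((1, 1))]])
M_inv = np.block([[B_inv, np.zeros((3, 1))],
                  [(-a @ B_inv).reshape(1, 3), np.ones((1, 1))]])

# [[B, 0], [a, 1]]^{-1} = [[B^{-1}, 0], [-a B^{-1}, 1]]:
assert np.allclose(M @ M_inv, np.eye(4))
```

This is why no refactorization is needed when a constraint is added: the new inverse tableau reuses B^{-1} and appends one extra row and column.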
