Chapter 5: The Theory of the Simplex Method


College of Management, NCTU — Operations Research I, Fall

Chapter 5: The Theory of the Simplex Method

Terminology

- Constraint boundary equation: For any constraint (functional or nonnegativity), replace its <=, =, or >= sign by an = sign. Each such equation forms a hyperplane (a flat geometric shape) in n-dimensional space. This hyperplane is the constraint boundary for the corresponding constraint.
- Corner-point feasible (CPF) solution: A feasible solution that does not lie on any line segment connecting two other feasible solutions. Each CPF solution lies at the intersection of n constraint boundary equations (and satisfies the other constraints).
- Adjacent CPF solutions: Two CPF solutions are adjacent if the line segment connecting them is an edge of the feasible region.
- Edge of the feasible region: A feasible line segment that lies at the intersection of n-1 constraint boundaries.
- One more property of CPF solutions (when feasible solutions exist and the feasible region is bounded): there are only a finite number of CPF solutions.
- Defining equations: The constraint boundary equations that yield (define) the indicated CPF solution.
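Since each CPF solution lies at the intersection of n constraint boundary equations, the CPF solutions of a small problem can be enumerated by brute force: intersect every set of n boundaries and keep the feasible points. A minimal sketch, assuming the standard Wyndor data (maximize 3x1 + 5x2 subject to x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x1, x2 >= 0); the function name is illustrative:

```python
# Enumerate corner-point feasible (CPF) solutions of a 2-variable LP by
# intersecting every pair of constraint boundary equations and keeping
# the feasible intersections. Wyndor data assumed from the example.
from fractions import Fraction
from itertools import combinations

# Each constraint a1*x1 + a2*x2 <= b (nonnegativity written as -x_j <= 0).
constraints = [
    (Fraction(1), Fraction(0), Fraction(4)),    # x1 <= 4
    (Fraction(0), Fraction(2), Fraction(12)),   # 2x2 <= 12
    (Fraction(3), Fraction(2), Fraction(18)),   # 3x1 + 2x2 <= 18
    (Fraction(-1), Fraction(0), Fraction(0)),   # x1 >= 0
    (Fraction(0), Fraction(-1), Fraction(0)),   # x2 >= 0
]

def cpf_solutions(cons):
    points = set()
    # n = 2, so each CPF solution is defined by 2 boundary equations.
    for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
        det = a1 * c2 - a2 * c1
        if det == 0:                      # parallel boundaries: no corner
            continue
        x1 = (b * c2 - a2 * d) / det      # Cramer's rule on the 2x2 system
        x2 = (a1 * d - b * c1) / det
        # Keep the intersection only if it satisfies every constraint.
        if all(p * x1 + q * x2 <= r for p, q, r in cons):
            points.add((x1, x2))
    return points

print(sorted(cpf_solutions(constraints)))
```

Running this recovers the five CPF solutions of the Wyndor example: (0, 0), (0, 6), (2, 6), (4, 0), (4, 3).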

In the Wyndor example, what are the defining equations for each CPF solution?

Indicating variables for constraint boundary equations

Given a CPF solution, how do we tell whether a particular constraint boundary equation is one of its defining equations? Each constraint has an indicating variable that completely indicates (by whether its value is zero) whether that constraint's boundary equation is satisfied by the current solution.

Type of constraint | Form | Constraint in augmented form | Constraint boundary equation | Indicating variable
Nonnegativity | x_j >= 0 | x_j >= 0 | x_j = 0 | x_j
Functional (<=) | sum_j a_ij x_j <= b_i | sum_j a_ij x_j + x_{n+i} = b_i | sum_j a_ij x_j = b_i | x_{n+i} (slack)
Functional (=) | sum_j a_ij x_j = b_i | sum_j a_ij x_j + xbar_{n+i} = b_i | sum_j a_ij x_j = b_i | xbar_{n+i} (artificial)
Functional (>=) | sum_j a_ij x_j >= b_i | sum_j a_ij x_j + xbar_{n+i} - x_i^s = b_i | sum_j a_ij x_j = b_i | xbar_{n+i} - x_i^s

Indicating variable = 0 (a nonbasic variable) <=> the corresponding constraint boundary equation is satisfied <=> it is a defining equation.

What are the indicating variables for the Wyndor example?

CPF solution | BF solution | Nonbasic variables | Defining equations
(0, 0) | (0, 0, 4, 12, 18) | x1, x2 | x1 = 0, x2 = 0
(0, 6) | (0, 6, 4, 0, 6) | x1, x4 | x1 = 0, 2x2 = 12
(2, 6) | (2, 6, 2, 0, 0) | x4, x5 | 2x2 = 12, 3x1 + 2x2 = 18
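The indicating variables can be checked numerically: at a given CPF solution, evaluate each indicating variable, and those that equal zero mark the defining equations. A small sketch, assuming the standard Wyndor constraints (x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18); the function name is illustrative:

```python
# For a CPF solution of the Wyndor example, the nonbasic (zero-valued)
# indicating variables identify the defining equations.
from fractions import Fraction

def defining_equations(x1, x2):
    # Indicating variables: x1, x2 for the nonnegativity constraints;
    # slacks x3, x4, x5 for the three functional constraints.
    indicators = {
        "x1 = 0": x1,
        "x2 = 0": x2,
        "x1 = 4": 4 - x1,                        # slack x3
        "2x2 = 12": 12 - 2 * x2,                 # slack x4
        "3x1 + 2x2 = 18": 18 - 3 * x1 - 2 * x2,  # slack x5
    }
    return [eq for eq, value in indicators.items() if value == 0]

# At the optimal CPF solution (2, 6), the slacks x4 and x5 are zero:
print(defining_equations(Fraction(2), Fraction(6)))
# prints ['2x2 = 12', '3x1 + 2x2 = 18']
```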

(4, 3) | (4, 3, 0, 6, 0) | x3, x5 | x1 = 4, 3x1 + 2x2 = 18
(4, 0) | (4, 0, 0, 12, 6) | x2, x3 | x2 = 0, x1 = 4

It is possible for a basic variable to be zero (degeneracy). This implies that the CPF solution may satisfy another constraint boundary equation in addition to its n defining equations.

The Simplex Method and the CPF/BF Solutions

When the simplex method chooses an entering basic variable, it is choosing one of the edges emanating from the current CPF solution to move along:
- Deleting one constraint boundary (defining equation) from the set of n constraint boundaries defining the current solution.
- Increasing this variable from zero corresponds to moving along this edge, i.e., moving away from the current solution along the intersection of the remaining n-1 constraint boundaries.
- Having one of the basic variables (the leaving basic variable) decrease to zero corresponds to reaching the first new constraint boundary at the other end of this edge of the feasible region.

Recall how we perform the simplex method. In augmented form, the Wyndor example is:

Z - 3x1 - 5x2 = 0
x1 + x3 = 4
2x2 + x4 = 12
3x1 + 2x2 + x5 = 18

What do we really need in a simplex iteration?
- The objective row
- The entering column
- The current solution (the right-hand side)

If we are dealing with too many numbers, can we perform the iteration in a more efficient way? Here comes the revised simplex method.

The Matrix Representation of an LP Problem

Max Z = cx
s.t. Ax <= b, x >= 0

where c is a row vector of objective coefficients, x and b are column vectors, and A is the matrix of constraint coefficients. After adding the slack variables x_s, we have the augmented form of the problem, so that the constraints become

[A, I] [x; x_s] = b, with x >= 0 and x_s >= 0.

One of the key features of the revised simplex method is the way it solves for each new BF solution after identifying its basic and nonbasic variables. In a specific iteration, variables are either basic or nonbasic. Thus, we can represent an LP problem (in augmented form) as

Max Z = c_B x_B + c_N x_N
s.t. B x_B + N x_N = b
x_B >= 0, x_N >= 0
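The augmented form can be built mechanically: append an identity block to A, and the slack values b - Ax complete any feasible x into a solution of [A, I][x; x_s] = b. A sketch with the Wyndor data; the helper names are illustrative:

```python
# Build the augmented form [A, I][x; x_s] = b for Ax <= b and verify it
# at a feasible point by computing the slacks. Wyndor data assumed.
from fractions import Fraction

A = [[1, 0], [0, 2], [3, 2]]
b = [4, 12, 18]

def augment(A):
    m = len(A)
    # [A, I]: the original columns followed by one identity column per row.
    return [row + [int(i == j) for j in range(m)] for i, row in enumerate(A)]

def slacks(A, b, x):
    # x_s = b - Ax; feasibility of x means every slack is nonnegative.
    return [bi - sum(aij * xj for aij, xj in zip(row, x))
            for row, bi in zip(A, b)]

A_aug = augment(A)
x = [Fraction(2), Fraction(6)]       # the optimal CPF solution
x_s = slacks(A, b, x)
full = x + x_s                       # the augmented solution [x; x_s]
# [A, I][x; x_s] reproduces b exactly.
assert all(sum(a * v for a, v in zip(row, full)) == bi
           for row, bi in zip(A_aug, b))
print(x_s)
```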

Example: the Wyndor example.

Max Z = 3x1 + 5x2
s.t. x1 + x3 = 4
     2x2 + x4 = 12
     3x1 + 2x2 + x5 = 18
     x1, x2, x3, x4, x5 >= 0

Solve the above model:

B x_B + N x_N = b
x_B = B^-1 b - B^-1 N x_N

The values of the basic variables: x_B* = B^-1 b (setting x_N = 0).

Z = c_B x_B + c_N x_N
  = c_B (B^-1 b - B^-1 N x_N) + c_N x_N
  = c_B B^-1 b + (c_N - c_B B^-1 N) x_N

Move the term of the nonbasic variables to the left-hand side to follow the same representation as the tableau:

Z + (c_B B^-1 N - c_N) x_N = c_B B^-1 b

The value of the objective function: Z* = c_B B^-1 b.
Coefficients of the nonbasic variables in the objective row: c_B B^-1 N - c_N.

In the Wyndor example:
- Iteration 0: basic variables {x3, x4, x5}, nonbasic variables {x1, x2}.
- Final iteration: basic variables {x3, x2, x1}, nonbasic variables {x4, x5}.
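The identities x_B* = B^-1 b and Z* = c_B B^-1 b can be verified for the final Wyndor basis. A sketch in exact arithmetic, with B assembled from the columns of [A, I] for the basic variables x1, x2, x3; Cramer's rule is used here as a convenient stand-in for a general linear solver:

```python
# Verify x_B* = B^-1 b and Z* = c_B B^-1 b at the final Wyndor basis.
from fractions import Fraction

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def solve3(M, rhs):
    # Cramer's rule: replace column k of M by the right-hand side.
    d = det3(M)
    out = []
    for k in range(3):
        Mk = [[rhs[i] if j == k else M[i][j] for j in range(3)]
              for i in range(3)]
        out.append(Fraction(det3(Mk)) / d)
    return out

# Columns of [A, I] for the basic variables x1, x2, x3.
B = [[1, 0, 1],   # coefficients of x1, x2, x3 in constraint 1
     [0, 2, 0],   # constraint 2
     [3, 2, 0]]   # constraint 3
b = [Fraction(4), Fraction(12), Fraction(18)]
c_B = [Fraction(3), Fraction(5), Fraction(0)]    # costs of x1, x2, x3

x_B = solve3(B, b)                               # x_B* = B^-1 b
Z = sum(ci * xi for ci, xi in zip(c_B, x_B))     # Z* = c_B B^-1 b
print(x_B, Z)
```

This recovers (x1, x2, x3) = (2, 6, 2) and Z* = 36, matching the final BF solution of the example.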

The Revised Simplex Method

Recall what we really need in a simplex iteration.

- Determine the entering variable from the objective row: c_B B^-1 N - c_N.
- Compute the entering column: we only need to compute the column of B^-1 N corresponding to the entering variable. Let a be the corresponding column of N (i.e., the column of N for the entering variable) and d the needed column of B^-1 N; then d = B^-1 a (a is known since N is known).
- Determine the leaving variable (the minimum ratio test). That is, x_B = x_B* - x_e d (recall x_B = B^-1 b - B^-1 N x_N), where x_B* is the current value of the basic variables and x_e is the entering variable.
- Optimality test: calculate c_B B^-1 N - c_N (the coefficients of the nonbasic variables in the objective row). If they are all nonnegative, stop: the optimum has been found.
- Update x_B*, x_B, x_N, B, N, c_B, c_N.

Repeat until the optimal solution is found.
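The steps above can be sketched end to end. A minimal revised-simplex implementation for max cx subject to Ax <= b, x >= 0, assuming b >= 0 (so the all-slack basis is an initial BF solution); function names are illustrative, and the smallest-index (Bland) rule is used to pick the entering variable instead of the usual most-negative-coefficient rule:

```python
# A minimal revised-simplex sketch: max cx s.t. Ax <= b, x >= 0, b >= 0.
# Exact Fraction arithmetic; Bland's rule also guards against cycling.
from fractions import Fraction

def solve(M, rhs):
    """Solve the square system M y = rhs by Gauss-Jordan elimination."""
    n = len(M)
    T = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if T[r][col] != 0)
        T[col], T[piv] = T[piv], T[col]
        T[col] = [v / T[col][col] for v in T[col]]
        for r in range(n):
            if r != col and T[r][col] != 0:
                f = T[r][col]
                T[r] = [v - f * w for v, w in zip(T[r], T[col])]
    return [T[r][n] for r in range(n)]

def revised_simplex(c, A, b):
    m, n = len(A), len(c)
    # Columns of [A, I]: originals are 0..n-1, slacks are n..n+m-1.
    cols = [[Fraction(A[i][j]) for i in range(m)] for j in range(n)]
    cols += [[Fraction(1) if i == k else Fraction(0) for i in range(m)]
             for k in range(m)]
    cost = [Fraction(cj) for cj in c] + [Fraction(0)] * m
    basis = list(range(n, n + m))           # start from the slack basis
    rhs = [Fraction(bi) for bi in b]
    while True:
        B = [[cols[j][i] for j in basis] for i in range(m)]
        xB = solve(B, rhs)                  # current x_B* = B^-1 b
        # Simplex multipliers y solve yB = c_B, i.e. B^T y = c_B.
        BT = [[B[i][j] for i in range(m)] for j in range(m)]
        y = solve(BT, [cost[j] for j in basis])
        # Pricing: a nonbasic j with c_j - y a_j > 0 can still improve Z.
        entering = next((j for j in range(n + m) if j not in basis and
                         cost[j] - sum(y[i] * cols[j][i]
                                       for i in range(m)) > 0), None)
        if entering is None:                # optimality test passed
            x = [Fraction(0)] * (n + m)
            for i, j in enumerate(basis):
                x[j] = xB[i]
            return x[:n], sum(cost[j] * x[j] for j in range(n + m))
        d = solve(B, cols[entering])        # entering column d = B^-1 a
        ratios = [(xB[i] / d[i], i) for i in range(m) if d[i] > 0]
        if not ratios:
            raise ValueError("unbounded LP")
        _, leave = min(ratios)              # minimum ratio test
        basis[leave] = entering             # update the basis

# Wyndor: max 3x1 + 5x2 s.t. x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18.
x, Z = revised_simplex([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
print(x, Z)
```

Note that each iteration only solves systems with the current basis matrix B; the full tableau of B^-1 N columns is never formed, which is the whole point of the revised method.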

Example for the revised simplex method: the Wyndor example.

Max Z = 3x1 + 5x2
s.t. x1 + x3 = 4
     2x2 + x4 = 12
     3x1 + 2x2 + x5 = 18
     x1, x2, x3, x4, x5 >= 0

Some basic concepts of linear algebra and matrix operations

A row operation (multiplying one row by a constant and adding it to another row) can be recorded as a matrix: performing the same operation on the identity matrix yields a matrix E, and the original operation on any matrix is equivalent to pre-multiplying that matrix by E. Likewise, a sequence of row operations (e.g., multiply the 2nd row by a constant and add it to the 1st row, then multiply the 2nd row by another constant and add it to the 3rd row) is equivalent to pre-multiplying by the product of the corresponding elementary matrices.

For the original set of equations, the matrix form is

[1  -c  0] [Z  ]   [0]
[0   A  I] [x  ] = [b]
           [x_s]

that is, Z - cx = 0 and Ax + I x_s = b.

After any iteration, we know that x_B* = B^-1 b and Z* = c_B B^-1 b. Look at the right-hand side: the original vector is [0; b], and it is now [c_B B^-1 b; B^-1 b].
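The claim that a row operation equals pre-multiplication by an elementary matrix is easy to check numerically. A sketch with an illustrative operation (add 2 times row 2 to row 1; the chosen multiplier is an example, not taken from the text), applied to the Wyndor constraint rows [A, I | b]:

```python
# A row operation on M equals pre-multiplying M by the elementary matrix
# E obtained by performing the same operation on the identity matrix.
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

M = [[Fraction(v) for v in row]
     for row in [[1, 0, 1, 4], [0, 2, 0, 12], [3, 2, 0, 18]]]

# Operation: new row 1 = old row 1 + 2 * (old row 2).
manual = [[a + 2 * b for a, b in zip(M[0], M[1])], M[1], M[2]]

# The same operation applied to the 3x3 identity gives E.
E = [[1, 2, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(E, M) == manual
print(matmul(E, M)[0])
# prints the updated first row: [1, 4, 1, 28] (as Fractions)
```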

What is the operation on the right-hand side? It is pre-multiplication by

[1  c_B B^-1]
[0  B^-1   ]

So we know what happens on the left-hand side as well:

[1  c_B B^-1] [1  -c  0]   [1  c_B B^-1 A - c  c_B B^-1]
[0  B^-1   ] [0   A  I] = [0  B^-1 A           B^-1    ]

Thus, the desired matrix form of the set of equations after any iteration is:

Iteration | Basic variable | Eq. | Z | Original variables | Slack variables | Right side
0 | Z | (0) | 1 | -c | 0 | 0
0 | x_B | (1, ..., m) | 0 | A | I | b
Any | Z | (0) | 1 | c_B B^-1 A - c | c_B B^-1 | c_B B^-1 b
Any | x_B | (1, ..., m) | 0 | B^-1 A | B^-1 | B^-1 b

Let's look at the second (final) iteration of the Wyndor example:

B^-1 = [1   1/3  -1/3]
       [0   1/2   0  ]
       [0  -1/3   1/3]

c_B B^-1 = [0, 3/2, 1],  B^-1 b = [2, 6, 2],  c_B B^-1 b = 36.

A Fundamental Insight

Focus on the coefficients of the slack variables and the information they give: after any iteration, the coefficients of the slack variables in each equation immediately reveal how that equation has been obtained from the initial equations.

Let's recall the Wyndor example in tableau form.

Iteration 0:
Basic variable | x1 | x2 | x3 | x4 | x5 | Right side
Z  | -3 | -5 | 0 | 0 | 0 | 0
x3 |  1 |  0 | 1 | 0 | 0 | 4
x4 |  0 |  2 | 0 | 1 | 0 | 12
x5 |  3 |  2 | 0 | 0 | 1 | 18

Iteration 1:
Z  | -3 | 0 | 0 | 5/2 | 0 | 30
x3 |  1 | 0 | 1 | 0   | 0 | 4
x2 |  0 | 1 | 0 | 1/2 | 0 | 6
x5 |  3 | 0 | 0 | -1  | 1 | 6

Iteration 2:
Z  | 0 | 0 | 0 | 3/2  | 1    | 36
x3 | 0 | 0 | 1 | 1/3  | -1/3 | 2
x2 | 0 | 1 | 0 | 1/2  | 0    | 6
x1 | 1 | 0 | 0 | -1/3 | 1/3  | 2

For iteration 1:
New row 0 = old row 0 + (5/2)(old row 2)
New row 1 = old row 1 + (0)(old row 2)
New row 2 = (1/2)(old row 2)
New row 3 = old row 3 + (-1)(old row 2)

These algebraic operations amount to pre-multiplying rows 1 to 3 of the initial tableau by the matrix

[1   0   0]
[0  1/2  0]
[0  -1   1]

Notice that the coefficients of the slack variables in the new tableau do indeed provide a record of the algebraic operations performed:

New rows 1-3 = [1 0 0; 0 1/2 0; 0 -1 1] (initial rows 1-3)

For iteration 2:
New row 0 = old row 0 + (1)(old row 3)
New row 1 = old row 1 + (-1/3)(old row 3)
New row 2 = old row 2 + (0)(old row 3)
New row 3 = (1/3)(old row 3)

Thus, the multiplying matrix is

[1  0  -1/3]
[0  1   0  ]
[0  0   1/3]

Combining both iterations:

Final rows 1-3 = [1 0 -1/3; 0 1 0; 0 0 1/3] [1 0 0; 0 1/2 0; 0 -1 1] (initial rows 1-3)
               = [1 1/3 -1/3; 0 1/2 0; 0 -1/3 1/3] (initial rows 1-3)

Final row 1 = (1)(initial row 1) + (1/3)(initial row 2) + (-1/3)(initial row 3)
Final row 2 = (0)(initial row 1) + (1/2)(initial row 2) + (0)(initial row 3)
Final row 3 = (0)(initial row 1) + (-1/3)(initial row 2) + (1/3)(initial row 3)
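The per-iteration multiplier matrices compose: pre-multiplying by the iteration-1 matrix and then the iteration-2 matrix is the same as pre-multiplying once by their product, which is exactly B^-1 for the final basis. A sketch that checks this for the Wyndor example, including that B^-1 applied to the initial right-hand side yields the final right-hand side:

```python
# Verify that the product of the per-iteration multiplier matrices equals
# B^-1, and that B^-1 times the initial right-hand side gives the final
# right-hand side. Wyndor data assumed from the example.
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

M1 = [[F(1), F(0), F(0)],       # iteration 1: row 2 halved,
      [F(0), F(1, 2), F(0)],    # old row 2 subtracted from row 3
      [F(0), F(-1), F(1)]]
M2 = [[F(1), F(0), F(-1, 3)],   # iteration 2: row 3 divided by 3,
      [F(0), F(1), F(0)],       # (1/3) old row 3 subtracted from row 1
      [F(0), F(0), F(1, 3)]]

B_inv = matmul(M2, M1)
assert B_inv == [[F(1), F(1, 3), F(-1, 3)],
                 [F(0), F(1, 2), F(0)],
                 [F(0), F(-1, 3), F(1, 3)]]

b = [[F(4)], [F(12)], [F(18)]]
print(matmul(B_inv, b))         # final right-hand side B^-1 b
```

The printed column is (2, 6, 2), matching the right side of the final tableau.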

Thus, given the initial tableau and the coefficients of the slack variables in any iteration, we can derive the rest of the parameters of that iteration's tableau.

An example (a fill-in-the-missing-numbers exercise from H&L, Section 5.3): a maximization problem in three variables x1, x2, x3 with three functional constraints (slack variables x4, x5, x6) is given. From the full initial tableau and, in the final tableau, only the coefficients of the slack variables and the identity of the basic variables, fill in all the missing numbers of the final tableau.

Finally, recall that:

Iteration | Basic variable | Eq. | Z | Original variables | Slack variables | Right side
0 | Z | (0) | 1 | -c | 0 | 0
0 | x_B | (1, ..., m) | 0 | A | I | b
Any | Z | (0) | 1 | c_B B^-1 A - c | c_B B^-1 | c_B B^-1 b
Any | x_B | (1, ..., m) | 0 | B^-1 A | B^-1 | B^-1 b

This tableau also gives an interpretation of the shadow prices: the row-0 coefficients of the slack variables, y* = c_B B^-1, give the increase in the optimal value Z* per unit increase in the corresponding right-hand side b_i.
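The shadow-price reading can be checked directly: the row-0 coefficients of the slack variables are y* = c_B B^-1, and Z* = y* b. A sketch for the final Wyndor basis, ordered (x3, x2, x1) to match the final tableau rows:

```python
# Shadow prices y* = c_B B^-1 from the final Wyndor basis.
from fractions import Fraction as F

B_inv = [[F(1), F(1, 3), F(-1, 3)],   # coefficients of the slack
         [F(0), F(1, 2), F(0)],       # variables in the final tableau
         [F(0), F(-1, 3), F(1, 3)]]
c_B = [F(0), F(5), F(3)]              # costs of basic variables x3, x2, x1

# y* = c_B B^-1 (row vector times matrix).
y = [sum(c_B[i] * B_inv[i][j] for i in range(3)) for j in range(3)]
print(y)                              # the three shadow prices

# Each y*_i is the marginal value of one more unit of resource i, and
# the optimal value satisfies Z* = y* b = c_B B^-1 b.
b = [F(4), F(12), F(18)]
Z_star = sum(yi * bi for yi, bi in zip(y, b))
assert Z_star == 36
```

This yields y* = (0, 3/2, 1): one more unit of the second resource would raise Z* by 3/2, while the first resource (with positive slack x3 = 2 at the optimum) has shadow price 0.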