Part 1. The Review of Linear Programming: The Revised Simplex Method


In the name of God. Part 1. The Review of Linear Programming, 1.4: The Revised Simplex Method. Spring 2010. Instructor: Dr. Masoud Yaghini

Outline: Introduction; The Revised Simplex Method in Tableau Format; Comparison Between the Simplex and the Revised Simplex Methods; Product Form of the Inverse; References

Introduction

The revised simplex method is a systematic procedure for implementing the steps of the simplex method in a smaller array, thus saving storage space. Let us begin by reviewing the steps of the simplex method for a minimization problem. Suppose that we are given a basic feasible solution with basis B (and basis inverse B^{-1}). Then:

Steps of the Simplex Method (Minimization Problem)
1. The basic feasible solution is given by x_B = B^{-1} b = b̄ and x_N = 0, with objective value z = c_B B^{-1} b = c_B b̄.
2. Calculate the simplex multipliers w = c_B B^{-1}. For each nonbasic variable, calculate z_j - c_j = c_B B^{-1} a_j - c_j = w a_j - c_j. Let z_k - c_k be the maximum of z_j - c_j over the nonbasic variables. If z_k - c_k ≤ 0, then stop; the current solution is optimal. Otherwise go to step 3.

Steps of the Simplex Method (Minimization Problem)
3. Calculate y_k = B^{-1} a_k. If y_k ≤ 0, then stop; the optimal solution is unbounded. Otherwise determine the index r of the variable x_{Br} leaving the basis by the minimum ratio test: b̄_r / y_{rk} = minimum { b̄_i / y_{ik} : y_{ik} > 0 }. Update the basis B by replacing a_{Br} with a_k, and go to step 1.
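As a concrete illustration of these three steps, here is a minimal NumPy sketch of a single revised-simplex iteration. The problem data (A, b, c) and the starting slack basis are hypothetical, chosen only so the code runs end to end; a real implementation would maintain B^{-1} rather than re-invert it.

```python
import numpy as np

# Hypothetical data: minimize c x subject to A x = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-3.0, -2.0, 0.0, 0.0])

basis = [2, 3]                      # start from the slack basis
B_inv = np.linalg.inv(A[:, basis])  # in practice B_inv is maintained, not re-inverted

# Step 1: basic feasible solution and objective value
b_bar = B_inv @ b
z = c[basis] @ b_bar

# Step 2: simplex multipliers and reduced costs z_j - c_j for nonbasic j
w = c[basis] @ B_inv
nonbasic = [j for j in range(A.shape[1]) if j not in basis]
zc = {j: w @ A[:, j] - c[j] for j in nonbasic}
k = max(zc, key=zc.get)
if zc[k] <= 0:
    print("optimal, objective =", z)
else:
    # Step 3: updated column and minimum ratio test
    y_k = B_inv @ A[:, k]
    if np.all(y_k <= 0):
        print("unbounded")
    else:
        ratios = [b_bar[i] / y_k[i] if y_k[i] > 0 else np.inf for i in range(len(basis))]
        r = int(np.argmin(ratios))
        print(f"x_{k} enters, x_{basis[r]} leaves (pivot row {r})")
```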

The simplex method can be executed using a smaller array. Suppose that we have a basic feasible solution with a known B^{-1}. The revised simplex tableau consists of the row (w, c_B b̄) on top of the array (B^{-1}, b̄), where w = c_B B^{-1} and b̄ = B^{-1} b.

In step 1 of the simplex method: the right-hand-side column gives the values of the objective function and the basic variables. In step 2 of the simplex method: in order to determine whether to stop or to introduce a new variable into the basis, we need z_j - c_j = c_B B^{-1} a_j - c_j = w a_j - c_j. Since w is known, z_j - c_j can be calculated. In step 3 of the simplex method: suppose that z_k - c_k > 0; then using B^{-1} we may compute y_k = B^{-1} a_k. If y_k ≤ 0, then stop; the optimal solution is unbounded.

Otherwise the updated column y_k of x_k is inserted to the right of the above tableau. The index r of step 3 can now be calculated by the usual minimum ratio test.

The Revised Simplex Method in Tableau Format

Initialization Step (Minimization Problem). Find an initial basic feasible solution with basis inverse B^{-1}. Calculate w = c_B B^{-1} and b̄ = B^{-1} b, and form the revised simplex tableau.

Main Step. For each nonbasic variable, calculate z_j - c_j = w a_j - c_j. Let z_k - c_k be the maximum of z_j - c_j. If z_k - c_k ≤ 0, stop; the current basic feasible solution is optimal. Otherwise, calculate y_k = B^{-1} a_k. If y_k ≤ 0, stop; the optimal solution is unbounded. Otherwise, insert the column (z_k - c_k, y_k) to the right of the revised simplex tableau.

Determine the index r by the usual minimum ratio test: b̄_r / y_{rk} = minimum { b̄_i / y_{ik} : y_{ik} > 0 }. Pivot at y_{rk}; this updates the tableau. The column corresponding to x_k is then completely eliminated from the tableau and the main step is repeated.
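The pivot at y_{rk} amounts to elementary row operations on the array [w, c_B b̄; B^{-1}, b̄] with the entering column appended on the right. A small sketch of that update, with purely illustrative numbers:

```python
import numpy as np

def pivot_revised_tableau(T, r):
    """Pivot the revised simplex tableau T on row r+1 of its last column.

    T has m+2 columns: the first m+1 hold [w, c_B b_bar] in row 0 and
    [B_inv, b_bar] in rows 1..m; the last column holds [z_k - c_k, y_k].
    After the pivot the entering column is a unit vector and is dropped.
    """
    T = T.astype(float)
    pivot = T[r + 1, -1]          # y_rk
    T[r + 1, :] /= pivot
    for i in range(T.shape[0]):
        if i != r + 1:
            T[i, :] -= T[i, -1] * T[r + 1, :]
    return T[:, :-1]              # discard the (now unit) entering column

# Illustrative tableau with m = 2: identity basis inverse, b_bar = (4, 6),
# appended column for an entering variable with z_k - c_k = 3, y_k = (1, 2).
T = np.array([[0.0, 0.0, 0.0, 3.0],
              [1.0, 0.0, 4.0, 1.0],
              [0.0, 1.0, 6.0, 2.0]])
print(pivot_revised_tableau(T, r=1))
```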

Example: Introduce the slack variables x_7, x_8, x_9. The initial basis is B = [a_7, a_8, a_9] = I_3. Also, w = c_B B^{-1} = (0, 0, 0) and b̄ = b.

Iteration 1. Here w = (0, 0, 0). Noting that z_j - c_j = w a_j - c_j, we compute the reduced costs and find k = 5, so x_5 enters the basis.

Insert the vector to the right of the tableau and pivot at y_{35} = 2.

Iteration 2. Here w = (0, 0, -2). Noting that z_j - c_j = w a_j - c_j, we compute the reduced costs and find k = 2, so x_2 enters the basis. Insert the vector to the right of the above tableau and pivot at y_{12}.

Iteration 3. Here w = (-2, 0, -1). Noting that z_j - c_j = w a_j - c_j, we find that z_j - c_j ≤ 0 for all nonbasic variables (x_7 just left the basis, and so z_7 - c_7 < 0); we stop, and the basic feasible solution of the foregoing tableau is optimal.

Comparison Between the Simplex and the Revised Simplex Methods

Simplex Method vs. Revised Simplex Method. The array needed is of size (m + 1) × (n + 1) for the simplex method and (m + 1) × (m + 2) for the revised simplex method. If n is significantly larger than m, this results in a substantial saving in computer core storage.
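For example, with hypothetical dimensions m = 100 and n = 10,000 the two array sizes compare as follows:

```python
m, n = 100, 10_000                   # hypothetical problem dimensions
simplex_cells = (m + 1) * (n + 1)    # full simplex tableau
revised_cells = (m + 1) * (m + 2)    # revised simplex array
print(simplex_cells, revised_cells)  # 1010101 vs 10302, roughly a 98x saving
```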

Simplex Method vs. Revised Simplex Method. The number of multiplications (a division is counted as a multiplication) and additions (a subtraction is counted as an addition) per iteration of both procedures is given in the table below:

Simplex Method vs. Revised Simplex Method. From the table we see that the number of operations required during an iteration of the simplex method is slightly less than the number required for the revised simplex method. Note, however, that for most practical problems the density d of the constraint matrix (the number of nonzero elements divided by the total number of elements) is usually small (in many cases d ≤ 0.05). The revised simplex method can take advantage of this situation while calculating z_j - c_j. Note that z_j = w a_j, and we can skip the zero elements of a_j while performing the calculation w a_j = Σ_{i=1}^{m} w_i a_{ij}.
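A minimal sketch of this idea, assuming each column a_j is stored sparsely as a list of (row index, value) pairs so that only its nonzero entries are touched when forming w a_j - c_j; the data below are made up for illustration:

```python
# Sparse pricing: compute z_j - c_j = w a_j - c_j touching only nonzeros of a_j.
# Columns are stored as lists of (row_index, value) pairs (hypothetical data).
w = [0.0, -1.0, -2.0]
columns = {
    1: [(0, 1.0), (2, 3.0)],   # a_1 has nonzeros in rows 0 and 2 only
    4: [(1, 2.0)],             # a_4 has a single nonzero in row 1
}
costs = {1: 5.0, 4: -1.0}

def reduced_cost(j):
    # w a_j - c_j, skipping the zero entries of a_j
    return sum(w[i] * v for i, v in columns[j]) - costs[j]

for j in columns:
    print(j, reduced_cost(j))
```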

Simplex Method vs. Revised Simplex Method. Therefore the number of operations in the revised simplex method for calculating the z_j - c_j values is roughly d times the corresponding entries of the table, substantially reducing the total number of operations. While pivoting, for both the simplex and the revised simplex methods, no operations are skipped, because the current tableaux usually fill quickly with nonzero entries even if the original constraint matrix was sparse.

Simplex Method vs. Revised Simplex Method. To summarize, if n is significantly larger than m, and if the density d is small, the computational effort of the revised simplex method is significantly smaller than that of the simplex method. Also, in the revised simplex method, the use of the original data for calculating the z_j - c_j values and the updated column y_k tends to reduce the cumulative round-off error.

Product Form of the Inverse

Product Form of the Inverse. In another implementation of the revised simplex method, the inverse of the basis is stored as the product of elementary matrices; an elementary matrix is a square matrix that differs from the identity in only one row or one column. This method provides greater numerical stability by reducing accumulated round-off errors.

Product Form of the Inverse. Consider a basis B composed of the columns a_{B1}, a_{B2}, ..., a_{Bm}, and suppose that B^{-1} is known. Now suppose that the nonbasic column a_k replaces a_{Br}, resulting in the new basis B̄. We wish to find B̄^{-1} in terms of B^{-1}. Noting that a_k = B y_k (since y_k = B^{-1} a_k) and a_{Bi} = B e_i, where e_i is a vector of zeros except for a 1 at the i-th position, we have B̄ = B [e_1, ..., e_{r-1}, y_k, e_{r+1}, ..., e_m] = B T, where T is the identity with the r-th column replaced by y_k.

Product Form of the Inverse. Let E = T^{-1}; since B̄ = B T, it follows that B̄^{-1} = T^{-1} B^{-1} = E B^{-1}, where the elementary matrix E is the identity with its r-th column replaced by the vector g with g_r = 1 / y_{rk} and g_i = -y_{ik} / y_{rk} for i ≠ r. To summarize, the basis inverse at a new iteration can be obtained by premultiplying the basis inverse at the previous iteration by an elementary matrix E. Only the nonidentity column g, known as the eta vector, and its position r need to be stored to specify E.
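A small sketch that builds the eta vector g from an updated column y_k and pivot position r, and checks that the resulting E is indeed T^{-1}; the numbers are illustrative only:

```python
import numpy as np

def eta_vector(y_k, r):
    """Nonidentity column g of E = T^{-1}: g_r = 1/y_rk, g_i = -y_ik/y_rk otherwise."""
    g = -np.asarray(y_k, dtype=float) / y_k[r]
    g[r] = 1.0 / y_k[r]
    return g

y_k, r = np.array([1.0, 2.0, 4.0]), 2     # illustrative updated column and pivot row
m = len(y_k)

T = np.eye(m); T[:, r] = y_k              # identity with r-th column replaced by y_k
E = np.eye(m); E[:, r] = eta_vector(y_k, r)

print(np.allclose(E @ T, np.eye(m)))      # True: E is indeed T^{-1}
```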

Product Form of the Inverse. Let the basis B_1 at the first iteration be the identity I. Then the basis inverse at iteration 2 is B_2^{-1} = E_1 B_1^{-1} = E_1 I = E_1, where E_1 is the elementary matrix corresponding to the first iteration. Similarly B_3^{-1} = E_2 B_2^{-1} = E_2 E_1, and in general B_t^{-1} = E_{t-1} E_{t-2} ... E_2 E_1. This equation, which specifies the basis inverse as the product of elementary matrices, is called the product form of the inverse. Using this form, all the steps of the simplex method can be performed without pivoting. First, it is helpful to elaborate on multiplying a vector by an elementary matrix.

Product Form of the Inverse: Postmultiplying. Let E be an elementary matrix with nonidentity column g appearing at the r-th position, and let c be a row vector. Then cE is equal to c except that the r-th component is replaced by cg.

Product Form of the Inverse: Premultiplying. Let a be an m-vector. Then Ea = a + a_r (g - e_r); that is, component i of Ea equals a_i + a_r g_i for i ≠ r, and the r-th component equals a_r g_r.
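Both rules can be applied while storing E only as the pair (r, g) rather than as a full matrix. A sketch with hypothetical values of g, c, and a:

```python
import numpy as np

def postmultiply(c, g, r):
    """Row vector times E: result equals c except component r becomes c . g."""
    out = c.copy()
    out[r] = c @ g
    return out

def premultiply(g, r, a):
    """E times column vector: result is a + a_r * (g - e_r)."""
    out = a + a[r] * g
    out[r] = a[r] * g[r]
    return out

g, r = np.array([-0.25, -0.5, 0.25]), 2    # eta vector and its position (illustrative)
c = np.array([1.0, 2.0, 3.0])
a = np.array([4.0, 0.0, 8.0])

print(postmultiply(c, g, r))   # [ 1.   2.  -0.5]
print(premultiply(g, r, a))    # [ 2.  -4.   2. ]
```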

Product Form of the Inverse: Computing the vector w = c_B B^{-1}. At iteration t we wish to calculate the vector w. Note that w = c_B B_t^{-1} = ((c_B E_{t-1}) E_{t-2}) ... E_1, so w is obtained by postmultiplying c_B successively by the elementary matrices, using the rule above. After w is computed, we can calculate z_j - c_j = w a_j - c_j for the nonbasic variables.

Product Form of the Inverse: Computing the updated column y_k. If x_k is to enter the basis at iteration t, then y_k is calculated as y_k = B_t^{-1} a_k = E_{t-1} (E_{t-2} ( ... (E_1 a_k))), that is, by premultiplying a_k successively by the elementary matrices. If y_k ≤ 0, we stop with the conclusion that the optimal solution is unbounded. Otherwise the usual minimum ratio test determines the index r of the variable x_{Br} leaving the basis. A new elementary matrix E_t is generated whose nonidentity column g, appearing at position r, is given by g_r = 1 / y_{rk} and g_i = -y_{ik} / y_{rk} for i ≠ r.
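A sketch of how both w (from the previous slide) and y_k can be computed directly from the stored eta file, that is, the sequence of (r, g) pairs; the eta pairs, c_B, and a_k below are hypothetical:

```python
import numpy as np

# Eta file: the basis inverse is B_t^{-1} = E_{t-1} ... E_1, each E stored as (r, g).
# The pairs below are purely illustrative.
eta_file = [(2, np.array([-0.25, -0.5, 0.25])),   # E_1
            (0, np.array([0.5, -1.0, 0.0]))]      # E_2

def compute_w(c_B, eta_file):
    """w = c_B E_{t-1} ... E_1: postmultiply by the etas from newest to oldest."""
    w = c_B.astype(float)
    for r, g in reversed(eta_file):
        w[r] = w @ g
    return w

def compute_y(a_k, eta_file):
    """y_k = E_{t-1} ... E_1 a_k: premultiply by the etas from oldest to newest."""
    y = a_k.astype(float)
    for r, g in eta_file:
        y_r = y[r]
        y += y_r * g
        y[r] = y_r * g[r]
    return y

c_B = np.array([0.0, -1.0, -3.0])
a_k = np.array([1.0, 2.0, 0.0])
print(compute_w(c_B, eta_file))
print(compute_y(a_k, eta_file))
```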

Product Form of the Inverse: Computing the right-hand side b̄. The new right-hand side is given by b̄_new = B_{t+1}^{-1} b = E_t (B_t^{-1} b), noting that B_t^{-1} b = b̄ is already known from the last iteration.

Product Form of the Inverse: Updating the basis inverse. The basis inverse is updated by generating E_t as discussed above. It is worth noting that the number of elementary matrices required to represent the basis inverse increases by one at each iteration. If this number becomes large, it becomes necessary to reinvert the basis and represent it again as the product of m elementary matrices. Each elementary matrix is completely described by its nonidentity column and its position; therefore an elementary matrix E can be stored simply as its nonidentity column g together with its position r.

Product Form of the Inverse: Example. Introduce the slack variables x_4, x_5, and x_6. The original basis consists of x_4, x_5, and x_6.

Product Form of the Inverse: Iteration 1. Note that z_j - c_j = w a_j - c_j. Computing these values, we find k = 2, and x_2 enters the basis.

Product Form of the Inverse. Here x_{Br} leaves the basis, where r is determined by the minimum ratio test. Therefore r = 2; that is, x_{B2} = x_5 leaves the basis and x_2 enters. The nonidentity column of E_1 is computed from y_2 and the pivot row r, and E_1 is represented by this column together with its position r = 2.

Product Form of the Inverse: Iteration 2. Update b̄. Then w = (0, -1, 0). Note that z_j - c_j = w a_j - c_j; computing these values, we find k = 1, and x_1 enters the basis.

Product Form of the Inverse. Noting that y_1 = E_1 a_1, x_{Br} leaves the basis, where r is determined by the minimum ratio test. Therefore r = 1; that is, x_{B1} = x_4 leaves and x_1 enters the basis. The nonidentity column of E_2 follows, and E_2 is represented by this column together with its position r = 1.

Product Form of the Inverse: Iteration 3. Update b̄.

Product Form of the Inverse: Calculate w. Note that z_j - c_j = w a_j - c_j. Since z_j - c_j ≤ 0 for all nonbasic variables, the optimal solution is at hand. The objective value is -22/3, and the optimal values of the basic variables are given by the final b̄.

References

References: M. S. Bazaraa, J. J. Jarvis, and H. D. Sherali, Linear Programming and Network Flows, Wiley, 1990 (Chapter 5).

The End