CHAPTER 3 REVISED SIMPLEX METHOD AND DATA STRUCTURES


3.1 INTRODUCTION

While solving a linear programming problem, a systematic search is made to find a non-negative vector X which extremizes a linear objective function

    Z = Σ_{j=1}^{n} c_j x_j                                        (3.1)

such that it satisfies a set of linear constraints of the form

    Σ_{j=1}^{n} a_ij x_j <= b_i,  i = 1, 2, ..., m;  x_j >= 0      (3.2)

The parameter c_j is the contribution/cost coefficient associated with unit output of the activity x_j. The b_i's are the limited available resources in the form of men, money, machines and materials. The problem is to find the vector X which satisfies the relationships (3.2) and extremizes the linear objective function (3.1).

To find a solution to the above problem, the revised simplex method can be used. The revised simplex method rests on the same basic principles as the regular simplex method, but at each iteration the entries of the entire tableau are not calculated: the relevant information for moving from one basic feasible solution to another is generated directly from the original set of equations.

3.2 REVISED SIMPLEX PROCEDURE

The revised simplex method saves both storage and computational time compared to the simplex method. Unlike the original simplex method, only the inverse of the current basis is maintained and is used to generate the next inverse. All other quantities are computed from their definitions as and when necessary. Even though X_B can be computed from its definition, it is more economical to transform it at each stage. When the reduced cost coefficients have to be determined, we simply compute z_j - c_j = C_B B^{-1} P_j - c_j. If all (z_j - c_j) >= 0, then the optimal solution has been obtained.

3.2.1 Detailed algorithm

Again taking the standard LP model

    Extremize  Z = C X
    subject to A X <= P_0,  X >= 0                                 (3.3)

where

    A (m x n) = | a_11  a_12  ...  a_1n |
                | a_21  a_22  ...  a_2n |
                |  ...   ...  ...   ... |
                | a_m1  a_m2  ...  a_mn |

    P_0 = (b_1, b_2, ..., b_m)^T (m x 1),
    X   = (x_1, x_2, ..., x_j, ..., x_n)^T (n x 1),
    C   = (c_1, c_2, ..., c_j, ..., c_n) (1 x n).

Let the columns corresponding to the matrix A be P_1, P_2, P_3, ..., P_n, where

    P_j = (a_1j, a_2j, ..., a_ij, ..., a_mj)^T.

Let the vector X be partitioned as

    X = | X_B |
        | X_N |

where X_B corresponds to the basic variables and X_N to the non-basic variables.
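In code, the model data of (3.3) can be assembled by appending slack columns so that A X <= P_0 becomes an equality system with an obvious starting basis. This is a minimal illustrative sketch; the function name and the small test instance are assumptions of this sketch, not taken from the thesis:

```python
import numpy as np

def to_standard_form(A, b, c):
    """Append slack variables so that A x <= b becomes [A | I] x' = b."""
    m, n = A.shape
    A_std = np.hstack([A, np.eye(m)])         # slack columns form an identity
    c_std = np.concatenate([c, np.zeros(m)])  # slack variables cost nothing
    return A_std, b, c_std

# A small two-constraint instance; columns P_3 and P_4 give the initial basis.
A = np.array([[1.0, 0.0], [1.0, 2.0]])
b = np.array([5.0, 10.0])
c = np.array([2.0, 3.0])
A_std, b_std, c_std = to_standard_form(A, b, c)
```

Starting from the all-slack basis, B is the identity, so B^{-1} is available immediately and no initial inversion is needed.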

The revised simplex procedure repeatedly solves a set of linear algebraic equations of the form

    B X_B = P_0                                                    (3.4)

and finds the value of the objective function

    Z = C_B X_B                                                    (3.5)

The set of equations (3.4) and (3.5) may be put in matrix notation as

    | 1  -C_B | | Z   |   | 0   |
    | 0    B  | | X_B | = | P_0 |                                  (3.6)

Let

    M = | 1  -C_B |
        | 0    B  |

Rewriting the equation (3.6),

    M | Z   |   | 0   |
      | X_B | = | P_0 |

The solution vector in terms of the matrix M^{-1} is

    | Z   |          | 0   |
    | X_B | = M^{-1} | P_0 |                                       (3.7)

M^{-1} exists if and only if the basis matrix B is nonsingular. Hence

    M^{-1} = | 1  C_B B^{-1} |
             | 0    B^{-1}   |                                     (3.8)

The solution vector is given by

    | Z   |   | 1  C_B B^{-1} | | 0   |   | C_B B^{-1} P_0 |
    | X_B | = | 0    B^{-1}   | | P_0 | = |    B^{-1} P_0  |       (3.9)

The solution is thus based mainly on identifying B^{-1} for the current iteration. The basis matrix B differs from the previous or succeeding basis matrix by only one column, and so does the M matrix. Let the matrices M_c and M_n correspond to the current and next iterations of the revised simplex procedure. Then M_n^{-1} may be obtained from M_c^{-1} using elementary linear algebra, and this eliminates the computation of a direct inversion.

The procedure for determining M_n^{-1} from M_c^{-1} is summarized below. Let the identity matrix I_{m+1} be represented as

    I_{m+1} = (e_1, e_2, ..., e_i, ..., e_{m+1})

where e_i is a unit vector with a unit element in the ith place and zeros elsewhere. Let x_j and x_r be the entering and leaving variables at the start of any iteration. The next inverse can then be computed using the relationship

    M_n^{-1} = E_0 M_c^{-1}                                        (3.10)

where the transformation matrix E_0 has m unit vectors and only one non-unit vector, standing in the column of the leaving variable and generated from the entering variable:

           | 1  ...  -α_0j / α_rj  ...  0 |
           |         ...                  |
    E_0 =  | 0  ...   1 / α_rj     ...  0 |   <- leaving (pivot) row
           |         ...                  |
           | 0  ...  -α_mj / α_rj  ...  1 |
                          ^
                 entering vector's column

where

    α_0j = z_j - c_j    and    (α_1j, α_2j, ..., α_mj)^T = B^{-1} P_j,  i = 1, 2, ..., m.

Each entry of the entering vector is multiplied by the multiplier -1/α_rj (the pivot entry itself becoming 1/α_rj) to generate the non-unit column η of the E_0 matrix

    η = | -α_0j / α_rj |
        | -α_1j / α_rj |
        |      ...     |
        |  +1 / α_rj   |
        |      ...     |
        | -α_mj / α_rj |

The E_0 matrix is now formed as

    E_0 = (e_1, e_2, e_3, ..., η, ..., e_{m+1})

where e_1, e_2, ..., e_{m+1} are all unit vectors and η alone is the non-unit vector. Thus M_n^{-1} is constructed using M_c^{-1} and the transformation matrix E_0.

The various steps involved in the revised simplex procedure are as follows:

Step 1: Determination of the entering vector P_j.

    (z_j - c_j) = C_B B^{-1} P_j - c_j = (1, C_B B^{-1}) | -c_j |
                                                         |  P_j |  (3.11)

The most promising vector, the one with the most negative (z_j - c_j), enters the basis. Otherwise, if all (z_j - c_j) >= 0, the optimal solution is attained.
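The product-form update (3.10) need not form E_0 explicitly: since E_0 differs from the identity only in the η column, multiplying by it merely rescales the pivot row of M_c^{-1} and adds multiples of that row to the others. A hedged sketch of this shortcut (function and variable names are illustrative, not from the thesis):

```python
import numpy as np

def eta_update(M_inv, alpha, r):
    """Apply M_n^{-1} = E_0 M_c^{-1} using only the eta vector.

    alpha is the updated entering column (alpha_0j, alpha_1j, ..., alpha_mj),
    with alpha_0j = z_j - c_j, and r is the index of the pivot (leaving) row.
    """
    eta = -alpha / alpha[r]                   # multipliers -alpha_ij / alpha_rj
    eta[r] = 1.0 / alpha[r]                   # pivot entry +1 / alpha_rj
    pivot_row = M_inv[r].copy()
    M_new = M_inv + np.outer(eta, pivot_row)  # add eta_i * (row r) to row i
    M_new[r] = eta[r] * pivot_row             # row r is simply rescaled
    return M_new
```

Compared with assembling E_0 and performing a full (m+1) x (m+1) multiplication, this touches each row of M_c^{-1} exactly once.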

Step 2: Determination of the leaving vector P_r. When the entering vector is P_j and the current basis matrix is B_c, the leaving vector must correspond to

    θ = min_k { (B_c^{-1} P_0)_k / α_kj : α_kj > 0 },  k = 1, 2, ..., m

where (α_1j, ..., α_mj)^T = B_c^{-1} P_j. If all α_kj <= 0, then the problem has no bounded solution.

Step 3: Determination of the next basic solution. The E_0 matrix is constructed as explained above, M_n^{-1} is computed using (3.10), and the solution vector is given by (3.7). Thus B_n^{-1} is expressed as a function of B_c^{-1}, and processing returns to Step 1. This procedure is repeated until the optimal solution is reached.

3.3 DATA STRUCTURE IN REVISED SIMPLEX

Large-scale real-life linear programming problems are usually of high sparsity. If the constraint coefficient matrix is represented by a sequential mapping, a lot of additional memory is consumed for storing zero entries, and checks must be made for zero/nonzero entries before every multiplication or division operation, which consumes additional machine time. Hence the revised simplex algorithm described earlier exploits the concept

of data structures, using linked lists to represent the matrices which are involved in this algorithm. The different types of representation of matrices, depending on their usage, are described in Chapter 2. The revised simplex algorithm employs the following matrices:

i.   the constraint coefficient matrix A,
ii.  the M inverse matrix, and
iii. the transformation matrix E_0.

Depending on their usage, the matrices are represented in different manners.

3.3.1 Representation of the A Matrix

This matrix is used for the following purposes:

i.  determination of z_j - c_j;
ii. to find B^{-1} P_j, where P_j is the entering column vector.

The constraint coefficient matrix is split up into n column vectors P_1, P_2, ..., P_n. The vectors corresponding to the promising variables are used for post-multiplication. Since the A matrix is always used for post-multiplication, it is represented using a columnwise circularly linked list, as explained in Chapter 2.

3.3.2 Representation of the M^{-1} Matrix

The M^{-1} matrix is used for the following purposes:

i.   determination of z_j - c_j using the first row of the M^{-1} matrix;

ii.  to find B^{-1} P_j and the α_ij's; and
iii. to update the new M^{-1} matrix using the relationship M_n^{-1} = E_0 M_c^{-1}.

In cases (i) and (ii) this matrix is used for premultiplication, and in case (iii) it is used for postmultiplication. Hence the M^{-1} matrix is represented using a multiply linked list.

3.3.3 Representation of the Transformation Matrix E_0

Since the transformation matrix contains all unit vectors except the η vector, it is not represented physically. Only the non-unit vector η is represented, using an array.

3.4 NUMERICAL EXAMPLE

    Max Z = 2x_1 + 3x_2

    subject to  x_1 <= 5
                x_1 + 2x_2 <= 10
                x_1, x_2 >= 0
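Before the example is traced by hand, the three steps of Section 3.2.1 (pricing with the first row of M^{-1}, the minimum ratio test, and the product-form update through η) can be sketched end to end. This is a dense, illustrative sketch rather than the thesis implementation: it ignores sparsity, has no degeneracy safeguards, and all names are assumptions of this sketch.

```python
import numpy as np

def revised_simplex(A, b, c, basis):
    """Maximize c x subject to A x = b, x >= 0, from a given starting basis.

    Dense illustrative sketch of Steps 1-3; no cycling or degeneracy guards.
    """
    m, n = A.shape
    M_inv = np.eye(m + 1)                  # M^{-1} = [[1, C_B B^-1], [0, B^-1]]
    M_inv[1:, 1:] = np.linalg.inv(A[:, basis])
    M_inv[0, 1:] = c[basis] @ M_inv[1:, 1:]
    while True:
        # Step 1: price every column with the first row of M^{-1}, Eq. (3.11)
        reduced = np.array([M_inv[0] @ np.concatenate(([-c[j]], A[:, j]))
                            for j in range(n)])
        j = int(np.argmin(reduced))
        if reduced[j] >= -1e-9:            # all z_j - c_j >= 0: optimal
            x_B = M_inv[1:, 1:] @ b
            return basis, x_B, float(M_inv[0, 1:] @ b)
        # Step 2: minimum ratio test on alpha = B^{-1} P_j
        alpha = M_inv[1:, 1:] @ A[:, j]
        x_B = M_inv[1:, 1:] @ b
        ratios = [x / a if a > 1e-9 else np.inf for x, a in zip(x_B, alpha)]
        r = int(np.argmin(ratios))         # row of the leaving variable
        if ratios[r] == np.inf:
            raise ValueError("problem is unbounded")
        # Step 3: product-form update via the eta column of E_0, Eq. (3.10)
        full = np.concatenate(([reduced[j]], alpha))
        eta = -full / full[r + 1]
        eta[r + 1] = 1.0 / full[r + 1]
        pivot_row = M_inv[r + 1].copy()
        M_inv += np.outer(eta, pivot_row)
        M_inv[r + 1] = eta[r + 1] * pivot_row
        basis[r] = j
```

Run on a small instance of the same shape as the example above (with the slack columns P_3, P_4 as the starting basis), the sketch pivots x_2 in first and then x_1, mirroring the iterations traced below.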

With slack variables s_1 and s_2, the constraint coefficient matrix A is given by

         P_1  P_2  P_3  P_4
    A = |  1    0    1    0 |
        |  1    2    0    1 |

The columnwise representation of the A matrix is shown in Figure 3.1. The M^{-1} matrix is given by

    M^{-1} = | 1  0  0 |
             | 0  1  0 |
             | 0  0  1 |

The multiply linked list representation of the M^{-1} matrix is shown in Figure 3.2. The M^{-1} matrix has m + 1 rowwise circular lists and m columnwise circular lists. For the first column of the M^{-1} matrix, a circular list representation is not necessary, since all the elements in that column are always zero except the first entry (which always has the value 1). To start with, the first row has no nonzero entries, since initially C_B B^{-1} = 0. Hence the right pointer of the first-row head node points to itself.

HN: Head Node;  RN: Row Number;  VAL: Value of the element;  DLINK: Pointer

Figure 3.1 Columnwise circularly linked list
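A columnwise circularly linked list of the kind shown in Figure 3.1 can be sketched as follows: each node carries a row number (RN), a value (VAL) and a down pointer (DLINK), and the head node closes the circle. The class and function names are illustrative, not taken from the thesis:

```python
class ColNode:
    """One nonzero of a column: row number (RN), value (VAL), down link (DLINK)."""
    def __init__(self, rn, val):
        self.rn, self.val, self.dlink = rn, val, None

def build_columns(A):
    """Store each column of a dense 2-D list as a circularly linked list of
    its nonzeros only, headed by a head node (rn = -1)."""
    cols = []
    for j in range(len(A[0])):
        head = ColNode(-1, None)
        prev = head
        for i in range(len(A)):
            if A[i][j] != 0:
                prev.dlink = ColNode(i, A[i][j])
                prev = prev.dlink
        prev.dlink = head                 # close the circle back to the head
        cols.append(head)
    return cols

def column_dot(head, x):
    """Post-multiplication step: dot product of a stored column with x,
    visiting nonzeros only (no zero/nonzero tests on stored entries)."""
    s, node = 0.0, head.dlink
    while node is not head:
        s += node.val * x[node.rn]
        node = node.dlink
    return s
```

Because only nonzeros are stored, a column's contribution to B^{-1} P_j costs one multiply per nonzero, which is exactly the saving the chapter argues for on sparse problems.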

HN: Head Node;  RN: Row Number;  CN: Column Number;  VAL: Value of the element;  RLINK/DLINK: Pointers

Figure 3.2 Multiply linked list
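The multiply linked list of Figure 3.2 threads every nonzero into both a rowwise and a columnwise circular list, so that M^{-1} can be walked by rows for premultiplication and by columns for postmultiplication. A minimal illustrative sketch (names are assumptions of this sketch, not from the thesis):

```python
class MNode:
    """A nonzero of M^{-1}: row (RN), column (CN), value (VAL), RLINK, DLINK."""
    def __init__(self, rn, cn, val):
        self.rn, self.cn, self.val = rn, cn, val
        self.rlink = self.dlink = None

class MultiplyLinkedList:
    """Each row and each column is a circular list through its head node."""
    def __init__(self, dense):
        m, n = len(dense), len(dense[0])
        self.rheads = [MNode(i, -1, None) for i in range(m)]
        self.cheads = [MNode(-1, j, None) for j in range(n)]
        rtail, ctail = list(self.rheads), list(self.cheads)
        for i in range(m):
            for j in range(n):
                if dense[i][j] != 0:
                    node = MNode(i, j, dense[i][j])
                    rtail[i].rlink = node   # append to row i's list
                    rtail[i] = node
                    ctail[j].dlink = node   # append to column j's list
                    ctail[j] = node
        for i in range(m):
            rtail[i].rlink = self.rheads[i]   # close the row circles
        for j in range(n):
            ctail[j].dlink = self.cheads[j]   # close the column circles

    def row(self, i):
        """Nonzeros of row i as (column, value) pairs."""
        out, node = [], self.rheads[i].rlink
        while node is not self.rheads[i]:
            out.append((node.cn, node.val))
            node = node.rlink
        return out
```

A row with no nonzeros has its head node's right pointer looping back to itself, which is exactly the situation described above for the first row while C_B B^{-1} = 0. (The chapter's scheme additionally omits the first column, which never changes; the sketch stores every nonzero for simplicity.)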

Iteration 1:

Step 1:

    (z_j - c_j) = (-2, -3)  for the non-basic variables x_1 and x_2

z_2 - c_2 is the most negative and hence x_2 is the most promising variable.

    B^{-1} P_2 = | 0 |        B^{-1} P_0 = |  5 |
                 | 2 |                     | 10 |

By the minimum ratio test, s_2 is the leaving variable. The column (α_02, α_12, α_22)^T is transformed into the η vector:

    old | -3 |        new | 3/2 |
        |  0 |  -->       |  0  |
        |  2 |            | 1/2 |

The M^{-1} is updated using the relationship

    M_next^{-1} = E_0 M_current^{-1}                               (3.12)

In order to reduce the number of computations in the above matrix multiplication, the following procedure is adopted: instead of forming the E_0 matrix and performing the full matrix multiplication, the η vector is used as described below. While entering the η vector in a particular column, it is to be verified whether any other column has already …

HN: Head Node;  RN: Row Number;  CN: Column Number;  VAL: Value of the element;  RLINK/DLINK: Pointers

Figure 3.3 Multiply linked list

Iteration 2:

Step 1: z_1 - c_1 = -1/2 is the only negative value, i.e., x_1 is the promising variable to enter the basis.

    B^{-1} P_1 = |  1  |        B^{-1} P_0 = | 5 |
                 | 1/2 |                     | 5 |

Using the minimum ratio test, s_1 is the leaving variable. The new M^{-1} matrix, updated using the relationship (3.12), is

    M^{-1} = | 1   1/2   3/2 |
             | 0    1     0  |
             | 0  -1/2   1/2 |

and its multiply linked list representation is shown in Figure 3.4.

HN: Head Node;  RN: Row Number;  CN: Column Number;  VAL: Value of the element;  RLINK/DLINK: Pointers

Figure 3.4 Multiply linked list

Iteration 3:

Step 1: (z_j - c_j) >= 0 for all the non-basic variables. There are no more promising variables, and therefore the optimal solution has been obtained. The solution is given by

    | Z   |          | 0   |   | 35/2 |
    | X_B | = M^{-1} | P_0 | = |  5   |
                               | 5/2  |

i.e., x_1 = 5, x_2 = 5/2 and Z = 35/2.

3.5 COMPARISON OF REVISED SIMPLEX AND REVISED SIMPLEX WITH DATA STRUCTURE

Algorithms, in general, may have different computational efficiencies. Comparisons based on the time taken for a few typical problems are furnished in Table 3.1. One of the drawbacks of the ordinary revised simplex procedure is that it occupies a lot of memory to store zero entries in the constraint coefficient matrix and the M^{-1} matrix. Again, every time a multiply/divide operation is performed, all entries are accessed and each has to be checked for being zero or not.

TABLE 3.1 COMPARISON OF REVISED SIMPLEX WITH REVISED SIMPLEX USING DATA STRUCTURE CONCEPT

(Problem size, solution time and percentage saving for each test problem.)

From this comparison the following inferences are made:

i. If the sparsity of the constraint matrix is high, then the computational savings of the revised simplex using the data structure concept grow with the size of the problem, for the following reasons:

   a. The memory requirement for storing the constraint coefficient matrix and the inverse matrix is considerably smaller.
   b. Unnecessary checking of zero/nonzero entries at every multiply/divide operation is avoided.

ii. If the sparsity of the constraint matrix is low, then the savings for all the different types of problems are marginal, and are due mainly to the nature of the M^{-1} matrix: to start with, the M^{-1} matrix has only (m + 1) nonzero entries instead of (m + 1) x (m + 1) entries, and each time a promising variable enters, the M^{-1} matrix is filled with further nonzero entries.

Graphs connecting the number of resource constraints and the time required to solve the linear programming problems listed in Table 3.1, by the revised simplex and by the revised simplex with the data structure concept, are shown in Figure 3.5. It may be observed from the graphs, drawn using the computer results, that the slope of the curve for the revised simplex algorithm is steeper than that of the revised simplex with data structure over the whole range of constraints. This shows that, for a given problem, the computing time required per constraint by the revised simplex algorithm with data structure is less than that of the plain revised simplex algorithm.

Figure 3.5 Comparison of Revised Simplex with and without Data Structure (time in seconds versus number of constraints; R.S: revised simplex, R.S.D.S: revised simplex with data structure)
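Inference (i.a) above can be illustrated by counting stored entries: a sequential (dense) mapping stores all m x n coefficients, while the linked scheme stores one node per nonzero plus one head node per column. The matrix size and density below are illustrative stand-ins for the Table 3.1 problems, not the thesis data:

```python
import random

def dense_entries(m, n):
    """Sequential mapping: every entry is stored, zeros included."""
    return m * n

def linked_entries(A):
    """Columnwise linked scheme: one node per nonzero, one head per column."""
    return sum(1 for row in A for v in row if v != 0) + len(A[0])

# A random 100 x 200 constraint matrix at roughly 5% density.
random.seed(1)
A = [[1 if random.random() < 0.05 else 0 for _ in range(200)]
     for _ in range(100)]
saving = 1 - linked_entries(A) / dense_entries(100, 200)
```

At 5% density the linked representation stores only a few percent of the dense entry count; as the density rises toward 100%, the node overhead makes the saving marginal, matching inference (ii).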

3.6 CONCLUSION

In this chapter, different types of data structure concepts were employed for representing the matrices used by the revised simplex algorithm, and the revised simplex algorithm was compared with the revised simplex algorithm using data structure concepts. It has been observed that the revised simplex using the data structure concept is consistently better than the original revised simplex method for all the types of problems tested. In the next chapter, Multiplex and Dual Multiplex algorithms using data structure concepts are discussed.