A New Product Form of the Inverse

Applied Mathematical Sciences, Vol. 6, 2012, no. 93, 4641-4650

Souleymane Sarr
Department of Mathematics and Computer Science
University Cheikh Anta Diop, Dakar, Senegal
ssarr@voila.fr

Youssou Gningue
Department of Mathematics and Computer Science
Laurentian University, Ontario, Canada

Abstract

Many algorithms for solving linear programs are based on the revised simplex method. In the revised simplex method, the product form of the inverse is used to invert the basis. In this paper we present a matrix inversion whose complexity is quadratic; this method is more efficient than the product form of the inverse. We tested the revised simplex method and the proposed algorithm on about 55 linear problems. The LPs are randomly generated, with densities of 10%, 5% and 2.5%.

Keywords: revised simplex method, product form of the inverse, linear programming, simplex method

1 Introduction

The revised simplex method follows the same steps as the tableau simplex method. In the tableau simplex, the next iteration is obtained by Gauss-Jordan row operations; the revised simplex method instead works with the basis and its inverse, and the inverse is maintained by the product form of the inverse. Inverting the basis is the costliest of the steps of the revised simplex method [7]. The method we present here is derived from the product form of the inverse; its complexity is quadratic. In fact, the new basis is inverted directly every 20 iterations. Many variants of the revised simplex method exist; among them we can cite the Bartels-Golub method, the sparse Bartels-Golub method, Reid's method, and the Forrest-Tomlin method.

This paper is divided into five sections. We present the revised simplex method in the second section. In the third section, the new product form is presented. The numerical results are reported in the fourth section, and we conclude in the fifth.

2 Algorithm of the revised simplex method

The algorithm solves the following linear problem:

    Max Z = Σ_j c_j x_j
    subject to Σ_j a_ij x_j ≤ b_i,  i = 1, 2, ..., m,  j = 1, 2, ..., n
    x_j ≥ 0

where c_1, c_2, ..., c_n are the cost coefficients, x_1, x_2, ..., x_n are the decision variables, and a_ij (i = 1, ..., m; j = 1, ..., n) are the technological coefficients. The computations in the revised simplex method are based on the original data (C, A, b) and on the inverse of the basis. At each iteration the inverse of the basis is computed by the product form of the inverse.

2.1 Product form of the inverse

Let P_r be the entering column and P_s the leaving column in the current simplex iteration. Suppose B^{-1} is the inverse of the current basis. To compute B_next^{-1} we use the formula

    B_next^{-1} = E B^{-1}

where E is the m × m identity matrix whose s-th column is replaced by

    ξ = (1 / (B^{-1} P_r)_s) · ( −(B^{-1} P_r)_1, ..., −(B^{-1} P_r)_{s−1}, 1, −(B^{-1} P_r)_{s+1}, ..., −(B^{-1} P_r)_m )^T,

with (B^{-1} P_r)_s > 0.

2.2 Algorithm

Let C_B be the row vector of the cost coefficients of the basic variables, and P_j the j-th column of A associated with x_j, so that A = [P_1, P_2, ..., P_n].

Step 1 (initialization). Put the problem in standard form and set B = I_m = B^{-1}, b̂ = b.

Step 2 (optimality test). For each nonbasic variable x_j compute the reduced cost Ĉ_j = C_B B^{-1} P_j − C_j, and let Ĉ_r = min_j {Ĉ_j}. If Ĉ_r ≥ 0, the current basic feasible solution is optimal and Z = C_B b̂; stop. Otherwise continue.

Step 3 (entering variable). x_r is the entering variable.

Step 4 (leaving variable). Let a_r = B^{-1} P_r and b̂ = B^{-1} b. The leaving variable x_s is determined by the ratio test

    min { b̂_i / (a_r)_i : (a_r)_i > 0 } = b̂_s / (a_r)_s.

If (a_r)_i ≤ 0 for all i, the optimal solution is unbounded; stop. Otherwise let γ = b̂_s / (a_r)_s.

Step 5 (update). Set b̂ = b̂ − γ a_r, then b̂_s = γ. The inverse of the new basis is determined by the product form of the inverse, B_next^{-1} = E B^{-1}, where E = (e_1, e_2, ..., e_{s−1}, ξ, e_{s+1}, ..., e_m), e_i is the vector of zeros except for a 1 in position i, and

    ξ = (1 / (a_r)_s) · ( −(a_r)_1, ..., −(a_r)_{s−1}, 1, −(a_r)_{s+1}, ..., −(a_r)_m )^T.

Go to Step 2.

3 The new product form of the inverse

Suppose x_s is the leaving variable. B_next^{-1} is determined by the formula

    B_next^{-1} = B^{-1} + (ξ − e_s) L_s,

where e_s is the m-vector of zeros except for a 1 in position s, and L_s = (l_s1, l_s2, ..., l_sm) is the s-th row of B^{-1}.

3.1 Algorithm

Steps 1 through 4 are identical to those of the revised simplex method (Section 2.2); only the update step changes.

Step 5 (update). Set b̂ = b̂ − γ a_r, then b̂_s = γ. Write B^{-1} = (l_ij), i, j = 1, ..., m. The inverse of the new basis is determined as follows. Compute

    ξ = (1 / (a_r)_s) · ( −(a_r)_1, ..., −(a_r)_{s−1}, 1, −(a_r)_{s+1}, ..., −(a_r)_m )^T

and set ξ_s = ξ_s − 1. Extract the s-th row of B^{-1}, L_s = (l_s1, l_s2, ..., l_sm). Then

    B_next^{-1} = B^{-1} + ξ L_s.

Go to Step 2.

3.2 Proof

In the revised simplex method, the inverse of the new basis is determined by the product form of the inverse:

    B_next^{-1} = E B^{-1}

where E = (e_1, e_2, ..., e_{s−1}, ξ, e_{s+1}, ..., e_m) with

    ξ^T = ( −a_1r/a_sr, −a_2r/a_sr, ..., −a_(s−1)r/a_sr, 1/a_sr, −a_(s+1)r/a_sr, ..., −a_mr/a_sr )

and I is the m × m identity matrix. Consider Q = E − I, so that E = I + Q and

    B_next^{-1} = E B^{-1} = (I + Q) B^{-1} = B^{-1} + Q B^{-1}.

Write B^{-1} = (l_ij) and Q B^{-1} = (c_ij), i, j = 1, ..., m. Since the only nonzero column of Q is its s-th column, ξ − e_s, we get

    c_ij = −(a_ir / a_sr) l_sj  for i ≠ s,      c_sj = (1/a_sr − 1) l_sj.

With L_s = (l_s1, l_s2, ..., l_sm) this is exactly the equality Q B^{-1} = (ξ − e_s) L_s. Therefore

    B_next^{-1} = B^{-1} + Q B^{-1} = B^{-1} + (ξ − e_s) L_s.

3.3 Algorithmic analysis

To compute B_next^{-1} we use the formula B_next^{-1} = B^{-1} + ξ̄ L_s with ξ̄ = ξ − e_s. Let D = ξ̄ L_s, so that B_next^{-1} = B^{-1} + D. Now D is the product of a column vector of dimension m with a row vector of dimension m, which takes m² multiplications, and B^{-1} + D is the sum of two m × m matrices, which takes m² additions. Thus the new product form is O(m²).

4 Numerical results

In this section we present numerical results on randomly generated sparse LPs with three densities: 10%, 5% and 2.5%. The LPs solved have the form

    max Z = CX subject to AX ≤ b, X ≥ 0.
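The identity E B^{-1} = B^{-1} + (ξ − e_s) L_s established in Section 3.2 can also be checked numerically. The sketch below is our own construction (the 3 × 3 data are arbitrary); it computes the update both ways, assuming 0-based indices:

```python
def eta_update(B_inv, a_r, s):
    """Classical product form: B_next^{-1} = E B^{-1}, where E is the
    identity whose s-th column is the eta vector xi."""
    m = len(B_inv)
    piv = a_r[s]
    xi = [-a_r[i] / piv for i in range(m)]
    xi[s] = 1.0 / piv
    E = [[xi[i] if j == s else (1.0 if i == j else 0.0) for j in range(m)]
         for i in range(m)]
    # plain matrix-matrix product E * B^{-1}
    return [[sum(E[i][k] * B_inv[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def rank_one_update(B_inv, a_r, s):
    """New product form (Section 3): B_next^{-1} = B^{-1} + (xi - e_s) L_s."""
    m = len(B_inv)
    piv = a_r[s]
    xi_bar = [-a_r[i] / piv for i in range(m)]
    xi_bar[s] = 1.0 / piv - 1.0
    L_s = B_inv[s]
    return [[B_inv[i][j] + xi_bar[i] * L_s[j] for j in range(m)]
            for i in range(m)]

# Arbitrary example: the two updates must agree entrywise.
B_inv = [[1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 2.0]]
a_r = [2.0, 4.0, 1.0]          # a_r = B^{-1} P_r, leaving row s = 1
A1 = eta_update(B_inv, a_r, 1)
A2 = rank_one_update(B_inv, a_r, 1)
assert all(abs(A1[i][j] - A2[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```

The difference is in cost only: the rank-one form needs exactly m² multiplications and m² additions, whereas forming E B^{-1} as a full matrix-matrix product, as above, costs O(m³).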

Here A is an m × n matrix with a_ij ∈ [50, 400], C is a row vector of dimension n with c_j ∈ [0, 700], and b is a column vector of dimension m with b_i ∈ [10, 100]. The algorithms were implemented in Matlab 6.5. We tested two algorithms: the revised simplex method of Section 2.2, called revsimp, and the algorithm of Section 3.1, called mrs. In the tables below, NbIter is the number of iterations and nnz is the number of nonzero elements of the matrix A.

5 Conclusion

The algorithm presented in this paper is based on the revised simplex method. It uses a new matrix inversion whose complexity is quadratic; this new inversion replaces the product form of the inverse. The computations show that the number of operations is reduced considerably.
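To show how the pieces fit together, here is a toy driver in the spirit of the experiments (max CX subject to AX ≤ b, X ≥ 0, started from the slack basis), maintaining B^{-1} with the new product form in Step 5. This is our own illustrative sketch in pure Python, not the authors' Matlab 6.5 code; it omits the periodic reinversion, anti-cycling safeguards, and any sparsity handling:

```python
def solve_lp_max(c, A, b):
    """Toy revised simplex for: max c·x  s.t.  A x <= b, x >= 0 (b >= 0).
    The basis inverse is maintained with the rank-one update of Section 3.
    Returns the optimal objective value Z."""
    m, n = len(A), len(A[0])
    cost = list(c) + [0.0] * m                     # structural + slack costs
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    cols += [[1.0 if i == k else 0.0 for i in range(m)] for k in range(m)]
    basis = list(range(n, n + m))                  # start from the slack basis
    B_inv = [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]
    b_hat = list(b)
    while True:
        # Step 2: reduced costs  C_hat_j = C_B B^{-1} P_j - c_j
        y = [sum(cost[basis[i]] * B_inv[i][k] for i in range(m))
             for k in range(m)]
        r, best = -1, -1e-9
        for j in range(n + m):
            if j in basis:
                continue
            c_hat = sum(y[k] * cols[j][k] for k in range(m)) - cost[j]
            if c_hat < best:
                r, best = j, c_hat
        if r < 0:                                  # all C_hat_j >= 0: optimal
            return sum(cost[basis[i]] * b_hat[i] for i in range(m))
        # Steps 3-4: pivot column a_r = B^{-1} P_r and ratio test
        a_r = [sum(B_inv[i][k] * cols[r][k] for k in range(m))
               for i in range(m)]
        s, gamma = -1, None
        for i in range(m):
            if a_r[i] > 1e-12 and (gamma is None or b_hat[i] / a_r[i] < gamma):
                s, gamma = i, b_hat[i] / a_r[i]
        if s < 0:
            raise ValueError("unbounded problem")
        # Step 5: update b_hat, then B^{-1} by the new product form
        b_hat = [b_hat[i] - gamma * a_r[i] for i in range(m)]
        b_hat[s] = gamma
        xi_bar = [-a_r[i] / a_r[s] for i in range(m)]
        xi_bar[s] = 1.0 / a_r[s] - 1.0
        L_s = B_inv[s][:]
        B_inv = [[B_inv[i][j] + xi_bar[i] * L_s[j] for j in range(m)]
                 for i in range(m)]
        basis[s] = r
```

For example, for max 3x₁ + 2x₂ subject to x₁ + x₂ ≤ 4, x₁ ≤ 2, x₂ ≤ 3 the driver returns Z = 10, attained at x = (2, 2).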

Table 1. Results on sparse LPs of density 10% and dimension m × m

  m    n     nnz   NbIter  revsimp     mrs
 100  100     962      96      080     030
 150  150    2150     171      346     142
 200  200    3824     266     1035     387
 250  250    5956     450     3574    1154
 300  300    8564     624     7848    2365
 350  350   11663     761    18023    4491
 400  400   15254    1538    48596   12274
 450  450   19258    1198    56519   12422
 500  500   23782    1255    76996   16922

Table 2. Results on sparse LPs of density 10% and dimension (n + 50) × n

  m    n     nnz   NbIter  revsimp     mrs
 150  100    1430     201      354     133
 200  150    2846     230      879     321
 250  200    4753     525     3730    1162
 300  250    7158     793     9991    2852
 350  300   10010     834    18036    5012
 400  350   13314     715    21470    5834
 450  400   17151    1376    63199     127
 500  450   21375    1470    85879     170

Table 3. Results on sparse LPs of density 10% and dimension n × (n + 50)

  m    n     nnz   NbIter  revsimp     mrs
 100  150    1433     297      201     103
 150  200    2851     289      528     238
 200  250    4750     280     1089      44
 250  300    7130     841     6312    2181
 300  350    9999     850    10904    3403
 350  400   13236     860    16907    5356
 400  450   17103     919    28988    6701
 450  500   21383    1733    71072   16543

Table 4. Results on sparse LPs of density 5% and dimension m × m

  m    n     nnz   NbIter  revsimp     mrs
 100  100     489      81      061     032
 150  150    1094     211      381     163
 200  200    1951     380     1567     531
 250  250    3051     660     5604     155
 300  300    4391    1550    19850    5922
 350  350    5980    1166    23227    6494
 400  400    7799    1193      350    8541
 450  450    9883    1530    63313   14964
 500  500   12198    1734    10695     211

Table 5. Results on sparse LPs of density 5% and dimension (n + 50) × n

  m    n     nnz   NbIter  revsimp     mrs
 150  100     724     159      285     105
 200  150    1460     200      746     256
 250  200    2435     416     2997     933
 300  250    3654     720     9254    2577
 350  300    5109    1130    21977    5786
 400  350    6811    1636    46718   11228
 450  400    8770    2135    86944   19253
 500  450   10968    2167    11929     247

Table 6. Results on sparse LPs of density 5% and dimension n × (n + 50)

  m    n     nnz   NbIter  revsimp     mrs
 100  150     730     196      172     096
 150  200    1457     340      699     328
 200  250    2453     918     3952    1485
 250  300    3662    1413    13886    4266
 300  350    5131    1045    17112    4877
 350  400    6821    1743    56423   14907
 400  450    8756    2174     1226   26544
 450  500   10985    3157    14398   54296

Table 7. Results on sparse LPs of density 2.5% and dimension m × m

  m    n     nnz   NbIter  revsimp     mrs
 400  400    3946    1640      521     147
 425  425    4462    1774     6487   15248
 450  450    5009    2434    12193   26289
 475  475    5576    2069    10851   22482
 500  500    6165    4296    30058   64241

References

[1] J. Acher, J. Gardelle, Programmation linéaire, Dunod Décision, 1978.

[2] G. Desbazeille, Exercices et problèmes de recherche opérationnelle, Bordas, Paris, 1976.

[3] R. Faure, B. Lemaire, C. Picouleau, Précis de recherche opérationnelle, 5e édition, Dunod, 2000.

[4] Y. Gningue, La programmation linéaire avec applications aux problèmes de transport, Université Laurentienne, Ontario, Canada, 2000.

[5] H. A. Taha, Operations Research: An Introduction, 6th edition, Prentice-Hall, Inc., 1997.

[6] A. Kaufmann, Méthodes et modèles de la recherche opérationnelle, tome 1, Dunod, Paris, 1962.

[7] S. S. Morgan, A Comparison of Simplex Method Algorithms, Thesis, University of Florida, 1997.

[8] K. Paparrizos, N. Samaras, G. Stephanides, A new primal-dual simplex algorithm, Computers and Operations Research, 2001.

Received: April, 2012