AN EXPERIMENTAL INVESTIGATION OF A PRIMAL-DUAL EXTERIOR POINT SIMPLEX ALGORITHM


Glavelis Themistoklis (PhD Candidate), Samaras Nikolaos (Assistant Professor) and Paparrizos Konstantinos (Professor)
Department of Applied Informatics, University of Macedonia, 156 Egnatia Str., 54006 Thessaloniki, Greece

Abstract

The aim of this paper is to present an experimental investigation of a Primal-Dual Exterior Point Simplex Algorithm (PDEPSA) for Linear Programming problems (LPs). There is a huge gap between the theoretical worst-case complexity of simplex-type algorithms and their practical performance. The algorithm relies on interior points to move from one basic solution to another. The main advantage of PDEPSA is its computational performance, which stems from the fact that it copes much better with the problems of stalling and cycling. Moreover, the use of interior points reduces both the number of iterations and the CPU time, especially on degenerate linear problems. A computational study is also presented, with experiments on randomly generated sparse optimal linear problems, which increases the reliability of the conclusions. The results are very encouraging and promising, since the algorithm proved superior to both the exterior point algorithm and the primal revised simplex algorithm in the computational study.

KEYWORDS: Linear programming, Primal Simplex algorithms, Exterior Point Algorithms, Computational results.

1. INTRODUCTION

One of the most significant and well-studied optimization problems is the Linear Programming problem (LP). Linear programming consists of optimizing (minimizing or maximizing) a linear function over a certain domain, which is given by a set of linear constraints. The simplex algorithm is the oldest and best-known solution method for LPs. George B. Dantzig is regarded as the founder of linear programming and the person who set the fundamental principles of optimization. The simplex algorithm starts from a feasible solution and moves from one vertex to an adjacent one until an optimal solution is computed (Dantzig, 1963). The two main drawbacks of the simplex algorithm are stalling and cycling; cycling, in particular, occurs in many practical problems. In order to avoid cycling, researchers have introduced many anti-cycling pivoting rules (Terlaky & Zhang, 1993). The simplex algorithm performs effectively in practice only under specific circumstances, especially on small or medium size LPs. It has been proved that the average number of iterations of the simplex algorithm is bounded by a polynomial (Borgwardt, 1982), even though its worst-case complexity is exponential (Klee & Minty, 1972). Until the 1980s, the simplex algorithm and barrier methods were the only known solution approaches for LPs. This situation changed in 1984, when the first practical interior point method for linear programming appeared (Karmarkar, 1984). In the following years, researchers focused their attention on how the simplex algorithm and Interior Point Methods (IPMs) can be combined in order to improve the computational behavior of software packages (Andersen & Ye, 1996).
In contrast to the simplex method, IPMs compute an optimal solution by moving through the interior of the feasible region. Research subsequently turned to primal-dual algorithms, based on both IPMs and the simplex method, which came to be regarded as the most significant and useful algorithms. Primal-dual methods for linear programming have interesting theoretical properties. On the

computational side, Mehrotra's predictor-corrector algorithm became the basis of most interior-point software (Wright, 1997). Primal-dual methods have good computational performance and can be extended to wider classes of problems in mathematical programming (Gondzio, 1996). Apart from IPMs, another approach to solving LPs is to modify the main idea of the simplex algorithm so that it moves from one basic vertex to another through the exterior of the feasible region; such an algorithm may construct basic infeasible solutions instead of feasible ones. Algorithms with these features are called Exterior Point Simplex Algorithms (EPSA). The first exterior point algorithm was introduced in 1991 for the assignment problem (Paparrizos, 1991). Since then, many papers have been published that enhance the capabilities of exterior point algorithms. A significant improvement is the primal-dual versions of the algorithm, which reveal its superiority over the simplex algorithm. The main idea of the exterior point algorithm is that it relies on two paths for reaching the optimal solution: the first path consists of feasible solutions and is followed until the optimum is found, while the other consists of basic points lying on an exterior path. The exterior point algorithm has two main disadvantages (Paparrizos et al., 2003B). The first concerns the construction of moving directions that lead the algorithm close to the optimal solution; creating a direction with these features is a difficult process. The second is that no known method can reveal a path leading into the interior of the feasible region, which would make the search for a computationally good direction easier. These disadvantages can be avoided if the exterior path is replaced with a dual feasible simplex path, a method introduced by Paparrizos et al. (2003A). The main idea of the Revised Primal-Dual Simplex Algorithm (RPDSA) is to move from any interior point to an optimal basic solution. The advantage of this algorithm is that the optimal solution is not computed by an interior point method; an IPM is used, at most, in a first stage. After the first few iterations, IPMs no longer yield great improvements in the objective value, so RPDSA can be applied in the second stage and computes the optimal solution in a few iterations, much faster than an interior point method. This algorithm is better than the exterior point algorithm and deals very well with the two disadvantages described above. The algorithm of this paper introduces a new technique that copes with stalling and cycling and improves the performance of RPDSA; it was developed by Samaras (2001). The algorithm is primal-dual, meaning that it simultaneously solves both the primal and the dual problem. RPDSA begins with a boundary point of the feasible region, which is used to compute the leaving variable. It has been observed that at this stage the problem of stalling can arise very often. This weakness can be overcome if the boundary point is replaced by an interior point; moving into the interior of the feasible region eliminates the problem of cycling. In order to gain insight into the practical behavior of the proposed algorithm, we have performed some computational experiments.
Preliminary results on randomly generated LPs show that the reduction in computational effort is promising. The paper is organized as follows. Following the introduction, Section 2 briefly describes the general framework of the proposed algorithm. Section 3 presents the computational study. Finally, Section 4 gives the conclusions and possible enhancements of the proposed algorithm.

2. ALGORITHM DESCRIPTION

In this section we briefly describe the Primal-Dual Exterior Point Simplex Algorithm (PDEPSA). For solving general LPs see Paparrizos et al. (2003B), and for a full description of PDEPSA see Samaras (2001). Consider the following linear program in standard form:
\[
\begin{aligned}
\min\ & c^T x \\
\text{s.t. } & Ax = b \qquad \text{(LP)} \\
& x \ge 0
\end{aligned}
\]
where $A \in \mathbb{R}^{m \times n}$, $c, x \in \mathbb{R}^n$, $b \in \mathbb{R}^m$ and $T$ denotes transposition. Assume that $A$ has full rank, $\operatorname{rank}(A) = m$ ($m < n$). The dual problem associated with (LP) is
\[
\begin{aligned}
\max\ & b^T w \\
\text{s.t. } & A^T w + s = c \qquad \text{(DP)} \\
& s \ge 0
\end{aligned}
\]
where $w \in \mathbb{R}^m$ and $s \in \mathbb{R}^n$.
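To make the primal-dual pair concrete, here is a minimal MATLAB check of weak duality on a tiny standard-form instance; the data A, b, c and the chosen points below are hypothetical, used only for illustration: any dual feasible (w, s) bounds the objective of any primal feasible x from below.

```matlab
% Weak duality check on a small standard-form LP (hypothetical data).
A = [1 1 1 0; 2 1 0 1];      % A in R^(2x4), full row rank
b = [4; 5];
c = [-3; -2; 0; 0];

x = [1; 2; 1; 1];             % primal feasible: A*x = b, x >= 0
w = [-2; -1];                 % a dual vector
s = c - A' * w;               % dual slack from A'*w + s = c

assert(norm(A*x - b) < 1e-12 && all(x >= 0));  % primal feasibility
assert(all(s >= -1e-12));                       % dual feasibility
fprintf('b''*w = %g <= c''*x = %g\n', b'*w, c'*x);  % weak duality holds
```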

Step 0 (Initialization):
A) Start with a dual feasible basic partition $(B, N)$ and an interior point $y$ of (LP). Set $P = N$, $Q = \emptyset$ and compute
\[
x_B = (A_B)^{-1} b, \qquad w^T = (c_B)^T (A_B)^{-1}, \qquad (s_N)^T = (c_N)^T - w^T A_N .
\]
B) Compute the direction $d_B$ from the relation $d_B = y_B - x_B$.

Step 1 (Test of optimality and choice of the leaving variable): If $x_B \ge 0$, STOP; (LP) is optimal. Otherwise, choose the leaving variable $x_k = x_{B[r]}$ from the relation
\[
\alpha = \frac{x_{B[r]}}{d_{B[r]}} = \max \left\{ \frac{x_{B[i]}}{d_{B[i]}} \,:\, d_{B[i]} > 0 \ \wedge\ x_{B[i]} < 0 \right\} .
\]

Step 2 (Computation of the next interior point): Set
\[
\alpha = \frac{\alpha + 1}{2}
\]
and compute the interior point $y_B = x_B + \alpha d_B$.

Step 3 (Choice of the entering variable): Set $H_{rN} = (B^{-1})_{r.} A_{.N}$. Choose the entering variable $x_l$ from the relation
\[
\frac{s_l}{H_{rl}} = \min \left\{ \frac{s_j}{H_{rj}} \,:\, j \in N \right\} .
\]
Compute the pivoting column $h_l = B^{-1} A_{.l}$. If $l \in P$, set $P = P \setminus \{l\}$; else set $Q = Q \setminus \{l\}$.

Step 4 (Pivoting): Set $B[r] = l$ and $Q = Q \cup \{k\}$. Using the new partition $(B, N)$, where $N = (P, Q)$, compute the new basis inverse $B^{-1}$ and the variables
\[
x_B = (A_B)^{-1} b, \qquad w^T = (c_B)^T (A_B)^{-1}, \qquad (s_N)^T = (c_N)^T - w^T A_N .
\]
Go to Step 0(B).
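As a concrete illustration of Steps 0B-4, the following MATLAB sketch performs one iteration of the scheme above on dense data. It is only a simplified reading of the printed rules under stated assumptions (the function name is hypothetical, explicit solves replace basis-inverse updates, the sets P and Q are not maintained, and a small numerical guard is added to the Step 3 ratio test); it is not the authors' implementation.

```matlab
function [Bidx, Nidx, y] = pdepsa_iteration(A, b, c, Bidx, Nidx, y)
% One illustrative iteration of the Section 2 scheme (sketch only).
% Bidx/Nidx hold the basic/nonbasic column indices of the partition (B, N);
% y is an interior point of (LP).
AB = A(:, Bidx);
xB = AB \ b;                          % x_B = inv(A_B) * b
w  = AB' \ c(Bidx);                   % w from w^T = c_B^T * inv(A_B)
sN = c(Nidx) - A(:, Nidx)' * w;       % (s_N)^T = (c_N)^T - w^T * A_N
dB = y(Bidx) - xB;                    % Step 0B: d_B = y_B - x_B

if all(xB >= 0), return; end          % Step 1: current solution is optimal

ratio = -Inf(numel(xB), 1);           % Step 1: leaving-variable ratio test
mask  = (dB > 0) & (xB < 0);          % eligible rows per the printed rule
ratio(mask) = xB(mask) ./ dB(mask);
[alpha, r] = max(ratio);              % x_k = x_B(r) leaves the basis

alpha = (alpha + 1) / 2;              % Step 2: step length toward interior
y(Bidx) = xB + alpha * dB;            % next interior point y_B

er = zeros(numel(Bidx), 1); er(r) = 1;
HrN = (AB' \ er)' * A(:, Nidx);       % Step 3: H_rN = (B^{-1})_r. * A_N
t = sN(:)' ./ HrN;                    % ratio test over j in N
t(HrN >= 0) = Inf;                    % guard added in this sketch only
[~, jmin] = min(t);

l = Nidx(jmin);                       % entering variable x_l
Nidx(jmin) = Bidx(r);                 % Step 4: k moves to the nonbasic side
Bidx(r) = l;                          % B[r] = l
end
```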

3. COMPUTATIONAL STUDY

In order to test the performance and practical effectiveness of the proposed algorithm, a computational study is presented below. The problems included in the study are small and medium scale sparse instances. The study compares the revised simplex algorithm, the Exterior Point Simplex Algorithm (EPSA) and the Primal-Dual Exterior Point Simplex Algorithm (PDEPSA). All algorithms were implemented in MATLAB Professional R2010b. MATLAB (MATrix LABoratory) is a powerful programming environment and, as its name suggests, is especially designed for matrix computations such as solving systems of linear equations or factorizing matrices; it is also an interactive environment and programming language for numeric scientific computing. The computing environment was an Intel(R) Core i7 3.00 GHz (2 processors) with 16,384 MB of RAM, running Microsoft Windows 7 Professional SP1. All times in the following tables are measured in seconds with the cputime built-in function of MATLAB and include the time spent on scaling. All runs were made as batch jobs. Two techniques are used to improve the performance of memory-bound code in MATLAB: (i) pre-allocating arrays, and (ii) storing and accessing data in columns; both are illustrated in the sketch below.
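Both techniques exploit MATLAB's column-major memory layout. The fragment below is a generic illustration of (i) and (ii), not code from the study.

```matlab
% (i) Pre-allocate: growing an array inside a loop reallocates it
% on every iteration; allocating once up front avoids that cost.
n = 1e6;
v = zeros(n, 1);                 % allocated once
for i = 1:n
    v(i) = i^2;                  % no reallocation inside the loop
end

% (ii) Column-wise access: MATLAB stores matrices column-major,
% so sweeping down columns touches memory contiguously.
M = rand(2000);
colSums = zeros(1, 2000);
for j = 1:2000
    colSums(j) = sum(M(:, j));   % contiguous column access
end
```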

Three different density classes of LPs are used in the computational study: 5%, 10% and 20%. For each dimension, 10 different instances are tested. These LPs consist of randomly generated matrices, and the initial basis consists only of the slack variables. Tables 1-3 report the average execution time in seconds (CPU time) and the average number of iterations (niter) that each competing algorithm needed to solve the LPs.

Table 1. Results for randomly generated sparse LPs with dimension n x n and density 5%

                   PDEPSA                  EPSA                    Simplex
n x n          CPU time (s)   niter    CPU time (s)   niter    CPU time (s)   niter
500 x 500            0.8502     406          3.7924   1,536          3.9617    1,660
600 x 600            1.4492     520          6.8079   2,042          8.5560    2,487
700 x 700            2.3447     684         11.4099   2,627         17.9035    3,706
800 x 800            3.5225     831         19.4112   3,238         33.4693    5,187
900 x 900            5.2479   1,010         27.5498   3,591         48.6587    6,034
1000 x 1000          7.7650   1,218         39.6211   4,148         82.6693    7,962
1100 x 1100         11.3865   1,319         59.9605   4,991        138.3463   10,331
1200 x 1200         15.6400   1,552         75.3610   5,406        197.5266   12,372
1300 x 1300         21.9790   1,749        103.0355   6,151        300.8046   15,533
1400 x 1400         27.4468   1,916        125.4295   6,676        416.6580   18,237
1500 x 1500         34.0847   2,071        151.6782   7,120        574.1572   21,802
1600 x 1600         46.9520   2,428        186.7300   7,712        748.6919   24,400
1700 x 1700         54.4100   2,576        225.6000   8,388       1012.0000   28,642
1800 x 1800         64.1663   2,696        270.6056   9,015       1366.8000   34,026
1900 x 1900         80.0582   2,988        328.8189   9,713       1643.0000   36,365
2000 x 2000         91.4950   3,175        375.5810  10,164       2057.0000   41,011
Total              468.7981  27,139       2011.3925  92,518       8650.2031  269,755

As the first table makes clear, PDEPSA is superior to the other two algorithms; its execution time is much lower than that of EPSA and of the simplex algorithm. For example, PDEPSA needs almost 8 seconds to solve a 1000 x 1000 problem, while EPSA demands almost 40 seconds and the simplex algorithm needs 83 seconds. Likewise, the number of iterations is much greater for the simplex algorithm and much smaller for PDEPSA: on a 2000 x 2000 problem the simplex algorithm requires on average about 41,000 iterations, whereas PDEPSA needs about 3,200 and EPSA about 10,200. On the same problem, PDEPSA solves the LP in about 90 seconds against about 2,100 seconds for the simplex algorithm. One may infer a growth in the relative speed of PDEPSA with respect to the simplex algorithm and EPSA as problem sizes increase.

Table 2. Results for randomly generated sparse LPs with dimension n x n and density 10%

                   PDEPSA                  EPSA                    Simplex
n x n          CPU time (s)   niter    CPU time (s)   niter    CPU time (s)   niter
500 x 500            0.9641     420          3.7627   1,529          5.9177    2,281
600 x 600            1.7690     569          7.0434   1,996         13.9170    3,557
700 x 700            2.5460     653         11.8650   2,378         24.2287    4,658
800 x 800            4.2027     828         19.6858   2,817         41.2261    6,001
900 x 900            6.7361   1,049         26.6060   3,186         71.3937    7,869
1000 x 1000          9.0184   1,109         38.6524   3,646        112.3835    9,647
1100 x 1100         13.8825   1,290         55.6768   4,195        174.3109   11,878
1200 x 1200         20.6296   1,539         67.4408   4,485        245.1684   13,907
1300 x 1300         25.7230   1,644         97.9452   5,265        387.0143   17,688
1400 x 1400         31.8000   1,762        111.0493   5,484        514.3469   19,897
1500 x 1500         42.4307   2,048        139.9000   5,889        646.8230   22,105
1600 x 1600         49.3322   2,077        166.4187   6,305        878.5583   25,665
1700 x 1700         63.1305   2,351        208.5203   6,852       1137.4827   28,744
1800 x 1800         77.4171   2,567        251.1772   7,338       1441.0763   32,301
1900 x 1900         90.2918   2,680        296.2397   3,505       1773.6488   35,054
2000 x 2000        104.0885   2,803        332.9217   8,147       2300.1632   40,319
Total              543.9622  25,389       1834.9050  76,706       9767.6594  281,571

In these less sparse problems, PDEPSA continues to perform remarkably well; its superiority is clear, and it is without doubt an effective and promising algorithm. In all dimensions, PDEPSA has the shortest execution time and the smallest number of iterations.

Table 3. Results for randomly generated sparse LPs with dimension n x n and density 20%

                   PDEPSA                  EPSA                    Simplex
n x n          CPU time (s)   niter    CPU time (s)   niter    CPU time (s)   niter
500 x 500            1.1747     403          3.4476   1,313          6.5975    2,270
600 x 600            2.2386     527          5.4132   1,548         13.6264    3,255
700 x 700            3.4632     634         11.9684   1,964         26.4489    4,469
800 x 800            5.2323     762         17.7794   2,302         43.8315    5,597
900 x 900            8.6003     905         25.5483   2,598         74.2187    7,137
1000 x 1000         12.6954   1,043         35.2547   2,937        125.6610    8,928
1100 x 1100         16.7529   1,088         50.4632   3,304        184.9891   10,597
1200 x 1200         22.9415   1,246         63.3988   3,555        231.0437   11,292
1300 x 1300         31.5106   1,415         87.3496   4,099        349.3742   13,595
1400 x 1400         42.2669   1,604        104.9980   4,250        441.9010   15,130
1500 x 1500         51.6083   1,733        129.9145   4,673        600.8061   17,246
1600 x 1600         63.8855   1,850        171.6245   5,232        840.5429   20,385
1700 x 1700         76.7353   1,963        210.9352   5,653       1097.0000   23,049
1800 x 1800         90.8534   2,068        251.1585   6,049       1364.2000   24,985
1900 x 1900        112.5890   2,291        297.1148   6,407       1720.4000   27,967
2000 x 2000        135.8129   2,462        362.7834   6,951       2183.1451   30,733
Total              678.3608  21,994       1829.1521  62,835       9303.7861  226,635

In this last group of problems, with the highest density, PDEPSA remains by far the algorithm with the best performance. As a characteristic example, on a 1500 x 1500 problem PDEPSA requires on average 52 seconds, while EPSA needs 130 seconds and the simplex algorithm 600 seconds. Likewise, PDEPSA needs far fewer iterations, reaching the optimal solution within about 1,730 iterations, against about 4,670 for EPSA and almost 17,200 for the simplex algorithm.

Furthermore, the figures below show the superiority of PDEPSA over EPSA and the simplex algorithm more clearly. Figures 1-3 present the ratios (execution time of EPSA)/(execution time of PDEPSA), (iterations of EPSA)/(iterations of PDEPSA), (execution time of the simplex algorithm)/(execution time of PDEPSA) and (iterations of the simplex algorithm)/(iterations of PDEPSA) for the corresponding densities and dimensions. These ratios indicate how many times PDEPSA is better than EPSA and the simplex algorithm.

Figure 1. Speed-up ratios for n x n LPs and density 5% (CPU time and iteration ratios of EPSA and of the simplex algorithm over PDEPSA).

From this figure it is clear that, as the problem dimension increases, the superiority of PDEPSA over EPSA does not vary much: PDEPSA is about five times faster than EPSA and requires about five times fewer iterations. On the other hand, the superiority of PDEPSA over the simplex algorithm grows with the dimension of the problems. On small LPs, PDEPSA is almost five times faster and requires five times fewer iterations than the simplex method; on bigger LPs, such as the 2000 x 2000 instances, PDEPSA is almost 23 times faster and needs 13 times fewer iterations.

Figure 2. Speed-up ratios for n x n LPs and density 10% (CPU time and iteration ratios of EPSA and of the simplex algorithm over PDEPSA).

As the density of the problems increases, PDEPSA maintains its superiority and the results are as satisfactory as in the sparser problems. Compared to EPSA, PDEPSA is 4 times faster and demands 4 times fewer iterations on small LPs; on larger LPs the difference decreases to 3 times, both in execution time and in number of iterations. With respect to the simplex method, the results are very similar to the previous group of problems (density 5%): the superiority of PDEPSA over the simplex algorithm increases with the dimension of the problems.
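The ratios behind the figures follow directly from the tables; for example, a few MATLAB lines reproduce the 2000 x 2000, density-5% speed-ups quoted above from the last data row of Table 1.

```matlab
% Speed-up ratios for the 2000 x 2000, density-5% row of Table 1.
pdepsa  = struct('cpu',   91.4950, 'niter',  3175);
epsa    = struct('cpu',  375.5810, 'niter', 10164);
simplex = struct('cpu', 2057.0000, 'niter', 41011);

fprintf('EPSA/PDEPSA    cpu: %5.2f  niter: %5.2f\n', ...
        epsa.cpu / pdepsa.cpu,    epsa.niter / pdepsa.niter);
fprintf('Simplex/PDEPSA cpu: %5.2f  niter: %5.2f\n', ...
        simplex.cpu / pdepsa.cpu, simplex.niter / pdepsa.niter);
% Prints roughly 4.1 / 3.2 for EPSA and 22.5 / 12.9 for the simplex
% algorithm, matching the "almost 23 times faster, 13 times fewer
% iterations" statement for this size and density.
```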

Figure 3. Speed-up ratios for n x n LPs and density 20% (CPU time and iteration ratios of EPSA and of the simplex algorithm over PDEPSA).

In this last group of problems, with density at the level of 20%, the results do not differ much from the previous ones. PDEPSA is almost 3 times better than EPSA, both in execution time and in number of iterations. Against the simplex algorithm, PDEPSA is almost 6 times faster and requires 6 times fewer iterations on small LPs, such as the 500 x 500 and 600 x 600 problems; on bigger LPs the difference increases, reaching 16 times faster in execution time and 10 times fewer iterations for PDEPSA. The results of the computational study clearly show that PDEPSA runs faster and with far fewer iterations than the simplex algorithm. Nevertheless, it is worth noting that as the density increases, the superiority of PDEPSA decreases: on the 2000 x 2000 problems, PDEPSA is 23 times faster than the simplex algorithm at 5% density, but almost 16 times faster at 20% density. Although this looks like a drawback, in practice it is not: in the real world, the vast majority of problems are extremely sparse, with densities below 5%. Consequently, the computational performance of PDEPSA on sparse problems is a remarkable advantage and clearly shows its worth.

4. CONCLUSION

The current paper investigates the practical behavior of PDEPSA. The PDEPSA presented in this paper is an attempt to avoid the problems of stalling and cycling; eliminating these disadvantages leads to a more effective algorithm with better computational performance. Apart from the description of the algorithm, we presented an extended comparative computational study of the revised primal simplex algorithm, the Exterior Point Simplex Algorithm and the Primal-Dual Exterior Point Simplex Algorithm. The study used randomly generated sparse optimal linear problems, and all implementations were written in the MATLAB programming environment. Regarding the results, PDEPSA proved superior to EPSA and the simplex method at all densities and sizes. PDEPSA shows the same behavior relative to EPSA at all LP sizes; their difference is not affected by the dimensions of the linear problems. In contrast, at all densities the performance of PDEPSA relative to the simplex algorithm improves as the size of the LPs increases. Moreover, the computational performance of PDEPSA is much better on very sparse problems: as the results show, PDEPSA performs somewhat better at 5% density than at 20% density, and the gap between the simplex algorithm and PDEPSA is greater on sparser problems. This is a strong and significant advantage of PDEPSA, because in real problems the level of density is very low.

REFERENCES

Andersen, E.D. and Ye, Y., 1996. Combining interior-point and pivoting algorithms for linear programming. Management Science, Vol. 42, No. 12, pp. 1719-1731.
Borgwardt, K.H., 1982. The average number of pivot steps required by the simplex method is polynomial. Zeitschrift für Operations Research, Series A: Theory, Vol. 26, No. 5, pp. 157-177.
Dantzig, G.B., 1963. Linear Programming and Extensions. Princeton University Press, Princeton, NJ.
Gondzio, J., 1996. Multiple centrality corrections in a primal-dual method for linear programming. Computational Optimization and Applications, Vol. 6, No. 2, pp. 137-156.
Karmarkar, N.K., 1984. A polynomial-time algorithm for linear programming. Combinatorica, Vol. 4, pp. 373-395.
Klee, V. and Minty, G.J., 1972. How good is the simplex algorithm? In: Inequalities III. Academic Press, New York, pp. 159-175.
Paparrizos, K., 1991. An infeasible exterior point simplex algorithm for assignment problems. Mathematical Programming, Vol. 51, pp. 45-54.
Paparrizos, K., Samaras, N. and Stephanides, G., 2003A. A new efficient primal dual simplex algorithm. Computers and Operations Research, Vol. 30, pp. 1383-1399.
Paparrizos, K., Samaras, N. and Stephanides, G., 2003B. An efficient simplex type algorithm for sparse and dense linear programs. European Journal of Operational Research, Vol. 148, No. 2, pp. 323-334.
Samaras, N., 2001. Computational Improvements and Efficient Implementation of Two Path Pivoting Algorithms. PhD Thesis, Department of Applied Informatics, University of Macedonia.
Terlaky, T. and Zhang, S., 1993. Pivot rules for linear programming: a survey. Annals of Operations Research, Vol. 46, No. 1, pp. 203-233.
Wright, S.J., 1997. Primal-Dual Interior-Point Methods. Society for Industrial and Applied Mathematics, Philadelphia, PA.