Three nearly scaling-invariant versions of an exterior point algorithm for linear programming


Optimization, 2015, Vol. 64, No. 10

Three nearly scaling-invariant versions of an exterior point algorithm for linear programming

Charalampos Triantafyllidis and Nikolaos Samaras*

Department of Applied Informatics, School of Information Sciences, University of Macedonia, Thessaloniki, Greece

(Received 16 July 2013; accepted 15 May 2014)

In this paper, we describe three versions of a primal exterior point Simplex-type algorithm for solving linear programming problems. These algorithms are largely unaffected by scaling techniques. We compare their practical effectiveness against the revised primal Simplex algorithm (our implementation) and MATLAB's implementations of the Simplex algorithm and the interior point method. A computational study on randomly generated sparse linear programs is presented to establish the practical value of the proposed versions. The results are very encouraging and verify the superiority of the exterior point versions over the other algorithms, whether scaling techniques are used or not.

Keywords: linear programming; simplex algorithm; exterior point algorithm; scale invariance attribute

AMS Subject Classifications: 90C05; 90C06; 90C49

1. Introduction

Linear programming is one of the most useful and well-studied mathematical programming models, with many scientific, industrial and economic applications. Many algorithms have been invented for the solution of a linear program (LP). These algorithms can be divided into two main categories: (i) Simplex-type or pivoting algorithms and (ii) interior point methods. The computational performance of Simplex-type algorithms on real-world problems is usually far better than the theoretical worst-case complexity, but none of them admits polynomial complexity.[1-4] A breakthrough in the complexity analysis of the Simplex algorithm was made by [5]. In that work, Borgwardt showed that on average the Shadow Vertex Algorithm (SVA) converges in a number of iterations that is polynomial in m and n.

An Exterior Point Simplex-type Algorithm (EPSA) was originally developed by Paparrizos [6] for the assignment problem. Later, Paparrizos et al. [7] developed a general EPSA for the solution of general LPs. EPSA differs from the Primal Simplex Algorithm (PSA) in that its basic solutions are not feasible; more precisely, it uses two paths to converge to the optimal solution. One path is exterior to the feasible region while the other is feasible. Thus, EPSA is not bound to examine each adjacent vertex as PSA does, and can avoid a lot of boundary points of the optimal path.

*Corresponding author. Email: samaras@uom.gr

On the contrary, PSA starts with a feasible basis and uses pivot operations in order to preserve the feasibility of the basis and to guarantee the monotonicity of the objective function. EPSA, like the SVA, is a special case of the dual Monotonic Build-Up (MBU) Simplex algorithm.[8] EPSA and SVA share the common idea of using an auxiliary vector in order to force primal feasibility. Primal and/or dual feasibility is guaranteed only at the last iteration, in which the optimal solution is found. Terlaky [9] developed the criss-cross algorithm, which does not preserve primal and dual feasibility.

It is well known that the computational behaviour of Simplex-type algorithms can be improved by modifying: (i) the initial solution and (ii) the pivoting rule. How to choose a pair of indices (entering/leaving variables) is crucial for the efficiency of Simplex-type algorithms. Specifically, the number of iterations needed to solve an LP by Simplex-type algorithms depends upon the pivot columns used. The big challenge with Simplex-type algorithms is to develop and implement computational versions of them that can solve large-scale LPs efficiently. Several pivoting rules have been proposed and tested in the past to improve the efficiency of the classical Simplex algorithm.[10,11] A complete presentation of them can be found in [8]. Recently, a lot of work has been done in this direction. Pan [12] proposed a new pivot rule for the Simplex algorithm which is based on normalized reduced costs. He reports very promising computational results against the Devex pivoting rule. Furthermore, Pan [13,14] offers computational results with large-scale sparse LPs, demonstrating the superiority of nested Dantzig, steepest-edge and Devex rules over commonly used pivoting rules. A new general solution algorithm which is free from artificial variables was developed by Arsham et al. [15]. This algorithm initially constructs an empty basic variable set and then fills it up with variables having large \(c_j\). Later, Arsham [16] proposed a hybrid gradient pivotal method for solving LPs, which consists of three phases. A new method called the direct cosine Simplex algorithm was developed by Yeh and Corley [17]. Recently, an algorithm that produces a sequence of interior and boundary points was presented in [18]. Sherali and Özdaryal [19] examined three exterior point approaches for solving LPs. These algorithms have the advantage that they can be initialized at arbitrary starting solutions and offer a great degree of flexibility in designing particular algorithmic variants. Li et al. [20] proposed a primal Phase I method using the most-obtuse-angle column pivoting rule. Another approach for solving LPs was proposed by Jurik [21]. His method is based on the fact that the minimum distance between the feasible region and a hyperplane which is perpendicular to the cost coefficient vector is attained at the optimal point. Recently, Malakooti and Al-Najjar [22,23] developed a hybrid algorithm which moves from a point in the interior of the feasible region to another point on the boundary of the feasible region. As a consequence, the proposed method bypasses several extreme (boundary) points.

The aim of this paper is to examine versions of EPSA using three different strategies to enter Phase II. In order to gain an insight into the practical behaviour of the proposed versions, we have performed some computational experiments.
Since our implementations were done in the MATLAB environment, we compare the computational behaviour of each version of EPSA against PSA (our implementation) and MATLAB's implementations of the Simplex algorithm and the interior point method on randomly generated sparse LPs. LINPROG's IPM is a variant of Mehrotra's predictor-corrector algorithm,[24] a primal-dual approach. It is considered to be one of the state-of-the-art solvers for LP (see Note 1). All the implementations are designed to take advantage of MATLAB's sparse matrix functions and capabilities. There is a huge gap between the theoretical worst-case complexity and the practical performance of EPSA.

Our computational results reveal that all the versions of EPSA are substantially faster than the Simplex algorithm (both our and MATLAB's implementations) on a class of randomly generated sparse LPs.

A review of scaling techniques for LPs, with a focus on their impact on the computational performance of the Simplex algorithm, was recently presented in [25]. According to the computational results presented in [25], none of the scaling techniques outperforms the simplest one (the equilibration technique). During the computational study, we observed that EPSA's versions are almost scaling invariant. This means that EPSA's versions would still behave in almost the same manner should the order of magnitude of the coefficients of the LP change. PSA with the largest coefficient rule is not scaling invariant. In particular, for one of the tested dimensions with density 20%, the third version of EPSA without a scaling technique needs 10.84% more iterations and 15.20% more CPU time than with a scaling technique. For the same dimension and density, MATLAB's implementation of the Simplex algorithm needs % more iterations and % more CPU time, whereas PSA (our implementation) needs % more iterations and % more CPU time.

The paper is organized as follows. In Section 2, we briefly describe EPSA. In Section 3, we give the fundamentals of the artificial problem of Phase I. The three versions of EPSA are presented in Section 4. In Section 5, we present the scaling-invariant attribute. Computational results are presented in Section 6, and concluding remarks are given in Section 7.

2. EPSA algorithm

Consider the following linear programming problem (LP.1) in the standard form:

\[
\begin{aligned}
\min\ & c^T x \\
\text{s.t.}\ & Ax = b \\
& x \geq 0
\end{aligned}
\tag{LP.1}
\]

where \(A \in \mathbb{R}^{m \times n}\), \(b \in \mathbb{R}^m\), \(c, x \in \mathbb{R}^n\), \(T\) denotes transposition, and \(\mathrm{rank}(A) = m\), \(1 \leq m < n\). Consequently, the linear system \(Ax = b\) is consistent. Partitioning the matrix \(A\) as \(A = (B, N)\), with a corresponding partitioning and ordering of \(x^T = [x_B\ x_N]\) and \(c^T = [c_B\ c_N]\), (LP.1) is written as:

\[
\begin{aligned}
\min\ & c_B^T x_B + c_N^T x_N \\
\text{s.t.}\ & B x_B + N x_N = b \\
& x_B, x_N \geq 0
\end{aligned}
\]

The matrix \(B\) is an \(m \times m\) non-singular submatrix of \(A\) called the basic matrix, or basis, whereas \(N\) is an \(m \times (n-m)\) submatrix of \(A\) called the non-basic matrix. The variables corresponding to \(B\) are called basic. The remaining columns of \(A\) are referred to as non-basic. Given a basis \(B\), the associated solution \(x_B = B^{-1} b\), \(x_N = 0\) is called a basic solution. This solution is feasible if \(x_B \geq 0\); otherwise, it is called infeasible. The reduced cost vector which corresponds to the basis \(B\) is \((s_N)^T = (c_N)^T - w^T N\), \((s_B)^T = 0\), where \(w^T = (c_B)^T B^{-1}\) are the Simplex multipliers and \(s\) are the dual slack variables. Dual feasibility means that \(s \geq 0\). The \(i\)th row of the coefficient matrix \(A\) is denoted by \(A_{i.}\) and the \(j\)th column by \(A_{.j}\).
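As a concrete illustration of these definitions, the following MATLAB fragment computes a basic solution, the Simplex multipliers and the reduced costs for a given partition. This is a minimal sketch of ours, not the authors' code; the data and all variable names are illustrative.

```matlab
% Illustrative sketch: basic solution and reduced costs for a partition (B, N)
% of a small standard-form LP, using MATLAB's sparse matrix support.
A = sparse([1 1 1 0; 2 1 0 1]);   % toy constraint matrix with slack columns
b = [4; 5];  c = [-3; -2; 0; 0];
Bidx = [3 4];  Nidx = [1 2];      % basic / non-basic index sets
Binv = inv(full(A(:, Bidx)));     % basis inverse (explicit inverse: toy sizes only)
xB = Binv * b;                    % basic solution; x_N = 0
w  = (c(Bidx)' * Binv)';          % simplex multipliers, w^T = c_B^T B^{-1}
sN = c(Nidx) - A(:, Nidx)' * w;   % reduced costs, s_N^T = c_N^T - w^T N
feasible = all(xB >= 0);          % primal feasibility of the basic solution
dualFeasible = all(sN >= 0);      % dual feasibility (optimality when both hold)
```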

In solving LPs by Simplex-type methods, a great deal of computational effort is devoted to inverting the basis. The basis is maintained in some factorized form, and at every iteration its factors have to be updated. The simplest updating scheme is the product form of the inverse. Note that the total computational effort of one iteration of a Simplex-type algorithm is dominated by the computation of the inverse matrix \(B^{-1}\). The new inverse can be computed from the previous inverse \(B^{-1}\) with pivoting operations using the relation \(\overline{B}^{-1} = E^{-1} B^{-1}\), where \(E^{-1}\) is the inverse of the eta-matrix. The eta-matrix \(E\) is a square matrix in which all the columns are the corresponding columns of the identity matrix save one. The matrix \(E^{-1}\) is computed using the following relation:

\[
E^{-1} =
\begin{pmatrix}
1 & \cdots & -h_{1r}/h_{rr} & \cdots & 0 \\
\vdots & \ddots & \vdots & & \vdots \\
0 & \cdots & 1/h_{rr} & \cdots & 0 \\
\vdots & & \vdots & \ddots & \vdots \\
0 & \cdots & -h_{mr}/h_{rr} & \cdots & 1
\end{pmatrix}
\tag{1}
\]

where \(h_{rr}\) is the pivot element.

EPSA generates solutions that are not feasible. Specifically, in every iteration, EPSA generates two paths to the optimal solution. One path is exterior (infeasible) and the other is feasible. This relaxation of the feasibility requirements appears to be efficient in practice. Since the original papers on EPSA [2,6] were published 20 years ago, we proceed with only a brief description of EPSA.

Let \(B\) be an initial basis, non-optimal and not necessarily feasible for (LP.1). Contrary to PSA, EPSA first selects the leaving variable and then the entering one. In that sense, EPSA resembles the dual Simplex algorithm. Set \(P = \{ j \in N : s_j < 0 \}\) and \(Q = \{ j \in N : s_j \geq 0 \}\). If \(P = \emptyset\), then the current basis \(B\) is optimal for (LP.1) and EPSA stops. Otherwise (\(P \neq \emptyset\)), a leaving variable \(x_k\), where \(k = B[r]\), is determined. An equivalent notation is to write \(x_k = x_{B[r]}\). In order to determine the leaving variable \(x_k\), an improving direction \(d_B\) (with \(c_B^T d_B < 0\)) for (LP.1) is constructed. Then the leaving variable is determined by the minimum ratio test of the PSA. Similar to the PSA, if \(d_B \geq 0\), EPSA stops: problem (LP.1) is unbounded. Otherwise, the ray \(\{ x_B : x_B = \overline{x}_B + t d_B,\ t > 0 \}\) corresponding to the current basic solution \(\overline{x}_B\) and \(d_B\) intersects the feasible region of (LP.1). The entering variable \(x_l\) is chosen using the following two minimum ratio tests:

\[
\theta_1 = \frac{-s_p}{H_{rp}} = \min\left\{ \frac{-s_j}{H_{rj}} : H_{rj} > 0,\ j \in P \right\}, \qquad
\theta_2 = \frac{-s_q}{H_{rq}} = \min\left\{ \frac{-s_j}{H_{rj}} : H_{rj} < 0,\ j \in Q \right\}
\]

where \(H_{rP} = (B^{-1})_{r.} A_P\) and \(H_{rQ} = (B^{-1})_{r.} A_Q\). If \(\theta_1 \leq \theta_2\), the entering variable is chosen from \(P\); otherwise (\(\theta_1 > \theta_2\)), it is chosen from \(Q\). It is easily seen that priority is given to the variables in \(P\). If the entering variable is chosen from \(Q\), the new basic solution is exterior (infeasible) to problem (LP.1). At this point, a new basis \(\overline{B}\) is constructed and a new iteration can be initiated.
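A compact MATLAB rendering of the two ratio tests just described may be helpful; this is our own hedged sketch, not the authors' implementation, and it assumes the current iteration provides `A`, `Binv`, the pivot row `r`, the reduced-cost vector `s` (indexed by variable) and non-empty index vectors `P` and `Q`.

```matlab
% Hedged sketch of the entering-variable selection via the two ratio tests.
HrP = Binv(r, :) * A(:, P);                  % H_rP = (B^{-1})_r. * A_P
HrQ = Binv(r, :) * A(:, Q);                  % H_rQ = (B^{-1})_r. * A_Q
ratP = reshape(-s(P), size(HrP)) ./ HrP;     % candidate ratios -s_j / H_rj
ratP(HrP <= 0) = Inf;                        % eligible only where H_rj > 0
ratQ = reshape(-s(Q), size(HrQ)) ./ HrQ;
ratQ(HrQ >= 0) = Inf;                        % eligible only where H_rj < 0
[theta1, t1] = min(ratP);                    % theta_1 and its position t_1 in P
[theta2, t2] = min(ratQ);                    % theta_2 and its position t_2 in Q
if theta1 <= theta2, l = P(t1); else, l = Q(t2); end   % entering variable x_l
```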

A formal description of EPSA is given below.

EPSA algorithm

Step 0 (Initialization). Start with a feasible basic partition \((B, N)\). Compute \(B^{-1}\) and the vectors \(x_B, w, s_N\). Find the index sets \(P, Q\) using the relations

\[
P = \{ j \in N : s_j < 0 \}, \qquad Q = \{ j \in N : s_j \geq 0 \}.
\]

Compute \(s_0\) using the relation \(s_0 = \sum_{j \in P} s_j\). Also compute the direction vector \(d_B\) from

\[
d_B = -\sum_{j \in P} h_j, \qquad h_j = B^{-1} A_{.j}.
\]

Step 1 (Test of termination).
(i) (Optimality test). If \(P = \emptyset\), STOP: problem (LP.1) is optimal.
(ii) (Choice of leaving variable). If \(d_B \geq 0\), STOP: if \(s_0 = 0\), problem (LP.1) is optimal; if \(s_0 < 0\), problem (LP.1) is unbounded. Otherwise, choose the leaving variable \(x_k = x_{B[r]}\) using the minimum ratio test

\[
\alpha = \frac{x_{B[r]}}{-d_{B[r]}} = \min\left\{ \frac{x_{B[i]}}{-d_{B[i]}} : d_{B[i]} < 0 \right\}.
\]

If \(\alpha = +\infty\), problem (LP.1) is unbounded.

Step 2 (Choice of entering variable). Compute the row vectors \(H_{rP} = (B^{-1})_{r.} A_P\) and \(H_{rQ} = (B^{-1})_{r.} A_Q\), where \((B^{-1})_{r.}\) denotes the \(r\)th row of the matrix \(B^{-1}\). Compute the ratios \(\theta_1\) and \(\theta_2\) using the relations

\[
\theta_1 = \frac{-s_p}{H_{rp}} = \min\left\{ \frac{-s_j}{H_{rj}} : H_{rj} > 0,\ j \in P \right\}, \qquad
\theta_2 = \frac{-s_q}{H_{rq}} = \min\left\{ \frac{-s_j}{H_{rj}} : H_{rj} < 0,\ j \in Q \right\}.
\]

Determine the indices \(t_1, t_2\) such that \(P(t_1) = p\) and \(Q(t_2) = q\). If \(\theta_1 \leq \theta_2\), set \(l = p\); otherwise set \(l = q\). The non-basic variable \(x_l\) enters the basis.

Step 3 (Pivoting). Set \(B[r] = l\) (put index \(l\) in position \(r\) of the set of basic indices \(B\)). If \(\theta_1 \leq \theta_2\), set \(P \leftarrow P \setminus \{l\}\) and \(Q \leftarrow Q \cup \{k\}\); otherwise set \(Q[t_2] = k\). Using the new partition \((B, N)\), where \(N = (P, Q)\), compute the matrix \(B^{-1}\) and update the vectors \(x_B, w, s_N\). Also update \(d_B\) by \(d_B = E^{-1} d_B\), where \(E^{-1}\) is computed by (1), and compute \(s_0\). Go to Step 1.
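The update \(d_B \leftarrow E^{-1} d_B\) in Step 3 need not form \(E^{-1}\) explicitly in a tuned implementation, but a direct MATLAB transcription of relation (1) clarifies the product-form update. This is a sketch under our own naming conventions, not the authors' code.

```matlab
% Hedged sketch of the product-form update of Step 3: given the previous
% basis inverse Binv, the pivot column h = Binv * A(:, l) and pivot row r,
% build E^{-1} from relation (1) and update both Binv and dB.
function [Binv, dB] = pivotUpdate(Binv, dB, h, r)
    m = size(Binv, 1);
    Einv = speye(m);            % identity except for its r-th column
    Einv(:, r) = -h / h(r);     % off-diagonal entries -h_i / h_r
    Einv(r, r) = 1 / h(r);      % diagonal entry 1 / h_r
    Binv = Einv * Binv;         % new basis inverse, product form
    dB   = Einv * dB;           % updated direction d_B = E^{-1} d_B
end
```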

3. The problem of Phase I

The problem of Phase I is constructed by the following procedure. First, a variable \(x_{n+1} \geq 0\) is added to the initial problem (LP.1). The coefficients of \(x_{n+1}\) are given by the relation \(g = -Be\), where \(e \in \mathbb{R}^m\) is a column vector of ones. Hence, the LP which is solved in Phase I has the form

\[
\begin{aligned}
\min\ & x_{n+1} \\
\text{s.t.}\ & Ax + g x_{n+1} = b \\
& x,\ x_{n+1} \geq 0
\end{aligned}
\tag{LP.2}
\]

A pivot must now be performed in order to insert the variable \(x_{n+1}\) into the basis. The leaving variable is selected by

\[
x_k = x_{B[r]} = \min\{ x_{B[i]} : i = 1, 2, \ldots, m \}.
\]

The new partition is \(B[r] = n + 1\), and the index \(k\) becomes non-basic. It is obvious now that the corresponding basic solution is feasible, since \(x_{B[r]} = -\overline{b}_r > 0\), \(x_{B[i]} = \overline{b}_i - \overline{b}_r \geq 0\) for \(i \neq r\), and \(x_j = 0\) for \(j \in N\).

4. EPSA's versions

The EPSA algorithm requires an initial primal feasible solution to start with. In [7], Paparrizos et al. presented a big-M method for solving general linear programming problems with EPSA. In this section, three versions of EPSA are presented using the well-known two-phase method. As mentioned before, EPSA also visits infeasible vertices, and as a result, when the variable \(x_{n+1}\) leaves the basis this does not necessarily mean that the current partition is also feasible. Therefore, we have to define how EPSA exits Phase I. We implemented EPSA using three different versions, all related to the strategy used to exit Phase I. These versions are:

EPSA1 (Hybrid EPSA). This is a hybrid algorithm: in Phase I we apply PSA to solve problem (LP.2), and only when a primal feasible partition is found do we apply EPSA to (LP.1) in Phase II.

EPSA2. This version uses EPSA in both phases and moves to Phase II when problem (LP.2) is solved to optimality. EPSA2 is actually the worst case of EPSA3.

EPSA3. This version uses EPSA in both phases. Specifically, EPSA is applied to the problem (LP.2) of Phase I. EPSA moves to Phase II if (i) the variable \(x_{n+1}\) leaves the basis and at the same time the direction \(d_B\) crosses the feasible region, or (ii) the direction \(d_B\) does not cross the feasible region after \(x_{n+1}\) leaves the basis. In the latter case, EPSA must solve (LP.2) to optimality in order to obtain a feasible solution for the initial problem (LP.1). EPSA checks whether the current direction \(d_B\) crosses the feasible region using the relations

\[
\beta = \max\left\{ \frac{x_{B(i)}}{-d_{B(i)}} : x_{B(i)} < 0 \right\}, \qquad
\alpha = \min\left\{ \frac{x_{B(i)}}{-d_{B(i)}} : d_{B(i)} < 0 \right\},
\]

where \(1 \leq i \leq m\); the direction crosses the feasible region when \(\beta \leq \alpha\). A sketch of this test in MATLAB is given at the end of this section.

The last two versions (EPSA2 and EPSA3) are shown in Figure 1.
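The following MATLAB fragment is our own hedged reading of the EPSA3 crossing test; the guard for infeasible components whose direction entry is non-positive is our assumption (the paper states only the two ratios).

```matlab
% Hedged sketch: does the ray x_B + t*d_B (t > 0) meet the feasible region?
infeas = xB < 0;                              % currently infeasible components
if any(infeas & dB <= 0)
    crosses = false;                          % some infeasibility never recovers
else
    beta  = max(xB(infeas) ./ -dB(infeas));   % earliest t repairing all x_B(i) < 0
    alpha = min(xB(dB < 0) ./ -dB(dB < 0));   % latest t keeping feasible rows >= 0
    if isempty(alpha), alpha = Inf; end
    crosses = isempty(beta) || beta <= alpha; % crossing condition: beta <= alpha
end
```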

Figure 1. Exiting strategies in Phase I for EPSA.

5. Near scaling-invariant attribute

Scaling techniques are widely used in optimization solvers to adjust for and cure rounding errors. Roughly speaking, a matrix is said to be well scaled if the magnitudes of its non-zero elements are close to each other. Poorly scaled coefficient matrices of an LP may affect the number of iterations and the running time of a solver. LPs with well-scaled matrices and vectors reduce the computational effort dramatically. It is known that PSA using the largest coefficient pivoting rule is not scale invariant.[26] In order to show that EPSA is almost scaling invariant, we use the well-known equilibration scaling technique. In this method, all elements of the coefficient matrix take values in \([-1, 1]\). This scaling method is heuristic in the sense that it is not guaranteed to improve the computational performance on an LP. Also, this method can generate small round-off errors. A short description of the equilibration scaling technique is given below.

Equilibration scaling technique

Step 1 (Search by columns). For each column of the coefficient matrix \(A\), find the largest entry in absolute value using the relation

\[
\mathrm{colmax}(j) = \max\{ |a_{ij}| : i = 1, 2, \ldots, m \}, \qquad j = 1, 2, \ldots, n,
\]

and multiply the column \(A_{.j}\) and the element \(c_j\) by the number \(1/\mathrm{colmax}(j)\), \(j = 1, 2, \ldots, n\). Consequently, all the elements of the column \(A_{.j}\) will take values in \([-1, 1]\).

Step 2 (Search by rows). For each row of the coefficient matrix \(A\) which does not include 1 as its largest entry in absolute value, find the largest entry in absolute value using the relation

\[
\mathrm{rowmax}(i) = \max\{ |a_{ij}| : j = 1, 2, \ldots, n \}, \qquad i = 1, 2, \ldots, m,
\]

and multiply the row \(A_{i.}\) and the element \(b_i\) by the number \(1/\mathrm{rowmax}(i)\), \(i = 1, 2, \ldots, m\). Consequently, all the elements of the row \(A_{i.}\) will take values in \([-1, 1]\).
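A compact MATLAB rendering of these two steps follows; this is our sketch, not the authors' implementation, and the zero-column/zero-row guards are our own additions for robustness.

```matlab
% Hedged sketch of the equilibration scaling technique for data (A, b, c).
function [A, b, c] = equilibrate(A, b, c)
    [m, n] = size(A);
    colmax = full(max(abs(A), [], 1));          % Step 1: largest |a_ij| per column
    colmax(colmax == 0) = 1;                    % guard: skip empty columns
    A = A * spdiags(1 ./ colmax(:), 0, n, n);   % scale each column into [-1, 1]
    c = c(:) ./ colmax(:);                      % keep the objective consistent
    rowmax = full(max(abs(A), [], 2));          % Step 2: largest |a_ij| per row
    rowmax(rowmax == 0 | rowmax == 1) = 1;      % only rows whose largest entry is not 1
    A = spdiags(1 ./ rowmax, 0, m, m) * A;      % scale those rows into [-1, 1]
    b = b(:) ./ rowmax;                         % keep right-hand sides consistent
end
```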

Consider now the problem (LP.1) and

\[
\begin{aligned}
\min\ & \tilde{c}^T \tilde{x} \\
\text{s.t.}\ & \tilde{A}\tilde{x} = \tilde{b} \\
& \tilde{x} \geq 0
\end{aligned}
\tag{SLP.1}
\]

where \(\tilde{c}, \tilde{x} \in \mathbb{R}^n\), \(\tilde{b} \in \mathbb{R}^m\), \(\tilde{A} \in \mathbb{R}^{m \times n}\); (LP.1) is the original problem, without applying the equilibration scaling technique, and (SLP.1) is the scaled one. In both problems, EPSA starts with the same basic partition \((B, N)\). Recall that every column \(A_{.j}\) of the coefficient matrix \(A\) and the corresponding \(c_j\) are multiplied by the numbers \(1/\mathrm{colmax}(j)\), \(j = 1, 2, \ldots, n\), and some rows \(A_{i.}\) of \(A\) and the corresponding \(b_i\) are multiplied by the numbers \(1/\mathrm{rowmax}(i)\), \(i = 1, 2, \ldots, m\).

PSA and EPSA use a similar minimum ratio test for the choice of the leaving variable. PSA uses the minimum ratio test

\[
\mathrm{mrt}_{PSA} = \min\left\{ \frac{x_{B[i]}}{h_{il}} : h_{il} > 0,\ i = 1, 2, \ldots, m \right\}
\]

whereas EPSA uses

\[
\mathrm{mrt}_{EPSA} = \min\left\{ \frac{x_{B[i]}}{-d_{B[i]}} : d_{B[i]} < 0,\ i = 1, 2, \ldots, m \right\}.
\]

The only difference is in the denominator of these minimum ratio tests. In EPSA, all the eligible non-basic variables \(j \in P\) contribute to the denominator, whereas in PSA only the entering variable \(x_l\) does (\(h_l = B^{-1} A_{.l}\)). This fact absorbs the changes in the magnitudes of the non-zero elements of \(A\) and does not materially affect the choice of the leaving variable. On the other hand, PSA and EPSA use a different rule for the choice of the entering variable. PSA uses the largest coefficient pivoting rule, \(\min\{ s_j : s_j < 0,\ j \in N \}\), for the selection of the entering variable. At Step 2 of EPSA, the choice of the entering variable is computed using the minimum ratio tests (\(\theta_1\) and \(\theta_2\)). In EPSA, the non-basic variables \(j \in P\) contribute to the denominator of \(\theta_1\) and the remaining ones (\(j \in Q\)) contribute to the denominator of \(\theta_2\). Again, this fact absorbs the changes in the magnitudes of the non-zero elements of \(A\) and does not materially affect the choice of the entering variable. For these reasons, EPSA is a nearly scale-invariant algorithm.

An illustrative example

Let us apply EPSA to the following LP:

\[
\begin{aligned}
\min\ & x_1 - x_2 + 3x_3 \\
\text{s.t.}\ & x_1 + 2x_2 - x_3 \leq -6 \\
& x_1 + x_2 - 3x_3 \leq -9 \\
& x_1 + 4x_2 - x_3 \leq -3 \\
& x_i \geq 0,\ i \in \{1, 2, 3\}
\end{aligned}
\tag{LP.3}
\]

Applying the equilibration scaling technique to (LP.3), we obtain the following equivalent LP:

\[
\begin{aligned}
\min\ & x_1 - 0.25x_2 + x_3 \\
\text{s.t.}\ & x_1 + 0.5x_2 - \tfrac{1}{3}x_3 \leq -6 \\
& x_1 + 0.25x_2 - x_3 \leq -9 \\
& x_1 + x_2 - \tfrac{1}{3}x_3 \leq -3 \\
& x_i \geq 0,\ i \in \{1, 2, 3\}
\end{aligned}
\tag{LP.4}
\]

First we introduce the slack variables \(x_4, x_5\) and \(x_6\). In matrix notation, problem (LP.4) is written

\[
\min\ c^T x \quad \text{s.t.}\ \ Ax = b,\ \ x \geq 0
\]

where \(c, x \in \mathbb{R}^6\), \(b \in \mathbb{R}^3\), \(A \in \mathbb{R}^{3 \times 6}\) and

\[
c = \begin{bmatrix} 1 & -0.25 & 1 & 0 & 0 & 0 \end{bmatrix}^T, \quad
A = \begin{bmatrix} 1 & 0.5 & -\tfrac{1}{3} & 1 & 0 & 0 \\ 1 & 0.25 & -1 & 0 & 1 & 0 \\ 1 & 1 & -\tfrac{1}{3} & 0 & 0 & 1 \end{bmatrix}, \quad
b = \begin{bmatrix} -6 \\ -9 \\ -3 \end{bmatrix}.
\]

At this point, we determine the basic partition \((B, N)\), where \(B = [4\ 5\ 6]\) and \(N = [1\ 2\ 3]\). It is easily seen that the current partition is not feasible because \((x_B)^T = (B^{-1} b)^T = [-6, -9, -3] < 0\). Thus, we introduce the variable \(x_{n+1} = x_7\) and apply the Phase I method as briefly described in Section 3. The basic partition \((B, N)\), where \(B = [7\ 5\ 6]\) and \(N = [1\ 2\ 3\ 4]\), is feasible. We will now show that if we apply EPSA, as described in Section 2, to the problems (LP.3) and (LP.4), we obtain the same pair of leaving/entering variables.

Applying EPSA to (LP.3) without scaling

The basic solution \(x_B\) and the dual slack variables \(s_N\) are computed for this partition. Since there is only one negative element in \(s_N\), the sets \(P, Q\) are

\[
P = [3], \qquad Q = [1\ 2\ 4]
\]

and their linked values corresponding to the indices of the sets \(P, Q\) are \(s_P = [-1]\) and \(s_Q = [1\ \ 2\ \ 1]\). The basis inverse \(B^{-1}\) and the direction \(d_B\) are

\[
d_B = \begin{bmatrix} -1 \\ 2 \\ 0 \end{bmatrix}, \qquad
B^{-1} = \begin{bmatrix} -1 & 0 & 0 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}.
\]

The selection of the leaving variable using the minimum ratio test, together with the quantities needed to compute the theta ratios, gives

\[
\alpha = 6,\quad r = 1,\quad k = 7,\quad H_{rP} = [1],\quad H_{rQ} = [-1\ \ -2\ \ -1],\quad \theta_1 = 1,\quad \theta_2 = 1,\quad l = 3.
\]

Hence, \(x_k = x_7\) is the leaving variable and \(x_l = x_3\) is the entering one.

Applying EPSA to (LP.4) with scaling

Observe now that the basic solution remains intact. The index sets \(P, Q\) and the sets of their corresponding values now are

\[
P = [3], \qquad Q = [1\ 2\ 4], \qquad s_P = [-\tfrac{1}{3}], \qquad s_Q = [1\ \ 0.5\ \ 1].
\]

Observe again that the index sets \(P, Q\) do not change. In the same manner, the remaining quantities are

\[
d_B = \begin{bmatrix} -\tfrac{1}{3} \\ \tfrac{2}{3} \\ 0 \end{bmatrix}, \quad
B^{-1} = \begin{bmatrix} -1 & 0 & 0 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}, \quad
\alpha = 18,\ \ r = 1,\ \ k = 7,\ \ H_{rP} = [\tfrac{1}{3}],\ \ H_{rQ} = [-1\ \ -0.5\ \ -1],\ \ \theta_1 = 1,\ \ \theta_2 = 1,\ \ l = 3.
\]

As we can see, EPSA computes the same pair of leaving/entering variables, \((k, l) = (7, 3)\), in both cases (LP.3 and LP.4).

6. Computational results

Computational studies are useful tools for measuring the practical effectiveness of algorithms. These studies provide a good criterion not only for complexity but also for stability. In this section, we present implementation details and a comparative study for the following algorithms: (i) PSA (our implementation), (ii) EPSA1, (iii) EPSA2, (iv) EPSA3, (v) the Simplex algorithm (MATLAB's LINPROG function) and (vi) the interior point method (MATLAB's LINPROG function). The LINPROG [27] function is included in MATLAB's Optimization Toolbox. We execute LINPROG with the Large-Scale option turned off (Simplex algorithm) and turned on (interior point method). IPMs are considered to be the state-of-the-art methods for solving LPs at the moment.

Table 1. Description of the computing environment.
CPU: Intel(R) Xeon(TM), 3.00 GHz (2 processors). RAM size: 2048 MB. L1 cache size: 2 x 16 KB. Operating system: Microsoft Windows XP Professional. MATLAB version: R14 SP1.

Table 2. Parameters: scaling = no. For each algorithm (PSA, EPSA1, EPSA2, EPSA3, MATLAB (PSA), MATLAB (IPM)) and each problem size, the table reports the average number of iterations in Phase I (P-I niter) and Phase II (P-II niter), the average total number of iterations (niter), and the mean CPU time (cpu).

Table 3. Parameters: scaling = yes. Same layout as Table 2: average Phase I, Phase II and total iterations, and mean CPU time, for each algorithm and problem size.

Table 4. Parameters: scaling = yes. For each problem size n and each algorithm (PSA, EPSA1, EPSA2, EPSA3, MATLAB (PSA), MATLAB (IPM)), the table reports the difference between the results of Tables 2 and 3 as a percentage, in iterations (niter (%)) and CPU time (cpu (%)); the last row gives the mean of each column.

Even though these algorithms (IPMs) use a very different methodology, the choice of the IPM of the MATLAB Optimization Toolbox is made only as a reference point. All the competitive algorithms were implemented in the MathWorks MATLAB environment, version R14. The main reasons for this choice were the inherent capability of MATLAB for matrix operations and its support for high-level sparse matrix operations. The same technology (m-file functions) was used in the making of all these algorithms, alongside the ones already built into MATLAB. Our tests ran in the computing environment described in Table 1.

Four programming techniques were used to improve the performance of memory-bound code in MATLAB: (i) pre-allocate arrays before accessing them within loops, (ii) store and access data in columns, (iii) avoid creating unnecessary variables, and (iv) vectorize loops. PSA (our implementation) was implemented in the same framework as EPSA's variants, using the same initial basis and then solving the same problem (LP.2) in Phase I. The reported CPU times were measured in seconds with MATLAB's built-in function cputime. All runs were made as batch jobs. To handle numerical errors, several tolerances are used. The feasibility and precision tolerances were set to 1.0E-08. In order to guarantee accuracy, we periodically recompute the inverse of the basis from scratch. The default value of the period is 80. In all test sets, the compared algorithms converge to the same solution, while the initial basis consists only of the slack variables.

The sparse LPs that have been solved are of the following form:

\[
\min\ c^T x \quad \text{s.t.}\ \ Ax \mathrel{\mathcal{R}} b,\ \ x \geq 0 \tag{LP.5}
\]

where \(c, x \in \mathbb{R}^n\), \(b \in \mathbb{R}^m\), \(A \in \mathbb{R}^{m \times n}\) and \(\mathcal{R} \in \{\leq, \geq\}^m\); all the constraints are inequalities. More specifically, the ranges of the data were as follows: \(c \in [-300, 900]\), \(A \in [10, 400]\), \(b \in [10, 100]\). In our computational study, we used LPs with 20% density. We consider 20% to be a robust density, neither too sparse nor too dense, for measuring the performance of the proposed versions of EPSA (a generator sketch is given after Table 5). In all the experiments, the coefficient matrix size was varied over the range of problem sizes reported in Tables 2 and 3, and the instances were generated using a uniform random generator.

Table 5. Parameters: scaling = yes. For each problem size n and each algorithm, the table reports the difference between the results of Tables 2 and 3 as a ratio, in iterations (niter) and CPU time (cpu); the last row gives the mean of each column.
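The following MATLAB fragment is a hedged sketch of such a generator: our own reading of the stated ranges and density, with `randomLP` and its exact distributions being our assumptions rather than the authors' code.

```matlab
% Hedged sketch: one random sparse instance of (LP.5) with the stated ranges.
function [A, b, c, rel] = randomLP(m, n, density)
    A = sprand(m, n, density);               % sparsity pattern, values in (0, 1)
    [i, j, v] = find(A);
    A = sparse(i, j, 10 + 390 * v, m, n);    % non-zeros drawn from [10, 400]
    b = 10 + 90 * rand(m, 1);                % right-hand sides in [10, 100]
    c = -300 + 1200 * rand(n, 1);            % cost coefficients in [-300, 900]
    rel = randi(2, m, 1);                    % constraint directions: 1 = <=, 2 = >=
end
```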

Figure 2. Total number of iterations for all algorithms (log format): mean number of iterations (log) versus problem size, with (b) and without (a) scaling, for MATLAB (PSA), MATLAB (IPM), PSA, EPSA1, EPSA2 and EPSA3.

Figure 3. Total CPU time for all algorithms (log format): mean CPU time in seconds (log) versus problem size, with (b) and without (a) scaling, for MATLAB (PSA), MATLAB (IPM), PSA, EPSA1, EPSA2 and EPSA3.

Each of these classes contains 10 random sparse LPs, without any special block structure. On the randomly generated sparse LPs, the algorithms (PSA, EPSA1, EPSA2, EPSA3) are initialized with the slack basis \(B = I_m\) and \(B = \{n+1, n+2, \ldots, n+m\}\). Because all the constraints of the randomly generated LPs are inequalities, the total number of variables after the insertion of the slack variables is equal to \(n + m\).

For each size we bring together information on all the competitive algorithms: PSA, EPSA1, EPSA2, EPSA3, MATLAB (PSA) and MATLAB (IPM). The rows of Tables 2 and 3 contain the initial problem size, the average number of iterations in Phase I (P-I niter), in Phase II (P-II niter) and in total (niter), and the mean CPU time for each algorithm. Table 2 differs from Table 3 in the sense that all instances of Table 2 were solved without applying the equilibration scaling technique. From Tables 2 and 3 we can observe the following: (i) all the versions of EPSA (EPSA1, EPSA2 and EPSA3) have better computational performance than PSA (our implementation), MATLAB (PSA) and MATLAB (IPM); (ii) the scaling technique has a major effect only on the computational behaviour of PSA and MATLAB (PSA); and (iii) our implementation of the revised Simplex algorithm (PSA) is much faster than the corresponding Simplex algorithm implemented in MATLAB.

In order to show more clearly the superiority of EPSA's versions and their scaling-invariant attribute, we provide some tables and plots showing ratios relative to the above tables. In Table 4, we present the difference between the results of Tables 2 and 3 as a percentage, while in Table 5 we give the same difference as a ratio. The last row of each table shows the mean value of each column. As we can see, all the versions of EPSA are nearly scaling invariant. For example, in one of the tested dimensions, EPSA3 without a scaling technique needs 10.84% more iterations and 15.20% more CPU time than with a scaling technique. For the same dimension, PSA needs % more iterations and % more CPU time, MATLAB (PSA) needs % more iterations and % more CPU time, and MATLAB (IPM) needs 9.22% more iterations and % more CPU time. On average (over all randomly generated sparse instances), PSA needs 2.00 times more iterations and 2.20 times more CPU time, MATLAB (PSA) needs 2.06 times more iterations and 2.63 times more CPU time, MATLAB (IPM) needs 0.15 times more iterations and 0.90 times more CPU time, while EPSA3 needs 0.11 times more iterations and 0.17 times more CPU time. Figures 2 and 3 show the comparison of all competitive algorithms in terms of number of iterations and CPU time, respectively.

7. Conclusions

In this paper, we have presented three attractive versions of EPSA for solving sparse LPs. The computational results in Section 6 indicate the superiority of the proposed versions of EPSA over MATLAB's implementation of the Simplex algorithm. In all randomly generated scaled test problems, EPSA's versions find an optimal solution in a number of iterations that varies between 0.9n and 1.05n. In our opinion, this is a good practical performance. Finally, from the computational results, we can observe that all versions are almost scaling invariant.

Note

1. According to the latest comparison conducted in June 2013, seven serial LP solvers were tested, among which LINPROG is really efficient.

References

[1] Klee V, Minty GJ. How good is the simplex algorithm? In: Shisha O, editor. Inequalities III. New York (NY): Academic Press; 1972.
[2] Paparrizos K. An exterior point simplex algorithm for general linear problems. Ann. Oper. Res. 1993;32.
[3] Paparrizos K. Pivoting rules directing the simplex method through all feasible vertices of Klee-Minty examples. Oper. Res. 1989;26.
[4] Roos C. An exponential example for Terlaky's pivoting rule for the criss-cross simplex method. Math. Prog. 1990;46.
[5] Borgwardt KH. The simplex method: a probabilistic analysis. Berlin: Springer-Verlag; 1987.
[6] Paparrizos K. An infeasible exterior point simplex algorithm for assignment problems. Math. Prog. 1991;51.
[7] Paparrizos K, Samaras N, Stephanides G. An efficient simplex type algorithm for sparse and dense linear programs. Eur. J. Oper. Res. 2003;148.
[8] Terlaky T, Zhang S. Pivot rules for linear programming: a survey on recent theoretical developments. Ann. Oper. Res. 1993;46.
[9] Terlaky T. A convergent criss-cross method. Optim. J. Math. Prog. Oper. Res. 1985;16.
[10] Maros I. Computational techniques of the simplex method. Vol. 61, International series in operations research and management science. Boston (MA): Kluwer Academic; 2003.
[11] Pan P-Q. Practical finite pivoting rules for the simplex method. OR Spektrum. 1990;12.
[12] Pan P-Q. A largest-distance pivot rule for the simplex algorithm. Eur. J. Oper. Res. 2008;187.
[13] Pan P-Q. Computational results with nested pricing in the simplex algorithm. Available from:
[14] Pan P-Q. Efficient nested pricing in the simplex algorithm. Oper. Res. Lett. 2008;36.
[15] Arsham H, Cimperman G, Damij N, Damij T, Grad J. A computer implementation of the Push-and-Pull algorithm and its computational comparison with LP simplex method. Appl. Math. Comp. 2005;170.
[16] Arsham H. A hybrid gradient and feasible direction pivotal solution algorithm for general linear programs. Appl. Math. Comp. 2007;188.
[17] Yeh W-C, Corley HW. A simple direct cosine simplex algorithm. Appl. Math. Comput. 2009;214.
[18] Pan P-Q. An affine-scaling algorithm for linear programming. Optim. J. Math. Prog. Oper. Res. 2013;62.
[19] Sherali HD, Özdaryal B, Adams WP, Attia N. On using exterior penalty approaches for solving linear programming problems. Comput. Oper. Res. 2001;28.
[20] Li W, Guerrero-García P, Santos-Palomo A. A basis-deficiency-allowing primal phase-I algorithm using the most-obtuse-angle column rule. Comput. Math. Appl. 2006;51.
[21] Jurik T. A nearest point approach algorithm for a class of linear programming problems. J. Appl. Math. Stat. Inform. (JAMSI). 2008;4.
[22] Malakooti B, Al-Najjar C. The complex interior-boundary method for linear and nonlinear programming with linear constraints. Appl. Math. Comp. 2010;216.
[23] Al-Najjar C, Malakooti B. Hybrid-LP: finding advanced starting points for simplex, and pivoting LP methods. Comput. Oper. Res. 2011;38.

[24] Mehrotra S. On the implementation of a primal-dual interior point method. SIAM J. Optim. 1992;2:575-601.
[25] Elble JM, Sahinidis NV. Scaling linear optimization problems prior to application of the simplex method. Comput. Optim. Appl. 2012;52.
[26] Vanderbei RJ. Linear programming: foundations and extensions. New York (NY): Springer.
[27] Zhang Y. User's guide to LIPSOL. Optim. Meth. Softw. 1999;11-12.


Read: H&L chapters 1-6 Viterbi School of Engineering Daniel J. Epstein Department of Industrial and Systems Engineering ISE 330: Introduction to Operations Research Fall 2006 (Oct 16): Midterm Review http://www-scf.usc.edu/~ise330

More information

Linear Programming Motivation: The Diet Problem

Linear Programming Motivation: The Diet Problem Agenda We ve done Greedy Method Divide and Conquer Dynamic Programming Network Flows & Applications NP-completeness Now Linear Programming and the Simplex Method Hung Q. Ngo (SUNY at Buffalo) CSE 531 1

More information

Outline. CS38 Introduction to Algorithms. Linear programming 5/21/2014. Linear programming. Lecture 15 May 20, 2014

Outline. CS38 Introduction to Algorithms. Linear programming 5/21/2014. Linear programming. Lecture 15 May 20, 2014 5/2/24 Outline CS38 Introduction to Algorithms Lecture 5 May 2, 24 Linear programming simplex algorithm LP duality ellipsoid algorithm * slides from Kevin Wayne May 2, 24 CS38 Lecture 5 May 2, 24 CS38

More information

The Simplex Algorithm with a New. Primal and Dual Pivot Rule. Hsin-Der CHEN 3, Panos M. PARDALOS 3 and Michael A. SAUNDERS y. June 14, 1993.

The Simplex Algorithm with a New. Primal and Dual Pivot Rule. Hsin-Der CHEN 3, Panos M. PARDALOS 3 and Michael A. SAUNDERS y. June 14, 1993. The Simplex Algorithm with a New rimal and Dual ivot Rule Hsin-Der CHEN 3, anos M. ARDALOS 3 and Michael A. SAUNDERS y June 14, 1993 Abstract We present a simplex-type algorithm for linear programming

More information

Outline. Column Generation: Cutting Stock A very applied method. Introduction to Column Generation. Given an LP problem

Outline. Column Generation: Cutting Stock A very applied method. Introduction to Column Generation. Given an LP problem Column Generation: Cutting Stock A very applied method thst@man.dtu.dk Outline History The Simplex algorithm (re-visited) Column Generation as an extension of the Simplex algorithm A simple example! DTU-Management

More information

Column Generation: Cutting Stock

Column Generation: Cutting Stock Column Generation: Cutting Stock A very applied method thst@man.dtu.dk DTU-Management Technical University of Denmark 1 Outline History The Simplex algorithm (re-visited) Column Generation as an extension

More information

Lecture Notes 2: The Simplex Algorithm

Lecture Notes 2: The Simplex Algorithm Algorithmic Methods 25/10/2010 Lecture Notes 2: The Simplex Algorithm Professor: Yossi Azar Scribe:Kiril Solovey 1 Introduction In this lecture we will present the Simplex algorithm, finish some unresolved

More information

An iteration of the simplex method (a pivot )

An iteration of the simplex method (a pivot ) Recap, and outline of Lecture 13 Previously Developed and justified all the steps in a typical iteration ( pivot ) of the Simplex Method (see next page). Today Simplex Method Initialization Start with

More information

Programming, numerics and optimization

Programming, numerics and optimization Programming, numerics and optimization Lecture C-4: Constrained optimization Łukasz Jankowski ljank@ippt.pan.pl Institute of Fundamental Technological Research Room 4.32, Phone +22.8261281 ext. 428 June

More information

R n a T i x = b i} is a Hyperplane.

R n a T i x = b i} is a Hyperplane. Geometry of LPs Consider the following LP : min {c T x a T i x b i The feasible region is i =1,...,m}. X := {x R n a T i x b i i =1,...,m} = m i=1 {x Rn a T i x b i} }{{} X i The set X i is a Half-space.

More information

Linear Optimization. Andongwisye John. November 17, Linkoping University. Andongwisye John (Linkoping University) November 17, / 25

Linear Optimization. Andongwisye John. November 17, Linkoping University. Andongwisye John (Linkoping University) November 17, / 25 Linear Optimization Andongwisye John Linkoping University November 17, 2016 Andongwisye John (Linkoping University) November 17, 2016 1 / 25 Overview 1 Egdes, One-Dimensional Faces, Adjacency of Extreme

More information

Linear Programming Problems

Linear Programming Problems Linear Programming Problems Two common formulations of linear programming (LP) problems are: min Subject to: 1,,, 1,2,,;, max Subject to: 1,,, 1,2,,;, Linear Programming Problems The standard LP problem

More information

Comparison of Interior Point Filter Line Search Strategies for Constrained Optimization by Performance Profiles

Comparison of Interior Point Filter Line Search Strategies for Constrained Optimization by Performance Profiles INTERNATIONAL JOURNAL OF MATHEMATICS MODELS AND METHODS IN APPLIED SCIENCES Comparison of Interior Point Filter Line Search Strategies for Constrained Optimization by Performance Profiles M. Fernanda P.

More information

Linear Programming. Linear programming provides methods for allocating limited resources among competing activities in an optimal way.

Linear Programming. Linear programming provides methods for allocating limited resources among competing activities in an optimal way. University of Southern California Viterbi School of Engineering Daniel J. Epstein Department of Industrial and Systems Engineering ISE 330: Introduction to Operations Research - Deterministic Models Fall

More information

Lecture 16 October 23, 2014

Lecture 16 October 23, 2014 CS 224: Advanced Algorithms Fall 2014 Prof. Jelani Nelson Lecture 16 October 23, 2014 Scribe: Colin Lu 1 Overview In the last lecture we explored the simplex algorithm for solving linear programs. While

More information

3. The Simplex algorithmn The Simplex algorithmn 3.1 Forms of linear programs

3. The Simplex algorithmn The Simplex algorithmn 3.1 Forms of linear programs 11 3.1 Forms of linear programs... 12 3.2 Basic feasible solutions... 13 3.3 The geometry of linear programs... 14 3.4 Local search among basic feasible solutions... 15 3.5 Organization in tableaus...

More information

The simplex method and the diameter of a 0-1 polytope

The simplex method and the diameter of a 0-1 polytope The simplex method and the diameter of a 0-1 polytope Tomonari Kitahara and Shinji Mizuno May 2012 Abstract We will derive two main results related to the primal simplex method for an LP on a 0-1 polytope.

More information

A Feasible Region Contraction Algorithm (Frca) for Solving Linear Programming Problems

A Feasible Region Contraction Algorithm (Frca) for Solving Linear Programming Problems A Feasible Region Contraction Algorithm (Frca) for Solving Linear Programming Problems E. O. Effanga Department of Mathematics/Statistics and Comp. Science University of Calabar P.M.B. 1115, Calabar, Cross

More information

LP-Modelling. dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven. January 30, 2008

LP-Modelling. dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven. January 30, 2008 LP-Modelling dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven January 30, 2008 1 Linear and Integer Programming After a brief check with the backgrounds of the participants it seems that the following

More information

Submodularity Reading Group. Matroid Polytopes, Polymatroid. M. Pawan Kumar

Submodularity Reading Group. Matroid Polytopes, Polymatroid. M. Pawan Kumar Submodularity Reading Group Matroid Polytopes, Polymatroid M. Pawan Kumar http://www.robots.ox.ac.uk/~oval/ Outline Linear Programming Matroid Polytopes Polymatroid Polyhedron Ax b A : m x n matrix b:

More information

IDENTIFICATION AND ELIMINATION OF INTERIOR POINTS FOR THE MINIMUM ENCLOSING BALL PROBLEM

IDENTIFICATION AND ELIMINATION OF INTERIOR POINTS FOR THE MINIMUM ENCLOSING BALL PROBLEM IDENTIFICATION AND ELIMINATION OF INTERIOR POINTS FOR THE MINIMUM ENCLOSING BALL PROBLEM S. DAMLA AHIPAŞAOĞLU AND E. ALPER Yıldırım Abstract. Given A := {a 1,..., a m } R n, we consider the problem of

More information

Linear programming and duality theory

Linear programming and duality theory Linear programming and duality theory Complements of Operations Research Giovanni Righini Linear Programming (LP) A linear program is defined by linear constraints, a linear objective function. Its variables

More information

MATH 310 : Degeneracy and Geometry in the Simplex Method

MATH 310 : Degeneracy and Geometry in the Simplex Method MATH 310 : Degeneracy and Geometry in the Simplex Method Fayadhoi Ibrahima December 11, 2013 1 Introduction This project is exploring a bit deeper the study of the simplex method introduced in 1947 by

More information

Integer Programming Theory

Integer Programming Theory Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x

More information

Mathematical Programming and Research Methods (Part II)

Mathematical Programming and Research Methods (Part II) Mathematical Programming and Research Methods (Part II) 4. Convexity and Optimization Massimiliano Pontil (based on previous lecture by Andreas Argyriou) 1 Today s Plan Convex sets and functions Types

More information

An Extension of the Multicut L-Shaped Method. INEN Large-Scale Stochastic Optimization Semester project. Svyatoslav Trukhanov

An Extension of the Multicut L-Shaped Method. INEN Large-Scale Stochastic Optimization Semester project. Svyatoslav Trukhanov An Extension of the Multicut L-Shaped Method INEN 698 - Large-Scale Stochastic Optimization Semester project Svyatoslav Trukhanov December 13, 2005 1 Contents 1 Introduction and Literature Review 3 2 Formal

More information

Computational issues in linear programming

Computational issues in linear programming Computational issues in linear programming Julian Hall School of Mathematics University of Edinburgh 15th May 2007 Computational issues in linear programming Overview Introduction to linear programming

More information

Parallel Auction Algorithm for Linear Assignment Problem

Parallel Auction Algorithm for Linear Assignment Problem Parallel Auction Algorithm for Linear Assignment Problem Xin Jin 1 Introduction The (linear) assignment problem is one of classic combinatorial optimization problems, first appearing in the studies on

More information

Programs. Introduction

Programs. Introduction 16 Interior Point I: Linear Programs Lab Objective: For decades after its invention, the Simplex algorithm was the only competitive method for linear programming. The past 30 years, however, have seen

More information

MATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS

MATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS MATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS GRADO EN A.D.E. GRADO EN ECONOMÍA GRADO EN F.Y.C. ACADEMIC YEAR 2011-12 INDEX UNIT 1.- AN INTRODUCCTION TO OPTIMIZATION 2 UNIT 2.- NONLINEAR PROGRAMMING

More information

Convex Optimization CMU-10725

Convex Optimization CMU-10725 Convex Optimization CMU-10725 Ellipsoid Methods Barnabás Póczos & Ryan Tibshirani Outline Linear programs Simplex algorithm Running time: Polynomial or Exponential? Cutting planes & Ellipsoid methods for

More information

Julian Hall School of Mathematics University of Edinburgh. June 15th Parallel matrix inversion for the revised simplex method - a study

Julian Hall School of Mathematics University of Edinburgh. June 15th Parallel matrix inversion for the revised simplex method - a study Parallel matrix inversion for the revised simplex method - A study Julian Hall School of Mathematics University of Edinburgh June 5th 006 Parallel matrix inversion for the revised simplex method - a study

More information