An Overview of Cutting Plane Methods for Semidefinite Programming

SF2827: Topics in Optimization

An Overview of Cutting Plane Methods for Semidefinite Programming

Supervisor: Anders Forsgren, Department of Mathematics, Royal Institute of Technology (KTH)
Author: Luca Furrer
May 2009

Abstract

Cutting plane methods are an alternative to interior point methods for solving semidefinite programming (SDP) problems. In this paper we present five different cutting plane algorithms for semidefinite programming. The methods rely on relaxations and on the semi-infinite representation of SDPs. The presented methods are the polyhedral cutting plane method, the polyhedral bundle method, the spectral bundle method, the block diagonal cutting plane method and the primal active set approach. We also present a generic cutting plane method which unifies the other methods. In the second part, the implementation of the polyhedral cutting plane method in MATLAB is presented, followed by some test examples.

Contents

1 Introduction
2 Semidefinite Programming
3 Semi-infinite Programming Formulation
4 Polyhedral Cutting Plane Algorithm
5 Block Diagonal Cutting Plane Scheme
6 Polyhedral Bundle Scheme
7 Spectral Bundle Method
8 Primal Active Set Approach
9 Generic Cutting Plane Method
10 Conclusion
11 Implementation
12 Ellipsoid Problem
13 Numerical Results and Test Problems

1 Introduction

This paper was written in the context of the course Topics in Optimization, given by Anders Forsgren at the Royal Institute of Technology (KTH) in Stockholm, during the author's exchange year at this school in 2008/2009. The course treated semidefinite programming, an interesting domain in optimization. There exists literature on this subject, e.g. the surveys of Laurent and Rendl [9] and Todd [14]. There exist efficient interior point methods to solve such programs, but they are limited in the problem sizes they can handle and they are not well suited to warm starts. However, there are also cutting plane methods available; Krishnan and Mitchell [8] present several of them. This paper presents different cutting plane methods for semidefinite programming as well as a generic algorithm covering the different cutting plane methods. There exist methods using a relaxation of the transformation to a semi-infinite program, methods using the idea of bundle methods, and one which mimics the simplex method for linear programming. The discussed methods are the polyhedral methods (the polyhedral cutting plane method and the polyhedral bundle method) and the non-polyhedral methods (the spectral bundle method, the block diagonal cutting plane method and the primal active set method). In the conclusion we compare the different methods. The paper also contains a description of the implementation of the polyhedral cutting plane method in MATLAB. This implementation was tested on some problems, in particular an ellipsoid optimization problem.

2 Semidefinite Programming

Let us introduce the notation first. If $A$ and $B$ are two symmetric matrices of size $n \times n$, i.e. $A, B \in S^n$, then $\bullet$ denotes the Frobenius inner product, so

$$A \bullet B = \mathrm{tr}(A^T B) = \sum_{i=1}^n \sum_{j=1}^n A_{ij} B_{ij}.$$

We also use the corresponding norm $\|A\|_F = \sqrt{\mathrm{tr}(A^T A)}$. To denote that $A$ is positive definite or positive semidefinite we write $A \succ 0$ and $A \succeq 0$ respectively. So let us consider the following SDP problem:

$$\begin{array}{ll} \text{minimize} & C \bullet X \\ \text{subject to} & \mathcal{A}(X) = b, \\ & X \succeq 0. \end{array} \qquad (SDP)$$

This problem has the dual

$$\begin{array}{ll} \text{maximize} & b^T y \\ \text{subject to} & \mathcal{A}^T y + S = C, \\ & S \succeq 0, \end{array} \qquad (SDD)$$

with $X, S, C \in S^n$ and $y, b \in \mathbb{R}^m$, where $m$ is the number of constraints. Here $C$ and $b$ are given by the problem, and $X$, $S$ and $y$ are the unknown variables. The notation $\mathcal{A}$ stands for a linear operator $\mathcal{A} : S^n \to \mathbb{R}^m$ with an adjoint $\mathcal{A}^T : \mathbb{R}^m \to S^n$. These two operators have the form

$$\mathcal{A}(X) = \begin{pmatrix} A_1 \bullet X \\ \vdots \\ A_m \bullet X \end{pmatrix} \qquad \text{and} \qquad \mathcal{A}^T y = \sum_{i=1}^m y_i A_i,$$
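As an illustration, the two operators are straightforward to realize in MATLAB when the constraint matrices $A_i$ are stored in a cell array. The following is a minimal sketch in the spirit of the helper opAt used by the implementation in Section 11 (the names opA and Acell are our own, and each function would live in its own file):

% Minimal sketch: the operators A(X) and A^T y for constraint matrices
% A_1, ..., A_m stored in a cell array Acell.
function v = opA(Acell, X)
    m = numel(Acell);
    v = zeros(m, 1);
    for i = 1:m
        v(i) = trace(Acell{i}' * X);   % A_i . X, the Frobenius inner product
    end
end

function M = opAt(Acell, y)
    M = zeros(size(Acell{1}));
    for i = 1:numel(Acell)
        M = M + y(i) * Acell{i};       % A^T y = sum_i y_i A_i
    end
end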

where $A_i \in S^n$, $i = 1, \ldots, m$, are given problem parameters. To apply primal-dual algorithms to a semidefinite programming problem, it is often required that strong duality holds, i.e. that the optimal objective value of (SDP) is equal to that of (SDD).

Theorem 1. If the set of solutions to (SDP) and the set of solutions to (SDD) such that $S \succ 0$ are non-empty, then the duality gap is zero, i.e. the optimal values of (SDP) and (SDD) are equal.

A proof of this theorem can be found in Todd [14, Chapter 4]. The following two theorems are direct consequences of Theorem 1.

Theorem 2. If the set of solutions to (SDP) such that $X \succ 0$ and the set of solutions to (SDD) are non-empty, then the duality gap is zero, i.e. the optimal values of (SDP) and (SDD) are equal.

Theorem 3. If (SDP) and (SDD) both have strictly feasible solutions, then strong duality holds.

To read more about semidefinite programming in general, there exists some literature, among others Laurent and Rendl [9] and Todd [14]. For more information on interior point methods, see among others the papers of Helmberg, Rendl, Vanderbei and Wolkowicz [3], Monteiro [11] and Zhang [16].

3 Semi-infinite Programming Formulation

Sometimes it is desirable to avoid the explicit positive semidefiniteness constraint in an (SDP) problem. We have that $X \succeq 0$ is equivalent to

$$d^T X d = dd^T \bullet X \geq 0 \quad \forall d \in \mathbb{R}^n.$$

Therefore it is possible to formulate (SDP) and (SDD) as semi-infinite programs:

$$\begin{array}{ll} \text{minimize} & C \bullet X \\ \text{subject to} & \mathcal{A}(X) = b, \\ & d^T X d \geq 0 \quad \forall d : \|d\|_2 = 1, \end{array} \qquad (SIP)$$

$$\begin{array}{ll} \text{maximize} & b^T y \\ \text{subject to} & \mathcal{A}^T y + S = C, \\ & d^T S d \geq 0 \quad \forall d : \|d\|_2 = 1, \end{array} \qquad (SID)$$

with $d \in \mathbb{R}^n$. This subject and the solution of such problems are discussed in Krishnan and Mitchell [7]. To solve such problems we present two theorems, both found in [7], where their proofs, omitted in this document, can also be found.

Theorem 4. If the optimal value of (SIP) is finite and the objective values of (SIP) and (SID) are the same, then (SID) has a finite discretization (SIR) with the same optimal value.

Hence it is possible to find a subset of vectors $d_i$, $i = 1, \ldots, m$, such that we obtain a linear program to solve:

$$\begin{array}{ll} \text{maximize} & b^T y \\ \text{subject to} & \mathcal{A}^T y + S = C, \\ & d_i^T S d_i \geq 0, \quad i = 1, \ldots, m. \end{array} \qquad (SIR)$$

The choice of vectors is not obvious, but there exists another theorem which reduces the number of vectors needed.

Theorem 5. If there exists a discretization of (SID) which has the same objective value as (SID), then there exists a discretization (SIR) with $m \leq k$, where $m$ is the size of the vector set and $k$ is the number of constraint matrices $A_i$.
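The semi-infinite characterization also shows how to test $X \succeq 0$ numerically: a negative smallest eigenvalue produces a violating direction $d$. A minimal MATLAB sketch (our own illustration, not taken from the cited papers):

% Sketch: testing X >= 0 via the semi-infinite characterization.
% If lambda_min(X) < 0, its eigenvector d certifies d' * X * d < 0.
[V, E]   = eig((X + X') / 2);   % symmetrize against round-off
[lam, j] = min(diag(E));
if lam < 0
    d = V(:, j);                % violating direction: d' * X * d = lam
end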

So it is possible to find a (SIR) which is a linear program with a limited number of constraints, but it is not obvious how to find the vector set. Some approaches are presented in the next chapters, which discuss several algorithms to solve semidefinite programming problems with a cutting plane approach.

4 Polyhedral Cutting Plane Algorithm

As a first algorithm we consider the polyhedral cutting plane algorithm. It was introduced to semidefinite programming by Krishnan and Mitchell [5] as well as by Goldfarb [1]; actually, Goldfarb considers conic programming, of which semidefinite programming is a special case. In order to use this method we add two assumptions to the basic semidefinite programming problem.

Assumption 1. $\mathcal{A}(X) = b$ implies $\mathrm{tr}(X) = a$ for some constant $a$.

Assumption 2. (SDP) and (SDD) both have strictly feasible solutions. This means that $\{X \in S^n : \mathcal{A}(X) = b, X \succ 0\}$ and $\{(y, S) \in \mathbb{R}^m \times S^n : \mathcal{A}^T y + S = C, S \succ 0\}$ are non-empty.

The first assumption allows us to transform (SDD) into an eigenvalue optimization problem; it is required for the bundle method to create linear constraints. The second assumption assures that (SDP) and (SDD) have optimal solutions and that the optimal values are equal; therefore the duality gap is zero. As already discussed in Section 3, it is possible to reformulate (SDD) as a semi-infinite program:

$$\begin{array}{ll} \text{maximize} & b^T y \\ \text{subject to} & dd^T \bullet (C - \mathcal{A}^T y) \geq 0 \quad \forall d. \end{array} \qquad (LDD)$$

We want to consider a relaxation of this problem: we take a finite set of vectors $\{d_i, i = 1, \ldots, k\}$ in place of all $d$. Hence we get the following problem:

$$\begin{array}{ll} \text{maximize} & b^T y \\ \text{subject to} & d_i d_i^T \bullet (C - \mathcal{A}^T y) \geq 0, \quad i = 1, \ldots, k. \end{array} \qquad (LDR)$$

A reason why we work with the relaxation of (SDD) instead of (SDP) is that this problem has $m$ variables, whereas (SDP) leads to $n(n+1)/2$ variables, and most of the time we have $m < n(n+1)/2$.

When we consider the constraints of (LDR), we find

$$d_i d_i^T \bullet \mathcal{A}^T y = d_i d_i^T \bullet \sum_{j=1}^m y_j A_j = \sum_{j=1}^m y_j \, d_i^T A_j d_i,$$

and we see that the constraints are equivalent to

$$\sum_{j=1}^m y_j \, d_i^T A_j d_i \leq d_i^T C d_i.$$

Therefore we find the dual of (LDR),

$$\begin{array}{ll} \text{minimize} & \sum_{i=1}^k (d_i^T C d_i)\, x_i \\ \text{subject to} & \sum_{i=1}^k (d_i^T A_j d_i)\, x_i = b_j, \quad j = 1, \ldots, m, \\ & x \geq 0, \end{array}$$

which can be rewritten as the following problem:

$$\begin{array}{ll} \text{minimize} & C \bullet \left( \sum_{i=1}^k x_i d_i d_i^T \right) \\ \text{subject to} & \mathcal{A}\left( \sum_{i=1}^k x_i d_i d_i^T \right) = b, \\ & x \geq 0. \end{array} \qquad (LPR)$$

The following lemma helps us to compare the solution of (LPR) to the solution of (SDP).

Lemma 1. A feasible solution $x$ to (LPR) gives a feasible solution $X$ to (SDP).

Proof. Let $X = \sum_{i=1}^k x_i d_i d_i^T$. From (LPR) we have that $X$ satisfies $\mathcal{A}(X) = b$, so it remains to prove that $X$ is positive semidefinite. For all $d$,

$$d^T X d = d^T \left( \sum_{i=1}^k x_i d_i d_i^T \right) d = \sum_{i=1}^k x_i (d_i^T d)^2 \geq 0,$$

since $x \geq 0$.

With the help of this lemma, we can deduce an upper bound for the optimal value of (SDP) from the optimal value of (LDR). So we get the following algorithm:

Algorithm 1.

1. (Initialization) Choose an initial set of constraints for (LDR) and the termination parameters $\epsilon_1, \epsilon_2 > 0$. Set the current lower and upper bounds to $LB = -\infty$ and $UB = +\infty$. Choose also $TOL$ as the tolerance with which (LDR) and (LPR) are solved, as well as $\mu$ to update the tolerance if wanted.

2. In the $k$-th iteration, obtain a solution $y^k$ to the problem (LDR) by using an interior point method with tolerance $TOL$. Update the upper bound $UB = \min\{UB, b^T y^k\}$.

3. Compute $\lambda = \lambda_{\min}(C - \mathcal{A}^T y^k)$ and a corresponding eigenvector $d$. Update $LB = \max\{LB, b^T y^k + \lambda a\}$, where $a$ comes from Assumption 1. If $UB - LB \leq \epsilon_1$ or $\lambda \geq -\epsilon_2$, go to step 5.

4. Add the constraint $dd^T \bullet \mathcal{A}^T y \leq dd^T \bullet C$ to (LDR). Set $k = k + 1$, $TOL = \mu TOL$ and return to step 2.

5. The current solution $(x^k, y^k)$ of (LPR) and (LDR) gives an optimal solution $X$ for (SDP) and $y$ for (SDD) (see Lemma 1).

Note. It is also possible to add $s$ constraints to (LDR) at once; in this case use the $s$ most negative eigenvalues and the corresponding eigenvectors. But then it is sometimes necessary to drop some constraints; see Krishnan [6].
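A minimal MATLAB sketch of the cut generation in steps 3 and 4, assuming the helpers from Section 2 and a matrix Dset collecting the vectors $d_i$ (all names are our own):

% Sketch of steps 3-4: find the most violated cut at the current iterate y.
S        = C - opAt(Acell, y);        % dual slack C - A^T y
[V, E]   = eig((S + S') / 2);
[lam, j] = min(diag(E));              % lambda = lambda_min(C - A^T y)
d        = V(:, j);                   % corresponding unit eigenvector
LB = max(LB, b' * y + lam * a);       % a from Assumption 1
if UB - LB <= epsilon1 || lam >= -epsilon2
    % current (x, y) is accepted as optimal (step 5)
else
    Dset = [Dset, d];                 % add the cut d d^T . (C - A^T y) >= 0
end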

5 Block Diagonal Cutting Plane Scheme

The non-polyhedral block diagonal cutting plane algorithm is similar to the algorithm discussed in Section 4. Actually, this algorithm can nearly be considered a generalization of the polyhedral cutting plane algorithm; this is not exactly the case, but it is possible to see the connection between the two algorithms. It is discussed in Oskoorouchi and Goffin [12], and there is also a short overview in Krishnan and Mitchell [8]. Let us denote the multiplicity of $\lambda_{\min}(C - \mathcal{A}^T y)$ by $r$. The main idea is to add the semidefinite constraint

$$\sum_{i=1}^m y_i (D^T A_i D) \preceq D^T C D,$$

where $D \in \mathbb{R}^{n \times r}$, with $D^T D = I_r$, is a matrix whose columns form an eigenbasis of the eigenspace of $C - \mathcal{A}^T y$ for the eigenvalue $\lambda_{\min}(C - \mathcal{A}^T y)$, instead of the linear constraints

$$\sum_{i=1}^m y_i (d_j^T A_i d_j) \leq d_j^T C d_j, \quad j = 1, \ldots, r,$$

so each $d_j$ is included as a column of the matrix $D$. Thus we can form our problem:

$$\begin{array}{ll} \text{maximize} & b^T y \\ \text{subject to} & \sum_{i=1}^m y_i (D_j^T A_i D_j) \preceq D_j^T C D_j, \quad j = 1, \ldots, k. \end{array} \qquad (LDR')$$

The dual of the changed relaxation, and therefore the primal of the changed problem, is

$$\begin{array}{ll} \text{minimize} & C \bullet \left( \sum_{i=1}^k D_i V_i D_i^T \right) \\ \text{subject to} & \mathcal{A}\left( \sum_{i=1}^k D_i V_i D_i^T \right) = b, \\ & V_i \succeq 0, \quad i = 1, \ldots, k. \end{array} \qquad (LPR')$$

With the knowledge of (LPR') and (LDR') it is possible to formulate the algorithm.

Algorithm 2.

1. Choose the initial set of constraints for (LDR'), the tolerance $TOL$ with which (LDR') and (LPR') are solved, and the termination parameters $\epsilon_1, \epsilon_2 > 0$. Set the lower and upper bounds to $LB = -\infty$ and $UB = +\infty$ respectively. If desired, choose an update parameter $\mu$ for the tolerance.

2. Find an approximate solution $(X^k, y^k)$ to (LPR') and (LDR') with tolerance $TOL$.

3. Compute $\lambda = \lambda_{\min}(C - \mathcal{A}^T y^k)$ and an orthonormal matrix $D_k \in \mathbb{R}^{n \times r_k}$, where $r_k$ is the multiplicity of this eigenvalue. Update $LB = \max\{LB, b^T y^k + \lambda a\}$ and $UB = \min\{UB, b^T y^k\}$.

4. If $UB - LB \leq \epsilon_1$ or $\lambda \geq -\epsilon_2$, then we have found an optimal solution; stop. Otherwise add the following constraint to (LDR'):

$$\sum_{i=1}^m y_i (D_k^T A_i D_k) \preceq D_k^T C D_k.$$

5. Set $k = k + 1$, $TOL = \mu TOL$ and go to step 2.

One may recognize that this algorithm is the same as the polyhedral cutting plane algorithm of Section 4 as long as the multiplicity of $\lambda$ is one, which shows the connection between the two algorithms. It is also possible to obtain $(X^k, y^k)$ with the help of the analytic center; this is well discussed in [12].
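Numerically, the multiplicity $r_k$ in step 3 has to be decided by a tolerance. A minimal MATLAB sketch, where the threshold and all names are our own assumptions:

% Sketch of step 3: orthonormal eigenbasis D_k for lambda_min(C - A^T y).
S      = C - opAt(Acell, y);
[V, E] = eig((S + S') / 2);
lam    = diag(E);
lmin   = min(lam);
% treat eigenvalues within a small tolerance of lmin as one cluster
idx    = find(lam <= lmin + 1e-8 * max(1, abs(lmin)));
Dk     = V(:, idx);                 % n-by-r_k with orthonormal columns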

6 Polyhedral Bundle Scheme

Another way to improve the polyhedral cutting plane algorithm is to use a proximal bundle approach, as presented for example in Kiwiel [4]. We assume again that Assumptions 1 and 2 hold. The idea is to maximize $b^T y - (u/2)\|y - y^k\|^2$ instead of maximizing $b^T y$. With this method we penalize big steps away from the current iterate $y^k$: with a small $u$ we penalize big steps less than with a big $u$. If $u$ is too big, the algorithm is forced to perform steps which are close to null steps, which is not that useful. There are many proposed choices for $u$, but all of them keep the solution bounded. By changing the objective function as described, we get a changed version of (LDR) from Section 4:

$$\begin{array}{ll} \text{maximize} & b^T y - \frac{u}{2}\|y - \hat y\|^2 \\ \text{subject to} & d_i d_i^T \bullet (C - \mathcal{A}^T y) \geq 0, \quad i = 1, \ldots, k. \end{array} \qquad (BD)$$

Only the objective function changes compared to (LDR); the constraints stay the same. We also get the Lagrangian dual

$$\begin{array}{ll} \text{minimize} & \frac{1}{2u}\|b - \mathcal{A}(X)\|^2 + b^T \hat y + (C - \mathcal{A}^T \hat y) \bullet X \\ \text{subject to} & X = \sum_{i=1}^k x_i d_i d_i^T, \\ & \sum_{i=1}^k x_i = 1, \\ & x_i \geq 0, \quad i = 1, \ldots, k. \end{array} \qquad (BP)$$

Due to strong duality, (BD) and (BP) have the same objective value. Maximizing the Lagrangian of (BD) over $y$ gives $b - \mathcal{A}(X) - u(y - \hat y) = 0$, so we have the relation

$$y = \hat y + \frac{1}{u}\left(b - \mathcal{A}(X)\right). \qquad (1)$$

Algorithm 3.

1. Let $y^1 \in \mathbb{R}^m$ and set $\hat y^1 = y^1$. Let $p^1$ be the normalized eigenvector corresponding to $\lambda_{\min}(C - \mathcal{A}^T y^1)$. Also choose the weight $u > 0$, an improvement parameter $\nu \in (0, 1)$ and finally a termination parameter $\epsilon > 0$.

2. At the $k$-th iteration, compute $X^{k+1}$ from (BP) and $y^{k+1}$ from equation (1) with $\hat y = \hat y^k$. Also let

$$f_{X^{k+1}}(y^{k+1}) = b^T y^{k+1} + (C - \mathcal{A}^T y^{k+1}) \bullet X^{k+1}.$$

3. If $f_{X^{k+1}}(y^{k+1}) - f(\hat y^k) \leq \epsilon$, stop. Here $f(y) = b^T y + \lambda_{\min}(C - \mathcal{A}^T y)$ is the objective of the eigenvalue formulation (taking $a = 1$ in Assumption 1).

4. Compute $f(y^{k+1})$ and the eigenvector $p^{k+1}$ corresponding to $\lambda_{\min}(C - \mathcal{A}^T y^{k+1})$.

5. If $f(y^{k+1}) - f(\hat y^k) \geq \nu\left(f_{X^{k+1}}(y^{k+1}) - f(\hat y^k)\right)$, then perform a serious step, i.e. $\hat y^{k+1} = y^{k+1}$. Else perform a null step, i.e. $\hat y^{k+1} = \hat y^k$.

6. Let $k = k + 1$ and return to step 2.
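A minimal MATLAB sketch of steps 2-5, assuming $a = 1$, the helpers from Section 2, and that Xnew solves (BP); all names are our own:

% Sketch of one bundle iteration: candidate point, model value, descent test.
y_new   = yhat + (b - opA(Acell, Xnew)) / u;                   % relation (1)
f_model = b' * y_new + trace((C - opAt(Acell, y_new)) * Xnew); % f_X(y_new)
f_new   = b' * y_new + min(eig(C - opAt(Acell, y_new)));       % f(y_new)
if f_model - f_hat <= epsilon
    % stop: the model cannot improve on the stability centre
elseif f_new - f_hat >= nu * (f_model - f_hat)
    yhat = y_new; f_hat = f_new;   % serious step: move the stability centre
end                                % else null step: keep yhat, enrich bundle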

7 Spectral Bundle Method

In this section we discuss the spectral bundle method due to Helmberg and Rendl [2]; Krishnan and Mitchell [8] also discuss this method briefly. It too uses the bundle idea, but the big difference is that we transform the problem into an eigenvalue optimization problem and solve that. It is considered to be a good solver. The considered problem is (SDP) with dual (SDD). We also assume that strong duality holds, and therefore for any optimal solution $X$ of (SDP) and any optimal solution $Z$ of (SDD) it holds that $X \bullet Z = 0$. Further we assume that Assumption 1,

$$\mathcal{A}(X) = b \ \Rightarrow \ \mathrm{tr}(X) = a, \quad a > 0,$$

holds. Then we can add $\mathrm{tr}(X) = a$ as a redundant constraint to (SDP). From this new problem with the added constraint we get a new dual (SDD') of (SDP):

$$\begin{array}{ll} \text{maximize} & a\lambda + b^T y \\ \text{subject to} & Z = C - \mathcal{A}^T y - \lambda I \succeq 0. \end{array} \qquad (SDD')$$

Since $a > 0$, $X$ cannot be zero at the optimum, and therefore any optimal $Z$ must be singular. Hence all dual optimal solutions satisfy $\lambda_{\min}(Z) = 0$, which leads to $\lambda = \lambda_{\min}(C - \mathcal{A}^T y)$. Thus it follows that (SDD) and (SDD') are equivalent to the problem

$$\max_y \ a\,\lambda_{\min}(C - \mathcal{A}^T y) + b^T y.$$

To avoid complicated and inconvenient notation we set $a = 1$, and we get the eigenvalue problem

$$\max_y \ \lambda_{\min}(C - \mathcal{A}^T y) + b^T y, \qquad (E)$$

which is a well known and studied problem. We are mainly interested in the function

$$f(y) = \lambda_{\min}(C - \mathcal{A}^T y) + b^T y.$$

Now we want to majorize $f$ by $\hat f$. We set

$$\hat f(y) = \min\{(C - \mathcal{A}^T y) \bullet W : W \in \hat W\},$$

where

$$\hat W = \{\alpha \bar W + P V P^T : \alpha + \mathrm{tr}(V) = 1, \ \alpha \geq 0, \ V \succeq 0\}$$

with $P \in \mathbb{R}^{n \times r}$, $P^T P = I$, $\bar W \succeq 0$ and $\mathrm{tr}(\bar W) = 1$. Since $\hat W$ is contained in $\{W \succeq 0 : \mathrm{tr}(W) = 1\}$ and $\lambda_{\min}(S) = \min\{S \bullet W : W \succeq 0, \mathrm{tr}(W) = 1\}$, we indeed have $\hat f \geq f$. Using the idea of the proximal bundle method we get the following algorithm; the weight $u$ is a free choice.

Algorithm 4.

1. (Initialization) Start with an initial point $y^0 \in \mathbb{R}^m$, a normalized eigenvector $v^0$ for $\lambda_{\min}(C - \mathcal{A}^T y^0)$, an $\epsilon > 0$ for termination, an improvement parameter $\nu \in (0, \frac{1}{2})$ and a weight $u > 0$. Set $k = 0$, $x^0 = y^0$, $P^0 = v^0$ and $\bar W^0 = v^0 (v^0)^T$.

2. In the $k$-th iteration, solve

$$\min_{W \in \hat W^k} \ (C - \mathcal{A}^T x^k) \bullet W + b^T x^k + \frac{1}{2u}\left(\mathcal{A}(W) - b\right)^T\left(\mathcal{A}(W) - b\right)$$

to get $W^{k+1}$, and set

$$y^{k+1} = x^k + \frac{1}{u}\left(b - \mathcal{A}(W^{k+1})\right).$$

Decompose the corresponding $V$ into $V = Q_1 \Lambda_1 Q_1^T + Q_2 \Lambda_2 Q_2^T$, where $\Lambda_1$ contains the large eigenvalues, and compute

$$\bar W^{k+1} = \frac{1}{\alpha + \mathrm{tr}(\Lambda_2)}\left(\alpha \bar W^k + P^k Q_2 \Lambda_2 (P^k Q_2)^T\right).$$

3. Compute $\lambda_{\min}(C - \mathcal{A}^T y^{k+1})$ and the corresponding eigenvector $v^{k+1}$. Compute $P^{k+1}$ as an orthonormal basis of $[P^k Q_1, v^{k+1}]$.

4. (Termination) If $\hat f_k(y^{k+1}) - f(x^k) \leq \epsilon$, stop.

5. If $f(y^{k+1}) - f(x^k) \geq \nu\left(\hat f_k(y^{k+1}) - f(x^k)\right)$, then set $x^{k+1} = y^{k+1}$ (serious step). Otherwise set $x^{k+1} = x^k$ (null step).

6. Set $k = k + 1$ and go to step 2.

One may see the similarities between this algorithm and the polyhedral bundle algorithm, but it is also obvious that the subproblems to be solved in the individual steps are different.
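A minimal MATLAB sketch of step 3, reusing the assumed helper opAt from Section 2; Q1 is the eigenvector block kept in step 2:

% Sketch of step 3: smallest eigenpair of C - A^T y and bundle update.
S      = C - opAt(Acell, y);
[V, E] = eig((S + S') / 2);
[~, j] = min(diag(E));
v      = V(:, j);               % eigenvector for lambda_min(C - A^T y)
P      = orth([P * Q1, v]);     % orthonormal basis spanning [P*Q1, v]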

8 Primal Active Set Approach

There exists also a primal active set approach, which solves a semidefinite programming problem as a sequence of smaller SDPs. Like the simplex method for linear programming, it uses the notion of extreme points. Unfortunately, documentation on this algorithm is scarce and hard to obtain, so we present the algorithm as it is done by Mitchell and Krishnan in [8]. There are some similarities to the other algorithms, but the simplex approach sets it somewhat apart from them.

Algorithm 5.

1. Consider an extreme point solution $X^1 = P_1 V_1 P_1^T$. Set the lower and upper bounds to $LB = -\infty$ and $UB = +\infty$ respectively, and choose an $\epsilon > 0$ for precision.

2. In the $k$-th iteration, choose a subset of $m$ independent equations from

$$S_{11} = P_1^T (C - \mathcal{A}^T y) P_1 = 0, \qquad S_{12} = P_1^T (C - \mathcal{A}^T y) P_2 = 0,$$

and solve the resulting system $S_B = 0$ for $y^{k+1}$.

3. If $S^{k+1} = C - \mathcal{A}^T y^{k+1} \succeq 0$, stop. Otherwise update $\bar P^{k+1}$ to be the columns appearing in the system $S_B = 0$. Compute the normalized eigenvector $p^{k+1}$ of $\lambda_{\min}(S^{k+1})$ and set $P^{k+1} = \mathrm{orth}[\bar P^{k+1}, p^{k+1}]$. Update $LB = \max\{LB, b^T y^{k+1} + \lambda a\}$ and $UB = \min\{UB, b^T y^{k+1}\}$. If $UB - LB < \epsilon$, stop.

4. Solve

$$\begin{array}{ll} \text{minimize} & (P^{k+1\,T} C P^{k+1}) \bullet V \\ \text{subject to} & (P^{k+1\,T} A_i P^{k+1}) \bullet V = b_i, \quad i = 1, \ldots, m, \\ & V \succeq 0. \end{array}$$

Let $V^{k+1} = R_1^{k+1} M^{k+1} R_1^{k+1\,T}$ with $M^{k+1} \succ 0$. Set $P_1^{k+1} = P^{k+1} R_1^{k+1}$ and $X^{k+1} = P_1^{k+1} M^{k+1} P_1^{k+1\,T}$.

5. If $X^{k+1}$ is not an extreme point, use Algorithm 1 in Chapter 4 of Pataki [13], and return to step 1.

9 Generic Cutting Plane Method

Several cutting plane methods have been presented in this paper. Now we want to describe a generic cutting plane method, as done in Krishnan and Mitchell [8]. We assume that Assumptions 1 and 2 hold; for simplicity we consider the special case of Assumption 1 where $\mathcal{A}(X) = b$ implies $\mathrm{tr}(X) = 1$. On one side there is the semi-infinite approach, as discussed in Section 4, where we transform (SDD) into the semi-infinite problem

$$\begin{array}{ll} \text{max} & b^T y \\ \text{s.t.} & dd^T \bullet \mathcal{A}^T y \leq dd^T \bullet C \quad \forall d : \|d\|_2 = 1. \end{array} \qquad (LDM)$$

The advantage of the (LDM) formulation is that we can reduce the number of involved variables. We construct a relaxation of (LDM) by discretizing $\|d\|_2 = 1$ with a finite set of vectors $\{d_i, i = 1, \ldots, k\}$. We obtain the following discretization of (LDM):

$$\begin{array}{ll} \text{max} & b^T y \\ \text{s.t.} & d_i d_i^T \bullet \mathcal{A}^T y \leq d_i d_i^T \bullet C, \quad i = 1, \ldots, k, \end{array} \qquad (LDR)$$

with the dual

$$\begin{array}{ll} \text{min} & C \bullet \left(\sum_{i=1}^k x_i d_i d_i^T\right) \\ \text{s.t.} & \mathcal{A}\left(\sum_{i=1}^k x_i d_i d_i^T\right) = b, \quad x \geq 0. \end{array} \qquad (LPR)$$

On the other side we used the bundle methods. So finally we get a generic cutting plane algorithm.

Algorithm 6.

1. Choose an initial point $\hat y$, an initial finite set $\mathcal{D} = \{D_i\}$ and a scalar $u \geq 0$.

2. Solve the subproblem

$$\begin{array}{ll} \text{maximize} & \lambda + b^T y - \frac{u}{2}\|y - \hat y\|^2 \\ \text{subject to} & D_i^T (C - \mathcal{A}^T y) D_i \succeq \lambda I, \quad i \in \mathcal{D}, \end{array}$$

to get $(y^*, \lambda^*)$.

3. If $S^* = C - \mathcal{A}^T y^* \succeq 0$, we are optimal; stop. Otherwise find $D \in \mathbb{R}^{n \times r}$, with $r \leq 2m$, such that $D^T (C - \mathcal{A}^T y^*) D \preceq 0$.

4. Add $D$ to $\mathcal{D}$, or aggregate $D$ into $\mathcal{D}$.

5. Set $\hat y = y^*$ and return to step 2.

The variable $\lambda$ in the subproblem is the multiplier corresponding to the constraint $\mathrm{tr}(X) = 1$. One recognizes that for $u = 0$ we recover the algorithms which do not use the proximal bundle idea.
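As an illustration, the subproblem in step 2 can be posed almost verbatim in YALMIP. A hypothetical sketch, assuming the helper opAt from Section 2, the constraint matrices in a cell array Acell, and the blocks collected in a cell array Dset:

% Hypothetical YALMIP sketch of the generic subproblem (step 2).
yv  = sdpvar(m, 1);                 % dual variables y
lam = sdpvar(1);                    % multiplier for tr(X) = 1
F   = [];
for i = 1:numel(Dset)
    Di = Dset{i};
    F  = [F, Di' * (C - opAt(Acell, yv)) * Di >= lam * eye(size(Di, 2))];
end
obj = lam + b' * yv - (u / 2) * (yv - yhat)' * (yv - yhat);
solvesdp(F, -obj);                  % YALMIP minimizes, so negate
ystar   = double(yv);
lamstar = double(lam);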

10 Conclusion

We presented several cutting plane methods: the polyhedral cutting plane method, the polyhedral bundle method, the spectral bundle method, the block diagonal cutting plane method and the primal active set method. These methods all work with relaxations of (SDP) and (SDD). We saw that they all follow a similar algorithmic pattern, which we presented as the generic method in Section 9. But we did not yet compare the different algorithms. The algorithm of Section 4 can be implemented to run in polynomial time; computational results for this algorithm are available in [5] and [6]. It can be compared with interior point methods, even if its convergence is rather poor. Algorithm 2 can also be implemented to run in polynomial time. According to [8], the spectral bundle algorithm described in Section 7 showed good results for large problems which could hardly be solved by interior point methods; hence it is an interesting algorithm. The method of Section 8 tries to apply the simplex method to semidefinite programming, but it does not always directly produce an extreme point, and therefore it is necessary to use another algorithm to find one. The computational performance of this algorithm is apparently still under discussion. In summary, there are some cutting plane algorithms which are interesting in view of computational performance: it is possible to warm start them, and at least one is very efficient for large semidefinite programs.

11 Implementation

In order to solve semidefinite programming problems with the help of a computer, we decided to implement a cutting plane method. The chosen algorithm was the polyhedral cutting plane algorithm presented in Section 4. The algorithm was implemented in MATLAB with the help of the modelling language YALMIP [10], which simplifies the handling of constraints and objective function and then uses an external solver, such as MATLAB's built-in solver or another solver such as CDD. YALMIP is free of charge and openly distributed, and therefore easy to access. To implement the polyhedral cutting plane algorithm we followed Algorithm 1, with some slight changes which became necessary during the implementation. The corresponding MATLAB function is

function [X,y] = cuttingplane(C, A, b, epsilon1, epsilon2)

The returned values correspond to the primal and dual solutions of (SDP) and (SDD). The input variables C, A and b are as in (SDP); epsilon1 and epsilon2 are the values for the stopping conditions, denoted $\epsilon_1$ and $\epsilon_2$ in the algorithm. Besides the standard MATLAB functions and YALMIP, the function uses the function Ay = opAt(A,y). Hence it is necessary to have YALMIP installed on the computer in order to use cuttingplane. The file opat.m, which is part of the code for the algorithm, should also be saved in the same directory as cuttingplane.m. YALMIP is used to solve (LDR), which is a main step of the algorithm, and was chosen because it allows handling the constraints of a linear programming problem in a simple way that corresponds to the notation in the algorithm. The disadvantage is that it takes YALMIP some time to translate the constraints into the form required by the solver, but this is compensated by the fact that we do not have to do this ourselves. opAt is a small function that calculates $\mathcal{A}^T y$. For an unknown reason, the stopping condition based on the lower and upper bounds as described in this document was not working: $UB - LB$ did not converge to zero. Therefore this stopping condition

was changed to the condition $\|y^k - y^{k-1}\| \leq \epsilon_1$. The stopping condition on $\lambda$ was not affected, and the algorithm seems to work well with this changed condition. Another small change is that, instead of letting the user define the tolerance with which (LDR) is solved, the standard value of YALMIP was used; it is therefore also not possible to reduce the tolerance for solving (LDR) over the iterations. One may remember that we made an assumption in order to use the polyhedral cutting plane algorithm, namely that $\mathcal{A}(X) = b$ implies $\mathrm{tr}(X) = a$ for some constant $a$. In the implementation we avoided passing this $a$ with the input variables, to avoid confusion. Since $a$ was mainly used in the stopping condition based on the lower and upper bounds, which was removed, this did not influence the implementation of the algorithm. It is thus also possible to solve problems which do not have a constant trace and therefore do not fulfill the assumption. For the problems on which the algorithm was tested, no difficulties arose from the trace not being constant. However, it cannot be totally assured that this is the case for all possible SDP problems; but since the trace of $X$ is approximated during the iterations of the algorithm and converges, it should work for a large part of the problems. It can also happen that YALMIP returns NaN as a value of the optimal solution when solving (LPR). Since it is not possible to compute, for example, eigenvalues of matrices containing NaN, those values are replaced by 0. This avoids problems in the first iterations, where the problem does not yet have many constraints. Another issue was problems that led to an unbounded solution of (LPR). The attempt to handle this by creating new constraints did not work out. The idea was to add the eigenvector corresponding to the maximal eigenvalue

$$\lambda_{\max}\left(\sum_{j=1}^m r_j A_j\right),$$

where $r$ is the descent direction for (LDR), to the set of constraints. But as already said, this did not solve the problem. Therefore this procedure was

not added to the algorithm. Instead, the algorithm stops and reports the problem indicated by the solver, e.g. an unbounded problem. The source code of the implementation of the polyhedral cutting plane algorithm can be found on www/cuttingplane/.
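For illustration, a hypothetical call of cuttingplane on a small SDP; we assume here that the constraint matrices are passed as a cell array, although the actual calling convention of the implementation may differ:

% Hypothetical usage of cuttingplane on a small SDP.
C = [2 1; 1 5];
A = {eye(2), [1 0; 0 0]};   % two constraint matrices (second one made up)
b = [2; 1];
[X, y] = cuttingplane(C, A, b, 1e-5, 1e-5);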

12 Ellipsoid Problem

In this section we present a semidefinite programming problem which is used to test the implementation of the polyhedral cutting plane algorithm. This problem comes from Vandenberghe and Boyd [15] and is a geometrical problem. Suppose that we are given $k$ ellipsoids $\varepsilon_1, \ldots, \varepsilon_k$; we want to find the ball with minimal radius containing all the ellipsoids. The ellipsoids can be described with the help of quadratic functions

$$f_i(x) = x^T A_i x + 2 b_i^T x + c_i, \quad i = 1, \ldots, k,$$

where $\varepsilon_i = \{x \mid f_i(x) \leq 0\}$. It is possible to express the fact that one ellipsoid contains another in matrix form. If we have the ellipsoids $\varepsilon = \{x \mid f(x) \leq 0\}$ and $\tilde\varepsilon = \{x \mid \tilde f(x) \leq 0\}$ with the corresponding functions $f(x) = x^T A x + 2 b^T x + c$ and $\tilde f(x) = x^T \tilde A x + 2 \tilde b^T x + \tilde c$, then $\varepsilon$ contains $\tilde\varepsilon$ if and only if there exists a $\tau \geq 0$ such that

$$\begin{bmatrix} A & b \\ b^T & c \end{bmatrix} \preceq \tau \begin{bmatrix} \tilde A & \tilde b \\ \tilde b^T & \tilde c \end{bmatrix}.$$

Now, since a ball is also an ellipsoid, it can be described in the same form: the ball $S$ can be represented by $f(x) = x^T x - 2 x_c^T x + \gamma$. Therefore, for a ball $S$ which contains all ellipsoids $\varepsilon_1, \ldots, \varepsilon_k$ there must exist $\tau_1, \ldots, \tau_k \geq 0$ such that

$$\begin{bmatrix} I & -x_c \\ -x_c^T & \gamma \end{bmatrix} \preceq \tau_i \begin{bmatrix} A_i & b_i \\ b_i^T & c_i \end{bmatrix} \quad \text{for } i = 1, \ldots, k. \qquad (2)$$

Since we want to minimize the ball's radius $r = \sqrt{x_c^T x_c - \gamma}$, we use $r^2 \leq t$, which can be represented by the matrix inequality

$$\begin{bmatrix} I & x_c \\ x_c^T & \gamma + t \end{bmatrix} \succeq 0, \qquad (3)$$

and we minimize $t$.

Gathering all this information together, we get the following semidefinite program:

$$\begin{array}{ll} \text{minimize} & t \\ \text{subject to} & \begin{bmatrix} I & -x_c \\ -x_c^T & \gamma \end{bmatrix} \preceq \tau_i \begin{bmatrix} A_i & b_i \\ b_i^T & c_i \end{bmatrix}, \quad \tau_i \geq 0, \quad i = 1, \ldots, k, \\ & \begin{bmatrix} I & x_c \\ x_c^T & \gamma + t \end{bmatrix} \succeq 0. \end{array}$$

In this program the variables are $x_c$, $\tau_1, \ldots, \tau_k$, $\gamma$ and $t$. But this problem is not in the standard form which must be known to solve it with the help of our implemented cutting plane solver, so we are going to transform it into standard form. First, one may recognize that it is possible to rewrite the $k$ inequalities (2) as one block diagonal matrix inequality; in the same manner we can add the other constraints to this matrix to obtain one big block diagonal matrix inequality. Then we need to separate the variables and constants in (2) and (3). Since our semidefinite program is closer to the dual form than to the primal form, we want to transform it into the form

$$\begin{array}{ll} \text{maximize} & b^T y \\ \text{subject to} & \sum_{i=1}^m y_i A_i \preceq C. \end{array}$$

Therefore we separate the variables and the corresponding matrices. For example, the $i$-th block of (2) becomes

$$x_{c1}\begin{bmatrix} 0 & -e_1 \\ -e_1^T & 0 \end{bmatrix} + \cdots + x_{cn}\begin{bmatrix} 0 & -e_n \\ -e_n^T & 0 \end{bmatrix} + \gamma\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} - \tau_i\begin{bmatrix} A_i & b_i \\ b_i^T & c_i \end{bmatrix} \preceq -\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}.$$

Since the notation for ellipsoids in an $n$-dimensional space is not convenient to write down, we now consider the 2-dimensional space; it is possible to do the same thing without any problems in higher dimensions. Let us set

$$y = \begin{bmatrix} x_{c1} & x_{c2} & \gamma & t & \tau_1 & \cdots & \tau_k \end{bmatrix}^T.$$

Now it is necessary to find the corresponding matrices, denoted $B_i$ since $A_i$ is already used. The matrix corresponding to $x_{c1}$ is a block diagonal matrix with

$$\begin{bmatrix} 0 & -e_1 \\ -e_1^T & 0 \end{bmatrix}$$

for the first $k$ blocks corresponding to (2), then a $k \times k$ zero block, and then the matrix from the beginning again, corresponding to (3). The matrix for $x_{c2}$ is the same, except that $e_1$ is replaced by $e_2$. For $\gamma$ there are first $k$ matrices of the form

$$\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix},$$

followed also by the $k \times k$ zero block and, at the end, the block

$$\begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix}.$$

This block diagonal matrix is multiplied by $\gamma$. The matrix which is multiplied by $t$ has the same size as the others and consists of zeros except for the last block, which is

$$\begin{bmatrix} 0 & 0 \\ 0 & -1 \end{bmatrix}.$$

The matrix corresponding to $\tau_i$, $i = 1, \ldots, k$, is zero in the first $k$ blocks except for the $i$-th block, which is

$$-\begin{bmatrix} A_i & b_i \\ b_i^T & c_i \end{bmatrix}.$$

The next block is zero except for the $i$-th diagonal element, which is $-1$ (this encodes $\tau_i \geq 0$); the last block is $0$. The $C$ matrix is

$$-\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}$$

for the first $k$ blocks, then follows the $k \times k$ zero block, and the last block is

$$\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}.$$

The only component missing now is $b$, which is a zero vector except for the fourth component, corresponding to $t$; this component is $-1$, since $t$ is minimized but the dual form is a maximization. Then we have a problem of the form

$$\begin{array}{ll} \text{maximize} & b^T y \\ \text{subject to} & \sum_{i=1}^m y_i B_i \preceq C, \end{array}$$

where the $B_i$ are the matrices introduced before and $m = k + 4$. Hence this problem can also be written in the primal form

$$\begin{array}{ll} \text{minimize} & C \bullet X \\ \text{subject to} & B_i \bullet X = b_i, \quad i = 1, \ldots, m, \\ & X \succeq 0. \end{array}$$

So we have transformed the ellipsoid problem into standard form, and it is now possible to test the implemented algorithm on it.
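Alternatively, the ellipsoid SDP can be posed directly in YALMIP without the manual transformation, which is useful as a cross-check. A hypothetical sketch for $k$ ellipses in $\mathbb{R}^2$ stored in cell arrays A, bb and c (all names are our own):

% Hypothetical YALMIP sketch of the minimum enclosing ball of k ellipses.
xc    = sdpvar(2, 1);       % centre of the ball
gamma = sdpvar(1);          % constant term of the ball's quadratic
t     = sdpvar(1);          % bound on the squared radius
tau   = sdpvar(k, 1);       % one multiplier per ellipse

F = [];
for i = 1:k
    % containment LMI (2) together with tau_i >= 0
    F = [F, tau(i) >= 0, ...
         tau(i) * [A{i}, bb{i}; bb{i}', c{i}] ...
           - [eye(2), -xc; -xc', gamma] >= 0];
end
% r^2 = xc'*xc - gamma <= t via the Schur complement (3)
F = [F, [eye(2), xc; xc', gamma + t] >= 0];

solvesdp(F, t);             % minimize the squared-radius bound
center = double(xc);
radius = sqrt(center' * center - double(gamma));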

13 Numerical Results and Test Problems

In this chapter two different tests of the algorithm are presented: one small example and an ellipsoid problem example. The algorithm was tested with YALMIP using MATLAB's standard solver for linear programming, linprog. First we consider a small semidefinite programming problem with constant trace:

$$\begin{array}{ll} \text{minimize} & \begin{bmatrix} 2 & 1 \\ 1 & 5 \end{bmatrix} \bullet X \\ \text{subject to} & I \bullet X = 2, \\ & A_1 \bullet X = 1, \end{array}$$

for a given second constraint matrix $A_1 \in S^2$. We used $\epsilon_1 = \epsilon_2 = 10^{-5}$ as precision parameters. With this setting a result was found in 8 iterations, and the computed $X$ agrees with the optimal solution obtained in another way. This was just a small problem to test whether the algorithm works. To test it on a bigger problem we consider an ellipsoid problem as presented in Section 12; since this is a geometrical problem, it is rather easy to check the obtained result graphically. We consider five ellipses in 2-dimensional space, each given by a matrix $A_i \in S^2$, a vector $b_i \in \mathbb{R}^2$ and a scalar $c_i$:

$$\varepsilon_i = \{x \in \mathbb{R}^2 : x^T A_i x + 2 b_i^T x + c_i \leq 0\}, \quad i = 1, \ldots, 5. \qquad (4)$$

The goal is to find the circle with minimal radius which contains all the ellipses. This problem can be expressed in standard semidefinite programming form as shown above. When we solve the problem with cuttingplane we get a result. The optimal solution of the primal problem is not particularly

interesting, since the basic problem is formulated in the dual form; the interesting part is the dual vector $y$, which contains the centre $x_c$, the scalars $\gamma$ and $t$, and the multipliers $\tau_i$. The solution is presented graphically in Figure 1. The green ellipses correspond to (4); the red circle is the optimal solution.

Figure 1: Solution of the ellipsoid problem

One can check graphically that the red circle is indeed optimal, and that the algorithm also works for problems where the constant-trace assumption is dropped. This solution was found in 45 iterations; the stopping parameters were $\epsilon_1 = \epsilon_2 = 10^{-5}$. By analyzing where the time is spent during the process, one recognizes that a lot of it is used by functions of YALMIP, especially by solvesdp. Actually, the time for solving the problem was rather high, mainly due to the reformulation of the problem into the form required by linprog. Therefore the algorithm would probably become a lot faster if we formulated the

problem in another way such that it takes less time to transform it. It must also be mentioned that linprog is probably not the best solver available, and there could be some gains from using another one. Other improvements of the algorithm have already been discussed in Section 4; for example, one may consider adding more than one constraint to the set of all constraints during an iteration. After all, the algorithm works well, but it could be optimized, and there are some possible improvements. One should also not forget that this algorithm is not considered to be the best of the presented algorithms, but it contains the main idea of a cutting plane approach to semidefinite programming.

References

[1] D. Goldfarb. The simplex method for conic programming. Technical report, CORC, Columbia University.

[2] C. Helmberg and F. Rendl. A spectral bundle method for semidefinite programming. SIAM J. Optim., 10(3) (electronic).

[3] C. Helmberg, F. Rendl, R. J. Vanderbei, and H. Wolkowicz. An interior-point method for semidefinite programming. SIAM J. Optim., 6(2).

[4] K. C. Kiwiel. Proximity control in bundle methods for convex nondifferentiable minimization. Math. Programming, 46(1, Ser. A).

[5] K. Krishnan and J. E. Mitchell. A linear programming approach to semidefinite programming problems. Technical report, Department of Mathematical Sciences, Rensselaer Polytechnic Institute.

[6] K. Krishnan. Linear programming (LP) approaches to semidefinite programming (SDP) problems. PhD thesis, Rensselaer Polytechnic Institute.

[7] K. Krishnan and J. E. Mitchell. Semi-infinite linear programming approaches to semidefinite programming problems. In Novel Approaches to Hard Discrete Optimization (Waterloo, ON, 2001), volume 37 of Fields Inst. Commun. Amer. Math. Soc., Providence, RI.

[8] K. Krishnan and J. E. Mitchell. A unifying framework for several cutting plane methods for semidefinite programming. Optim. Methods Softw., 21(1):57-74.

[9] M. Laurent and F. Rendl. Semidefinite programming and integer programming. In K. Aardal, G. Nemhauser, and R. Weismantel, editors, Discrete Optimization, volume 12 of Handbooks in Operations Research and Management Science. Elsevier B.V.

[10] J. Löfberg. YALMIP: A toolbox for modeling and optimization in MATLAB. In Proceedings of the CACSD Conference, Taipei, Taiwan.

[11] R. D. C. Monteiro. Primal-dual path-following algorithms for semidefinite programming. SIAM J. Optim., 7(3).

[12] M. R. Oskoorouchi and J.-L. Goffin. The analytic center cutting plane method with semidefinite cuts. SIAM J. Optim., 13(4) (electronic).

[13] G. Pataki. Cone-LP's and semidefinite programs: geometry and a simplex-type method. In Integer Programming and Combinatorial Optimization (Vancouver, BC, 1996), volume 1084 of Lecture Notes in Comput. Sci. Springer, Berlin.

[14] M. J. Todd. Semidefinite optimization. Acta Numer., 10.

[15] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Rev., 38(1):49-95.

[16] Y. Zhang. On extending some primal-dual interior-point algorithms from linear programming to semidefinite programming. SIAM J. Optim., 8(2) (electronic).


More information

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if POLYHEDRAL GEOMETRY Mathematical Programming Niels Lauritzen 7.9.2007 Convex functions and sets Recall that a subset C R n is convex if {λx + (1 λ)y 0 λ 1} C for every x, y C and 0 λ 1. A function f :

More information

LECTURE 6: INTERIOR POINT METHOD. 1. Motivation 2. Basic concepts 3. Primal affine scaling algorithm 4. Dual affine scaling algorithm

LECTURE 6: INTERIOR POINT METHOD. 1. Motivation 2. Basic concepts 3. Primal affine scaling algorithm 4. Dual affine scaling algorithm LECTURE 6: INTERIOR POINT METHOD 1. Motivation 2. Basic concepts 3. Primal affine scaling algorithm 4. Dual affine scaling algorithm Motivation Simplex method works well in general, but suffers from exponential-time

More information

6 Randomized rounding of semidefinite programs

6 Randomized rounding of semidefinite programs 6 Randomized rounding of semidefinite programs We now turn to a new tool which gives substantially improved performance guarantees for some problems We now show how nonlinear programming relaxations can

More information

Solution Methodologies for. the Smallest Enclosing Circle Problem

Solution Methodologies for. the Smallest Enclosing Circle Problem Solution Methodologies for the Smallest Enclosing Circle Problem Sheng Xu 1 Robert M. Freund 2 Jie Sun 3 Tribute. We would like to dedicate this paper to Elijah Polak. Professor Polak has made substantial

More information

College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007

College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007 College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007 CS G399: Algorithmic Power Tools I Scribe: Eric Robinson Lecture Outline: Linear Programming: Vertex Definitions

More information

Convex Optimization. Erick Delage, and Ashutosh Saxena. October 20, (a) (b) (c)

Convex Optimization. Erick Delage, and Ashutosh Saxena. October 20, (a) (b) (c) Convex Optimization (for CS229) Erick Delage, and Ashutosh Saxena October 20, 2006 1 Convex Sets Definition: A set G R n is convex if every pair of point (x, y) G, the segment beteen x and y is in A. More

More information

CS 435, 2018 Lecture 2, Date: 1 March 2018 Instructor: Nisheeth Vishnoi. Convex Programming and Efficiency

CS 435, 2018 Lecture 2, Date: 1 March 2018 Instructor: Nisheeth Vishnoi. Convex Programming and Efficiency CS 435, 2018 Lecture 2, Date: 1 March 2018 Instructor: Nisheeth Vishnoi Convex Programming and Efficiency In this lecture, we formalize convex programming problem, discuss what it means to solve it efficiently

More information

Section Notes 4. Duality, Sensitivity, and the Dual Simplex Algorithm. Applied Math / Engineering Sciences 121. Week of October 8, 2018

Section Notes 4. Duality, Sensitivity, and the Dual Simplex Algorithm. Applied Math / Engineering Sciences 121. Week of October 8, 2018 Section Notes 4 Duality, Sensitivity, and the Dual Simplex Algorithm Applied Math / Engineering Sciences 121 Week of October 8, 2018 Goals for the week understand the relationship between primal and dual

More information

Characterizing Improving Directions Unconstrained Optimization

Characterizing Improving Directions Unconstrained Optimization Final Review IE417 In the Beginning... In the beginning, Weierstrass's theorem said that a continuous function achieves a minimum on a compact set. Using this, we showed that for a convex set S and y not

More information

Linear Programming in Small Dimensions

Linear Programming in Small Dimensions Linear Programming in Small Dimensions Lekcija 7 sergio.cabello@fmf.uni-lj.si FMF Univerza v Ljubljani Edited from slides by Antoine Vigneron Outline linear programming, motivation and definition one dimensional

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 1 A. d Aspremont. Convex Optimization M2. 1/49 Today Convex optimization: introduction Course organization and other gory details... Convex sets, basic definitions. A. d

More information

Tutorial on Convex Optimization for Engineers

Tutorial on Convex Optimization for Engineers Tutorial on Convex Optimization for Engineers M.Sc. Jens Steinwandt Communications Research Laboratory Ilmenau University of Technology PO Box 100565 D-98684 Ilmenau, Germany jens.steinwandt@tu-ilmenau.de

More information

Optimality certificates for convex minimization and Helly numbers

Optimality certificates for convex minimization and Helly numbers Optimality certificates for convex minimization and Helly numbers Amitabh Basu Michele Conforti Gérard Cornuéjols Robert Weismantel Stefan Weltge May 10, 2017 Abstract We consider the problem of minimizing

More information

Convex Geometry arising in Optimization

Convex Geometry arising in Optimization Convex Geometry arising in Optimization Jesús A. De Loera University of California, Davis Berlin Mathematical School Summer 2015 WHAT IS THIS COURSE ABOUT? Combinatorial Convexity and Optimization PLAN

More information

A PARAMETRIC SIMPLEX METHOD FOR OPTIMIZING A LINEAR FUNCTION OVER THE EFFICIENT SET OF A BICRITERIA LINEAR PROBLEM. 1.

A PARAMETRIC SIMPLEX METHOD FOR OPTIMIZING A LINEAR FUNCTION OVER THE EFFICIENT SET OF A BICRITERIA LINEAR PROBLEM. 1. ACTA MATHEMATICA VIETNAMICA Volume 21, Number 1, 1996, pp. 59 67 59 A PARAMETRIC SIMPLEX METHOD FOR OPTIMIZING A LINEAR FUNCTION OVER THE EFFICIENT SET OF A BICRITERIA LINEAR PROBLEM NGUYEN DINH DAN AND

More information

Section Notes 5. Review of Linear Programming. Applied Math / Engineering Sciences 121. Week of October 15, 2017

Section Notes 5. Review of Linear Programming. Applied Math / Engineering Sciences 121. Week of October 15, 2017 Section Notes 5 Review of Linear Programming Applied Math / Engineering Sciences 121 Week of October 15, 2017 The following list of topics is an overview of the material that was covered in the lectures

More information

Optimization under uncertainty: modeling and solution methods

Optimization under uncertainty: modeling and solution methods Optimization under uncertainty: modeling and solution methods Paolo Brandimarte Dipartimento di Scienze Matematiche Politecnico di Torino e-mail: paolo.brandimarte@polito.it URL: http://staff.polito.it/paolo.brandimarte

More information

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture 18 All-Integer Dual Algorithm We continue the discussion on the all integer

More information

In this chapter we introduce some of the basic concepts that will be useful for the study of integer programming problems.

In this chapter we introduce some of the basic concepts that will be useful for the study of integer programming problems. 2 Basics In this chapter we introduce some of the basic concepts that will be useful for the study of integer programming problems. 2.1 Notation Let A R m n be a matrix with row index set M = {1,...,m}

More information

Algebraic Geometry of Segmentation and Tracking

Algebraic Geometry of Segmentation and Tracking Ma191b Winter 2017 Geometry of Neuroscience Geometry of lines in 3-space and Segmentation and Tracking This lecture is based on the papers: Reference: Marco Pellegrini, Ray shooting and lines in space.

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 9 Discrete optimization: theory and algorithms

MVE165/MMG631 Linear and integer optimization with applications Lecture 9 Discrete optimization: theory and algorithms MVE165/MMG631 Linear and integer optimization with applications Lecture 9 Discrete optimization: theory and algorithms Ann-Brith Strömberg 2018 04 24 Lecture 9 Linear and integer optimization with applications

More information

Extensions of Semidefinite Coordinate Direction Algorithm. for Detecting Necessary Constraints to Unbounded Regions

Extensions of Semidefinite Coordinate Direction Algorithm. for Detecting Necessary Constraints to Unbounded Regions Extensions of Semidefinite Coordinate Direction Algorithm for Detecting Necessary Constraints to Unbounded Regions Susan Perrone Department of Mathematics and Statistics Northern Arizona University, Flagstaff,

More information

Contents. I Basics 1. Copyright by SIAM. Unauthorized reproduction of this article is prohibited.

Contents. I Basics 1. Copyright by SIAM. Unauthorized reproduction of this article is prohibited. page v Preface xiii I Basics 1 1 Optimization Models 3 1.1 Introduction... 3 1.2 Optimization: An Informal Introduction... 4 1.3 Linear Equations... 7 1.4 Linear Optimization... 10 Exercises... 12 1.5

More information

CME307/MS&E311 Theory Summary

CME307/MS&E311 Theory Summary CME307/MS&E311 Theory Summary Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/~yyye http://www.stanford.edu/class/msande311/

More information

NOTATION AND TERMINOLOGY

NOTATION AND TERMINOLOGY 15.053x, Optimization Methods in Business Analytics Fall, 2016 October 4, 2016 A glossary of notation and terms used in 15.053x Weeks 1, 2, 3, 4 and 5. (The most recent week's terms are in blue). NOTATION

More information

Some Advanced Topics in Linear Programming

Some Advanced Topics in Linear Programming Some Advanced Topics in Linear Programming Matthew J. Saltzman July 2, 995 Connections with Algebra and Geometry In this section, we will explore how some of the ideas in linear programming, duality theory,

More information

1. Lecture notes on bipartite matching February 4th,

1. Lecture notes on bipartite matching February 4th, 1. Lecture notes on bipartite matching February 4th, 2015 6 1.1.1 Hall s Theorem Hall s theorem gives a necessary and sufficient condition for a bipartite graph to have a matching which saturates (or matches)

More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming SECOND EDITION Dimitri P. Bertsekas Massachusetts Institute of Technology WWW site for book Information and Orders http://world.std.com/~athenasc/index.html Athena Scientific, Belmont,

More information

On the number of distinct directions of planes determined by n points in R 3

On the number of distinct directions of planes determined by n points in R 3 On the number of distinct directions of planes determined by n points in R 3 Rom Pinchasi August 27, 2007 Abstract We show that any set of n points in R 3, that is not contained in a plane, determines

More information

An Extension of the Multicut L-Shaped Method. INEN Large-Scale Stochastic Optimization Semester project. Svyatoslav Trukhanov

An Extension of the Multicut L-Shaped Method. INEN Large-Scale Stochastic Optimization Semester project. Svyatoslav Trukhanov An Extension of the Multicut L-Shaped Method INEN 698 - Large-Scale Stochastic Optimization Semester project Svyatoslav Trukhanov December 13, 2005 1 Contents 1 Introduction and Literature Review 3 2 Formal

More information

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture - 35 Quadratic Programming In this lecture, we continue our discussion on

More information