ADDENDUM TO THE SEDUMI USER GUIDE VERSION 1.1

IMRE PÓLIK

Date: June 16, 2005. Based on the original SeDuMi User Guide and the contents of the SeDuMi help.

1. Introduction

The main goal of this reference guide is to give a summary of all the options in SeDuMi. The default values of the options are satisfactory for general use. If you experience difficulties solving some problems, please report them at the SeDuMi Forum (http://sedumi.mcmaster.ca). For a longer description of SeDuMi and examples of use, consult the old (1.05R5) manual [3].

2. Input format

A full-featured SeDuMi call in Matlab is

>> [x,y,info] = sedumi(A,b,c,K,pars);

where most of the arguments can be omitted. With this call SeDuMi solves the following primal-dual optimization problem:

   min  c^T x          max  b^T y
   s.t. Ax = b         s.t. A^T y + s = c
        x ∈ K               s ∈ K*,

where A ∈ R^{m×n}; x, s, c ∈ R^n; y, b ∈ R^m; K is a cone and K* is its dual cone. Omitting K implies that all the variables are nonnegative. A, b and c contain the problem data; if either b or c is missing, it is treated as zero. The structure K defines the cone in the following way: it can have fields K.f, K.l, K.q, K.r and K.s, for Free, Linear, Quadratic, Rotated quadratic and Semidefinite blocks, respectively. In addition, there are fields K.xcomplex, K.scomplex and K.ycomplex for complex variables.

(1) K.f is the number of FREE, i.e., UNRESTRICTED primal components. The corresponding dual components are restricted to be zero. E.g., if K.f = 2, then x(1:2) is unrestricted and s(1:2)=0. These are ALWAYS the first components in x.

(2) K.l is the number of NONNEGATIVE components. E.g., if K.f=2 and K.l=8, then x(3:10)>=0.

(3) K.q lists the dimensions of LORENTZ (quadratic, second-order cone) constraints. E.g., if K.l=10 and K.q = [3 7], then x(11) >= norm(x(12:13)) and x(14) >= norm(x(15:20)). These components ALWAYS immediately follow the K.l nonnegative ones. If the entries in A and/or c are COMPLEX, then the x-components in norm(x(#,#)) take complex values whenever that is beneficial. Use K.ycomplex to impose constraints on the imaginary part of A*x.

(4) K.r lists the dimensions of ROTATED LORENTZ constraints. E.g., if K.l=10, K.q=[3,7] and K.r = [4 6], then 2*x(21)*x(22) >= norm(x(23:24))^2 and 2*x(25)*x(26) >= norm(x(27:30))^2. These components ALWAYS immediately follow the K.q ones. Just as for the K.q variables, the variables in norm(x(#,#)) are allowed to be complex if you provide complex data. Use K.ycomplex to impose constraints on the imaginary part of A*x.

(5) K.s lists the dimensions of POSITIVE SEMIDEFINITE (PSD) constraints. E.g., if K.l=10, K.q = [3 7] and K.s = [4 3], then mat(x(21:36),4) and mat(x(37:45),3) are PSD. These components are ALWAYS the last entries in x.

K.xcomplex lists the components in the f, l, q, r blocks that are allowed to have nonzero imaginary part in the primal. K.scomplex lists the PSD blocks that are Hermitian rather than real symmetric. Use K.ycomplex to impose constraints on the imaginary part of A*x.

The dual multipliers y have a meaning analogous to the x>=0 case, except that instead of c-A'*y>=0 resp. -A'*y>=0, one should read that c-A'*y resp. -A'*y are in the cone described by K.l, K.q and K.s. In the above example, if z = c-A'*y and mat(z(21:36),4) is not symmetric/Hermitian, then positive semidefiniteness refers to the symmetric/Hermitian part, i.e., s + s' is PSD.

If the model contains COMPLEX data, then you may provide a list K.ycomplex with the following meaning:

   y(i) is complex if ismember(i,K.ycomplex),
   y(i) is real otherwise.

The equality constraints in the primal are then as follows:

   A(i,:)*x = b(i)        if imag(b(i)) ~= 0 or ismember(i,K.ycomplex),
   real(A(i,:)*x) = b(i)  otherwise.

Thus, equality constraints on both the real and imaginary parts of A(i,:)*x should be listed in the field K.ycomplex.

You may use eigK(x,K) and eigK(c-A'*y,K) to check that x and c-A'*y are in the cone K.
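As an illustration of the block ordering above, the following sketch (hypothetical data, for illustration only) sets up a small problem with two nonnegative variables followed by one 3-dimensional Lorentz block:

```matlab
% Hypothetical example: n = 5 variables, with x(1:2) >= 0 (K.l = 2)
% and x(3) >= norm(x(4:5)) (one Lorentz block of dimension 3, K.q = 3).
K.l = 2;
K.q = 3;

A = [1 1 1 0 0;          % m = 2 equality constraints
     0 1 0 1 1];
b = [2; 1];
c = [1; 2; 1; 0; 0];

[x,y,info] = sedumi(A,b,c,K);   % pars omitted: default options are used
```

Since K.f is absent, the K.l components come first and the Lorentz block immediately follows them, as described in items (2) and (3).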
3. Options

Most of the options in SeDuMi can be overridden by specifying them explicitly in the structure pars. This structure can have the following fields (the default value is in brackets).

3.1. General options.

pars.fid (1): The output of SeDuMi is written to the file with handle fid. If fid=0, then SeDuMi runs quietly, i.e., there is no screen output. Use fopen to assign a handle to a file.

pars.maxiter (150): The maximum number of iterations. In most cases SeDuMi stops after 20-30 iterations, and it almost never needs more than 100 iterations.
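For instance, to redirect the log to a file and lower the iteration cap, one could proceed as in the following sketch (the file name sedumi.log is arbitrary; A, b, c and K are assumed to be defined already):

```matlab
fid = fopen('sedumi.log','w');  % obtain a file handle for the log
pars.fid = fid;                 % write SeDuMi output to that file
pars.maxiter = 100;             % rarely reached in practice
[x,y,info] = sedumi(A,b,c,K,pars);
fclose(fid);
```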

pars.eps (10^-8): Desired accuracy. If this accuracy is achieved, then info.numerr is set to 0. If pars.eps=0, then SeDuMi runs as long as it can make progress.

pars.bigeps (10^-3): If the desired accuracy pars.eps cannot be achieved, the solution is tagged as info.numerr=1 if it is accurate to pars.bigeps; otherwise it yields info.numerr=2.

pars.alg (2): If pars.alg=0, then a first-order wide-region algorithm is used; this is not recommended. If pars.alg=1, then SeDuMi uses the centering-predictor-corrector algorithm with v-linearization. If pars.alg=2, then xz-linearization is used in the corrector, similar to Mehrotra's algorithm. The wide-region centering predictor-corrector algorithm was proposed in Chapter 7 of [2].

pars.theta (0.25), pars.beta (0.5): These are the wide-region and neighborhood parameters. Valid choices are 0 < pars.theta <= 1 and 0 < pars.beta < 1. Setting pars.theta=1 would restrict the iterates to follow the central path in an N_2(beta)-neighbourhood. In practice, SeDuMi rounds pars.beta to be between 0.1 and 0.9 and pars.theta to be between 0.01 and 1.

pars.stepdif (2): If pars.stepdif=0, then primal-dual step differentiation is always disabled, while if pars.stepdif=1, then it is always enabled. Using step differentiation helps if the problem is ill-conditioned or the algorithm is close to convergence. Setting pars.stepdif=2 uses an adaptive scheme: SeDuMi starts with step differentiation disabled and enables it after 20 iterations, or if feasratio is between 0.9 and 1.1, or if more than one conjugate gradient iteration is needed.

pars.w ([1, 1]): If step differentiation is enabled, SeDuMi weights the relative primal, dual and gap residuals as w(1):w(2):1 in order to find the optimal step differentiation. These numbers should be greater than 10^-8.

pars.numtol (5*10^-7), pars.bignumtol (0.9), pars.numlvl (0): These options control some numerical behaviour; do not tamper with them.

3.2. Options controlling the preprocessing.
pars.denf (10), pars.denq (0.75): Parameters used in deciding which columns are dense and which are sparse.

pars.free (1): Specifies how SeDuMi handles free variables. If pars.free=0, then free variables are converted into the difference of two nonnegative variables. This method is numerically unstable and unnecessarily increases the number of variables. If pars.free=1, then free variables are placed inside a Lorentz cone whose first variable is unconstrained. This method is more stable and was also suggested by Jos Sturm in [4]. Note, however, that this makes the problem nonlinear, and it might take longer to solve. If you experience such behaviour, change this setting to 0.

pars.sdp (1): Enables the SDP preprocessing routines, such as detecting diagonal SDP blocks.

3.3. Options controlling the Cholesky factorization.

These options are subfields of pars.chol.

pars.chol.skip (1): Enables skipping bad pivots.
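If the Lorentz-cone embedding of free variables causes trouble, the split-variable fallback described above can be requested as in the following sketch (A, b and c are assumed to be given):

```matlab
K.f = 3;          % the first three primal components are free
pars.free = 0;    % split each free variable into a difference of nonnegatives
[x,y,info] = sedumi(A,b,c,K,pars);
```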

pars.chol.canceltol (10^-12): Relative tolerance for detecting cancellation during the Cholesky factorization.

pars.chol.abstol (10^-20): Skips pivots falling below this value.

pars.chol.maxuden (500): Pivots in the dense-column factorization so that the factors satisfy max(abs(Lk)) <= maxuden.

3.4. Options controlling the Conjugate Gradient refinement.

These are parameters for the preconditioned conjugate gradient (CG) method, which is only used if the results from the Cholesky factorization are inaccurate.

pars.cg.maxiter (49): Maximum number of CG iterations (per solve). Theoretically, add + 2*skip iterations are needed, where add and skip are the numbers of added and skipped pivots in the Cholesky factorization.

pars.cg.refine (1): Number of refinement loops that are allowed. The maximum number of actual CG steps is thus 1+(1+pars.cg.refine)*pars.cg.maxiter.

pars.cg.stagtol (5*10^-14): Terminates if the relative function progress is less than this number.

pars.cg.restol (5*10^-3): Terminates if the residual is a pars.cg.restol fraction of the duality gap. Should be smaller than 1 in order to make progress.

pars.cg.qprec (1): If pars.cg.qprec=1, CG iterates are stored in quadruple precision.

3.5. Debugging options.

pars.vplot (0): If this field is 1, then SeDuMi produces a plot for research purposes.

pars.stopat (-1): SeDuMi enters debugging mode at the iterations specified in this vector.

pars.errors (0): If pars.errors=1, then SeDuMi outputs the error measures outlined at the 7th DIMACS challenge. The errors are both displayed in the output and returned in info.err.

4. Output format

4.1. Description of the output fields.

When calling SeDuMi with three output variables ([x,y,info]=sedumi(...)), some useful information is returned in the structure info. It has the following fields:

info.cpusec: Total CPU (not wall clock) time in seconds spent in optimization.

info.timing: Detailed CPU timing information in seconds. The three numbers give the time spent in preprocessing, IPM iterations and postprocessing, respectively.
Although in most cases the IPM iterations take more than 90% of the time, in some cases their share can be as low as 50%.

info.feasratio: The final value of the feasibility indicator. This indicator converges to 1 for problems with a complementary solution, and to -1 for strongly infeasible problems. If info.feasratio is somewhere in between and the problem is not purely linear, then the problem may be nasty (e.g., the optimum is not attained). If the problem is linear, the reason must lie in numerical problems: try to rescale the problem.
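A typical post-solve check, combining info.feasratio with the status fields described below, might look like the following sketch (A, b, c and K are assumed to be given):

```matlab
[x,y,info] = sedumi(A,b,c,K);
if info.pinf==0 && info.dinf==0 && info.numerr==0
    fprintf('Solved to accuracy pars.eps in %.2f CPU seconds.\n', info.cpusec);
elseif info.pinf==1
    % y is a Farkas certificate of primal infeasibility
    fprintf('Primal infeasible; b''*y = %g > 0.\n', b'*y);
else
    fprintf('Check accuracy: numerr = %d, feasratio = %g.\n', ...
            info.numerr, info.feasratio);
end
```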

info.pinf, info.dinf: If info.pinf=1, then there cannot be an x >= 0 with Ax = b, and this is certified by y, viz. b'*y > 0 and A'*y <= 0; thus y is a Farkas solution. On the other hand, if info.dinf=1, then there cannot be a y such that c - A'*y >= 0, and this is certified by x, viz. c'*x < 0, Ax = 0, x >= 0; thus x is a Farkas solution. If both info.pinf and info.dinf are 0, then the solution given by SeDuMi is both near primal and near dual feasible.

info.numerr: Indicates how accurate the solution is. If info.numerr=0, then the desired accuracy (specified by pars.eps) was achieved. If info.numerr=1, then only reduced accuracy (given by pars.bigeps) was achieved. Finally, info.numerr=2 indicates complete failure due to numerical problems.

info.err: If the option pars.errors is set to 1, then SeDuMi returns the error measures described in [1].

References

1. H. D. Mittelmann, An independent benchmarking of SDP and SOCP solvers, Mathematical Programming 95 (2003), 407-430.
2. Jos F. Sturm, Primal-dual interior point approach to semidefinite programming, Ph.D. thesis, Tilburg University, The Netherlands, 1997. TIR 156, Thesis Publishers, Amsterdam.
3. Jos F. Sturm, Using SeDuMi 1.02, a Matlab toolbox for optimization over symmetric cones, Optimization Methods and Software 11-12 (1999), 625-653.
4. Jos F. Sturm, Implementation of interior point methods for mixed semidefinite and second order cone optimization problems, EconPapers 73, Tilburg University, Center for Economic Research, August 2002.

McMaster University, Advanced Optimization Lab
E-mail address: poliki@mcmaster.ca