A one-phase interior point method for nonconvex optimization


Oliver Hinder, Yinyu Ye
Department of Management Science and Engineering, Stanford University
April 28, 2018

Sections
1. Motivation
2. Why a one-phase method is desirable
3. Our one-phase IPM
4. Theory
5. Numerical results

Motivation

1. Finding KKT points is an important problem.
2. Detecting infeasibility is important.

The problem
Find a KKT point using an interior point method:
$$\min_{x \in \mathbb{R}^d} f(x) \quad \text{subject to} \quad a(x) \le 0,$$
where $f : \mathbb{R}^d \to \mathbb{R}$ and $a : \mathbb{R}^d \to \mathbb{R}^m$ are differentiable.
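
As a concrete toy instance of this problem class (our own illustration, not an example from the talk), a nonconvex objective over a single smooth constraint might look like:

```python
import numpy as np

# Hypothetical toy instance of min f(x) s.t. a(x) <= 0 (illustration only).
def f(x):
    # Nonconvex (saddle-shaped) objective.
    return x[0] ** 2 - x[1] ** 2

def a(x):
    # One constraint a(x) <= 0: stay inside the unit disk.
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0])

x = np.array([0.3, 0.4])
print(f(x), a(x))  # x is feasible iff a(x) <= 0 componentwise
```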

Local solvers
- Second-order methods:
  - Active set methods [MINOS, SNOPT]. Use on: sparse problems with few degrees of freedom, or when warm-starting capabilities matter. Robust.
  - Interior point methods (IPMs) [IPOPT, KNITRO, LOQO]. Use on: huge sparse problems.
- First-order methods [LANCELOT]. Use on: well-conditioned problems or those with poor sparsity structure.

Real applications using local solvers
- Engineering design:
  - Orion heat shield design
  - High Speed Civil Transport (not commissioned)
  - 1995 America's Cup (sailboat design)
- Trajectory optimization:
  - International Space Station maneuvered using zero fuel
- Network operation:
  - drinking water
  - electricity
  - gas

KKT points seem to be good solutions.

Drinking water network operation
- Objective: minimize operating cost.
- Decision variables: pump levels.
- Constraints: meet demands and satisfy physics.

Comments:
- The constraints are nonconvex!
- The problem is huge-scale (think of the number of pipes in a city).
- Nice sparsity structure, so use a second-order method.
- Users experiment with closing pipes, producing lots of problem instances that might be infeasible!
- Similar issues arise for optimal power flow.

Takeaway: nonconvex optimization and infeasibility detection are important.

Why a one-phase method?

1. The best IPMs for linear programming are one-phase methods.
2. It is challenging to develop a one-phase nonconvex IPM.

Problem we wish to solve
If the problem is feasible, find an optimum:
$$\min_{x \in \mathbb{R}^d} f(x) \quad \text{subject to} \quad a(x) \le 0.$$
If it is infeasible, certify infeasibility by (locally) minimizing the violation:
$$\min_{x \in \mathbb{R}^d} \max_i a_i(x).$$

Abridged history of IPMs for linear programming
1. Primal-dual feasible start [Monteiro and Adler, 1989]
2. Phase-one, phase-two methods
3. Penalty methods [McShane, Monma, and Shanno, 1989]
4. One-phase methods:
   - infeasible start IPM [Lustig, 1990; Mehrotra, 1992]
   - homogeneous algorithm [Ye, Todd, and Mizuno, 1994]

Extended to convex [Andersen and Ye, 1999] and conic optimization [Sturm, 2002; Andersen, Roos, and Terlaky, 2003].

To generalize IPMs to nonconvex optimization, should we start from:
1. the homogeneous algorithm, or
2. Lustig's IPM?


Failure of infeasible start IPMs for nonconvex optimization
Wächter and Biegler [2000] exhibit a simple two-constraint nonconvex example on which an infeasible start IPM [Lustig, 1990] converges to an infeasible point that is not a (local) certificate of infeasibility.

Solutions to this problem
- Two phases [IPOPT, Wächter and Biegler, 2006]:
  - a main phase and a feasibility restoration phase;
  - poor performance on infeasible problems.
- Penalty (or big-M) methods [Chen and Goldfarb, 2006; Curtis, 2012]:
  - big penalties mean slow convergence; small penalties mean failure!
- Compute two directions [KNITRO, Byrd, Nocedal, and Waltz, 2006]:
  - one direction minimizes the constraint violation, the other improves optimality;
  - solves two different linear systems (with two different matrices) at each step.

None of these methods reduce to a successful LP IPM!
Why does Lustig's IPM fail?

What goes wrong with typical IPMs?
IPMs generate a sequence of approximate local minimizers of the shifted log barrier:
$$\min_x\; f(x) - \mu \sum_i \log\left(r_i - a_i(x)\right),$$
where the primal residual $r$ is implicitly defined ($r = a(x) + s$). Consider an example with two constraints: if the constraints are shifted inwards and outwards at the same time ($r_1 > 0$, $r_2 < 0$), the Newton step fails. Our remedy: always shift the constraints outwards ($r_1 > 0$, $r_2 > 0$).
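
As a minimal sketch of this sub-problem objective (our own illustration; the function names are assumptions, not the authors' code), note that the shifted barrier is only defined while every shifted slack $r_i - a_i(x)$ stays positive:

```python
import numpy as np

def shifted_barrier(f, a, x, mu, r):
    # psi(x) = f(x) - mu * sum_i log(r_i - a_i(x)), defined only when r > a(x).
    slack = r - a(x)
    if np.any(slack <= 0):
        return np.inf  # outside the shifted feasible region
    return f(x) - mu * np.sum(np.log(slack))
```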

Our one-phase IPM

Pictorial representation of our algorithm
Recall the shifted log barrier
$$\min_x\; f(x) - \mu \sum_i \log\left(r_i - a_i(x)\right), \quad \text{with } r_i = \mu > 0:$$
the shift and the barrier parameter have the same size!

At each iteration:
    if the KKT conditions of the shifted log barrier hold (approximately):
        take an aggressive step: reduce $\mu$
    else:
        take a stabilization step: minimize the shifted log barrier
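
Schematically, the outer loop looks like the sketch below (our own simplification: `step` and `barrier_kkt_residual` are hypothetical helpers, and the acceptance test and the factor 0.5 stand in for the paper's actual rules):

```python
def one_phase_ipm(step, barrier_kkt_residual, state, mu, max_iter=3000):
    """Schematic outer loop. `step(state, mu, sigma)` takes one damped Newton
    step (sigma is the mu-reduction factor); `barrier_kkt_residual(state, mu)`
    measures how well the shifted log barrier KKT conditions hold."""
    for _ in range(max_iter):
        if barrier_kkt_residual(state, mu) <= mu:
            # Aggressive step: reduce mu and the constraint shift together.
            state, mu = step(state, mu, sigma=0.5), 0.5 * mu
        else:
            # Stabilization step: re-minimize the shifted barrier at fixed mu.
            state = step(state, mu, sigma=1.0)
    return state, mu
```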

Overview of key algorithm properties
1. Differences from the nonconvex programming literature:
   (1) reduce the constraint violation and $\mu$ at the same rate;
   (2) update the slack variables nonlinearly;
   (3) initialize the slack variables carefully.
2. For LP, the method reduces to an IPM in the style of Lustig [1990] and Mehrotra [1992].

Primal-dual direction computation
The KKT system of the log barrier plus Newton's method gives the IPM direction. For example, LOQO [Vanderbei, 1999] solves:
$$
\begin{bmatrix}
\nabla^2_{xx} L(x^k, y^k) + \delta_k I & \nabla a(x^k)^T & 0 \\
\nabla a(x^k) & 0 & I \\
0 & S^k & Y^k
\end{bmatrix}
\begin{bmatrix} d_x^k \\ d_y^k \\ d_s^k \end{bmatrix}
=
-\begin{bmatrix}
\nabla f(x^k) + \nabla a(x^k)^T y^k \\
a(x^k) + s^k \\
Y^k s^k - \sigma_k \mu^k e
\end{bmatrix},
$$
where $\delta_k \ge 0$ is a regularization parameter and $\sigma_k \in [0, 1]$ is the reduction factor for $\mu^k$.

Our primal-dual direction changes only the primal-feasibility right-hand side:
$$
\begin{bmatrix}
\nabla^2_{xx} L(x^k, y^k) + \delta_k I & \nabla a(x^k)^T & 0 \\
\nabla a(x^k) & 0 & I \\
0 & S^k & Y^k
\end{bmatrix}
\begin{bmatrix} d_x^k \\ d_y^k \\ d_s^k \end{bmatrix}
=
-\begin{bmatrix}
\nabla f(x^k) + \nabla a(x^k)^T y^k \\
(1 - \sigma_k)(a(x^k) + s^k) \\
Y^k s^k - \sigma_k \mu^k e
\end{bmatrix},
$$
where $\sigma_k \in [0, 1]$ is the reduction factor for both $\mu^k$ and the constraint violation (property (1) above). This is a common strategy in LP! We set $\sigma_k = 1$ for stabilization steps and $\sigma_k < 1$ for aggressive steps.
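
A dense linear-algebra sketch of assembling and solving this system (our own illustration, not the authors' code; a practical solver would instead factor a sparse symmetric reduction):

```python
import numpy as np

def one_phase_direction(hess_L, jac_a, grad_f, a_x, s, y, mu, sigma, delta=0.0):
    """Solve the 3x3 block system for (dx, dy, ds). hess_L is the (d,d)
    Hessian of the Lagrangian; jac_a is the (m,d) Jacobian of a."""
    d, m = hess_L.shape[0], jac_a.shape[0]
    S, Y = np.diag(s), np.diag(y)
    K = np.block([
        [hess_L + delta * np.eye(d), jac_a.T,          np.zeros((d, m))],
        [jac_a,                      np.zeros((m, m)), np.eye(m)],
        [np.zeros((m, d)),           S,                Y],
    ])
    rhs = -np.concatenate([
        grad_f + jac_a.T @ y,             # dual residual
        (1 - sigma) * (a_x + s),          # scaled primal residual (our change)
        Y @ s - sigma * mu * np.ones(m),  # complementarity residual
    ])
    sol = np.linalg.solve(K, rhs)
    return sol[:d], sol[d:d + m], sol[d + m:]
```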

Iterate update
For step size $\alpha_k \in [0, 1]$, update the iterates as follows:
$$\mu^{k+1} \leftarrow \mu^k + \alpha_k d_\mu^k, \qquad x^{k+1} \leftarrow x^k + \alpha_k d_x^k, \qquad y^{k+1} \leftarrow y^k + \alpha_k d_y^k.$$
Instead of the linear slack update $s^{k+1} \leftarrow s^k + \alpha_k d_s^k$, we use the nonlinear update
$$s^{k+1} \leftarrow \frac{\mu^k + \alpha_k d_\mu^k}{\mu^k}\left(a(x^k) + s^k\right) - a(x^{k+1}), \tag{2}$$
which reduces to $s^k + \alpha_k d_s^k$ when $a$ is linear. If we start with $a(x^1) + s^1 = \mu^1 e$, then
$$a(x^k) + s^k = \mu^k e \tag{3}$$
holds for every $k$. Condition (3) is implicitly satisfied in the Mehrotra [1992] IPM for linear programming.
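
A small sketch (our illustration, with a hypothetical calling convention) showing that update (2) rescales $a(x) + s$ by exactly $\mu^{k+1}/\mu^k$ even when $a$ is nonlinear, which is what propagates invariant (3):

```python
import numpy as np

def update_iterates(x, y, s, mu, d, alpha, a):
    """d holds the directions dx, dy, dmu; a is the constraint function."""
    mu_new = mu + alpha * d["dmu"]
    x_new = x + alpha * d["dx"]
    y_new = y + alpha * d["dy"]
    # Nonlinear slack update (2): exact in the constraints, not first-order.
    s_new = (mu_new / mu) * (a(x) + s) - a(x_new)
    # If a(x) + s == mu * e held before, then a(x_new) + s_new == mu_new * e.
    assert np.allclose(a(x_new) + s_new, (mu_new / mu) * (a(x) + s))
    return x_new, y_new, s_new, mu_new
```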

Recap: our one-phase IPM
1. Differences from the nonconvex programming literature:
   (1) reduce the constraint violation and $\mu$ at the same rate;
   (2) update the slack variables nonlinearly;
   (3) initialize the slack variables carefully.
2. For LP, the method reduces to an IPM in the style of Lustig [1990] and Mehrotra [1992].

Theory

1. The algorithm terminates at a KKT point (for optimality, infeasibility, or unboundedness).
2. The dual multipliers are bounded under general conditions.

The algorithm (eventually) terminates
Assume that the objective and constraints are differentiable.

Theorem ([Hinder and Ye, 2018]). After a finite number of iterations, the one-phase algorithm terminates with either:
- an approximate (scaled) KKT solution;
- an approximate KKT solution to the problem of minimizing the (infinity norm) feasibility violation; or
- an approximate certificate that the (shifted) feasible region is unbounded.

Proof idea
Approximately solve the shifted log barrier sub-problem, obtaining a point $(x, s, y, \mu)$. Let $C$ be a large constant. At this point there are two possibilities:
1. $s \ge \mu / C$ componentwise: we are not too close to the boundary, so we can reduce the constraint violation.
2. There exists $i$ such that $s_i \le \mu / C$: since $s_i y_i \approx \mu$, we get $y_i \gtrsim C$, and the (scaled) dual variables give an approximate KKT solution to minimizing the constraint violation.

Critically, we maintain $a(x^k) + s^k \ge 0$. In the example of Wächter and Biegler [2000], IPMs go to a point with $a_1(x) + s_1 < 0$ and $a_2(x) + s_2 > 0$.
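
Spelling out the complementarity step in case 2 (our own one-line elaboration): combining $s_i y_i \approx \mu$ with $s_i \le \mu / C$ gives
$$y_i \approx \frac{\mu}{s_i} \ \ge\ \frac{\mu}{\mu / C} \ =\ C.$$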

Boundedness of dual multipliers
The algorithm generates a sequence of dual multipliers $y^k$. If $\|y^k\| \to \infty$, then numerical issues!

Theorem ([Haeser, Hinder, and Ye, 2017]). Assume that
- the primal iterates are converging to $(x^*, s^*)$;
- $(x^*, s^*)$ is a KKT point;
- sufficient conditions for local optimality hold at $(x^*, s^*)$ (e.g. convexity).
Then a subsequence of the dual multipliers is bounded.

- A similar result is already known for linear programming [Mizuno, Todd, and Ye, 1995].
- For IPOPT, $\|y^k\| \to \infty$ [Haeser, Hinder, and Ye, 2017].

Numerical results

1. More robust.
2. Better performance on infeasible problems.
3. Dual multipliers are better behaved.

Test problems
- CUTEst test set. We selected problems with:
  a. at least 100 variables and 100 constraints;
  b. at most 10,000 total variables and constraints.
  This gives 238 problems.
- We compare against IPOPT with similar termination criteria (tolerance $10^{-6}$, max iterations 3000, max time 1 hour).

Comparison on problems where solver outputs are different

Iteration comparison
The x-axis plots the ratio (iteration count of solver) / (iteration count of fastest solver).

Infeasible problems: perturbed CUTEst (94 problems) and NETLIB (28 problems)

Selected problems
Legend: entries give # iterations (time in seconds); a marked entry means the solver declared the problem infeasible.
- HVYCRASH: one-phase (8.6 s); IPOPT 333 (8.2 s)
- STEERING: one-phase (1.7 s); IPOPT 15 (0.2 s)
- READING9 (10,001 variables): one-phase (7.8 s); IPOPT 130 (6.7 s)
- ROBOTARM: one-phase (64.0 s); IPOPT: ERROR
- CATENARY: one-phase (102.2 s); IPOPT 2581 (909.4 s)
- WATER SUPPLY (15,935 variables): one-phase (175.1 s); IPOPT 1782 (369.9 s)
- ECON: one-phase (563.2 s); IPOPT: out of memory

Code and results at

Boundedness of dual multipliers on the NETLIB LP test set

Summary
- One-phase IPM for nonconvex optimization:
  - builds on successful LP implementations [Lustig, 1990; Mehrotra, 1992];
  - no need for a penalty or two-phase method;
  - circumvents the example of Wächter and Biegler [2000].
- Comparison against IPOPT on the CUTEst test set:
  - better performance on infeasible problems;
  - more reliable;
  - better-behaved dual multipliers.
- Code and test results can be found at

Papers
1. Haeser, G., Hinder, O., and Ye, Y. On the behavior of Lagrange multipliers in convex and non-convex infeasible interior point methods. arXiv preprint. Under review at Mathematical Programming.
2. Hinder, O., and Ye, Y. A one-phase interior point method for nonconvex optimization. arXiv preprint. Under review at Mathematical Programming.

Additional slides
1. References
2. Why are local optima sometimes global?
3. First-order vs. second-order methods
4. AGD for strongly convex functions
5. "Convex until proven guilty" numerical results
6. Low-tolerance results
7. Runtimes by element

References I
- Erling D. Andersen and Yinyu Ye. On a homogeneous algorithm for the monotone complementarity problem. Mathematical Programming, 84(2), 1999.
- Erling D. Andersen, Cornelis Roos, and Tamás Terlaky. On implementing a primal-dual interior-point method for conic quadratic optimization. Mathematical Programming, 95(2), 2003.
- Richard H. Byrd, Jorge Nocedal, and Richard A. Waltz. KNITRO: an integrated package for nonlinear optimization. In Gianni Di Pillo and M. Roma, editors, Large-Scale Nonlinear Optimization. Springer, 2006.
- Lifeng Chen and Donald Goldfarb. Interior-point $\ell_2$-penalty methods for nonlinear programming with strong global convergence properties. Mathematical Programming, 108(1):1–36, 2006.
- Frank E. Curtis. A penalty-interior-point algorithm for nonlinear constrained optimization. Mathematical Programming Computation, pages 1–29, 2012.
- Gabriel Haeser, Oliver Hinder, and Yinyu Ye. On the behavior of Lagrange multipliers in convex and non-convex infeasible interior point methods, 2017.
- Oliver Hinder and Yinyu Ye. A one-phase interior point method for non-convex optimization, 2018.

References II
- Irvin J. Lustig. Feasibility issues in a primal-dual interior-point method for linear programming. Mathematical Programming, 49(1), 1990.
- Kevin A. McShane, Clyde L. Monma, and David Shanno. An implementation of a primal-dual interior point method for linear programming. ORSA Journal on Computing, 1(2):70–83, 1989.
- Sanjay Mehrotra. On the implementation of a primal-dual interior point method. SIAM Journal on Optimization, 2(4), 1992.
- Shinji Mizuno, Michael J. Todd, and Yinyu Ye. A surface of analytic centers and primal-dual infeasible-interior-point algorithms for linear programming. Mathematics of Operations Research, 20(1), 1995.
- Renato D. C. Monteiro and Ilan Adler. Interior path following primal-dual algorithms. Part I: linear programming. Mathematical Programming, 44(1):27–41, 1989.
- Jos F. Sturm. Implementation of interior point methods for mixed semidefinite and second order cone optimization problems. Optimization Methods and Software, 17(6), 2002.

References III
- Robert J. Vanderbei. LOQO user's manual. Optimization Methods and Software, 11(1–4), 1999.
- Andreas Wächter and Lorenz T. Biegler. Failure of global convergence for a class of interior point methods for nonlinear programming. Mathematical Programming, 88(3), 2000.
- Andreas Wächter and Lorenz T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106(1):25–57, 2006.
- Yinyu Ye, Michael J. Todd, and Shinji Mizuno. An $O(\sqrt{n}L)$-iteration homogeneous and self-dual linear programming algorithm. Mathematics of Operations Research, 19(1):53–67, 1994.
