An augmented Lagrangian method for equality constrained optimization with fast infeasibility detection


1 An augmented Lagrangian method for equality constrained optimization with fast infeasibility detection
Paul Armand (PhD supervisor), Ngoc Nguyen Tran (PhD student)
Institut de Recherche XLIM, Université de Limoges
Journées annuelles 2017 des GdR MOA et MIA, Institut de Mathématiques de Bordeaux, October 2017

2 Outline
1. Introduction
2. Algorithm
3. Global convergence analysis
4. Asymptotic analysis
5. Numerical experiments
6. Future work

3 Motivation
We consider the equality constrained problem

    min f(x)  s.t.  c(x) = 0,    (P)

where f: ℝⁿ → ℝ and c: ℝⁿ → ℝᵐ are smooth functions. Optimization algorithms try to find a local solution of (P). When an algorithm cannot find a feasible point, it should return a stationary point of the feasibility problem

    min_{x ∈ ℝⁿ} ‖c(x)‖,

so as to avoid long sequences of unproductive iterations. Rapid infeasibility detection plays a central role in branch-and-bound methods for mixed-integer nonlinear programming, in parametric studies of optimization models, and in the search for global minimizers.

4 Introduction
Instead of solving

    min f(x)  s.t.  c(x) = 0,    (P)

we consider the problem

    min ρf(x)  s.t.  c(x) = 0,    (P_ρ)

where ρ > 0 is called the feasibility parameter. Any feasible solution is optimal for (P_0).

5 Optimality conditions
Augmented Lagrangian associated with (P_ρ):

    L_{ρ,σ}(x, λ) := ρf(x) + λᵀc(x) + (1/(2σ))‖c(x)‖²,

where σ > 0 is a quadratic penalty parameter and λ ∈ ℝᵐ is the vector of Lagrange multipliers.
If (x*, λ*) is a solution of (P_ρ), then x* is a strict local minimum of L_{ρ,σ}(·, λ*) for σ small enough.
First-order optimality conditions for minimizing L_{ρ,σ}(·, λ), in primal-dual form:

    Φ(w, λ, ρ, σ) := ( ρg(x) + A(x)y, c(x) + σ(λ − y) ) = 0,

where g := ∇f, A := ∇c and w := (x, y).
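To make the notation concrete, here is a minimal numpy sketch (ours, not the authors' code) that evaluates the primal-dual residual Φ; the names grad_f, c and jac_c are our assumptions for user-supplied callables:

```python
import numpy as np

def residual_Phi(x, y, lam, rho, sigma, grad_f, c, jac_c):
    """First-order residual Phi(w, lam, rho, sigma) with w = (x, y).

    grad_f(x): gradient g(x) of f; c(x): constraint values;
    jac_c(x): m-by-n Jacobian of c, so A(x) = jac_c(x).T.
    """
    A = jac_c(x).T                      # n x m, columns are constraint gradients
    r_dual = rho * grad_f(x) + A @ y    # rho g(x) + A(x) y
    r_feas = c(x) + sigma * (lam - y)   # c(x) + sigma (lam - y)
    return np.concatenate([r_dual, r_feas])
```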

6 State of the art
The augmented Lagrangian (AL) method was proposed independently by Hestenes (1969) and Powell (1969). Main idea: solve a sequence of unconstrained problems with an update strategy for the parameters.
Implementations: LANCELOT (1992), ALGENCAN (2007; 2008) and SPDOPT (2017a; 2017b).
Some AL algorithms with infeasibility detection capabilities: Martínez and Prudente (2012), Birgin et al. (2014), Birgin et al. (2015), Gonçalves et al. (2015), Armand and Omheni (2017a).
Fast local convergence in the infeasible case has not been established for these AL algorithms. Our goal is to propose a new algorithm that converges rapidly in the infeasible case.

7 Main properties of our new algorithm
Newton-type method: solve the linear system

    J_{ρ_k,σ_k,θ_k}(w_k)(w_k⁺ − w_k) = −Φ(w_k, λ_{k+1}, ρ_{k+1}, σ_{k+1}),    (1)

where

    J_{ρ,σ,θ}(w) = [ ∇²_{xx} L_{ρ,σ}(w) + θI    A(x) ]
                   [ A(x)ᵀ                     −σI  ]

and Φ(w, λ, ρ, σ) := ( ρg(x) + A(x)y, c(x) + σ(λ − y) ).
If the progress towards feasibility is not sufficient, the algorithm progressively switches to the solution of the feasibility problem

    min_{x ∈ ℝⁿ} (1/2)‖c(x)‖²,

by driving ρ and λ to zero.
Dynamic update of ρ to get fast convergence to an infeasible stationary point.
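The step computation can be sketched as follows. This is our own dense illustration: it checks the inertia with an eigenvalue decomposition, whereas a production code would read the inertia off a symmetric indefinite (LDLᵀ) factorization:

```python
import numpy as np

def newton_step(H, A, Phi, sigma, theta0=1e-4, max_tries=30):
    """Solve J dw = -Phi with J = [[H + theta I, A], [A.T, -sigma I]],
    increasing theta until the inertia of J is (n, m, 0).
    H = Hessian of L_{rho,sigma} in x; A = n x m matrix of constraint gradients."""
    n, m = A.shape
    theta = theta0
    for _ in range(max_tries):
        J = np.block([[H + theta * np.eye(n), A],
                      [A.T, -sigma * np.eye(m)]])
        eig = np.linalg.eigvalsh(J)     # J is symmetric
        if np.sum(eig > 0) == n and np.sum(eig < 0) == m:
            return np.linalg.solve(J, -Phi), theta
        theta *= 10.0                   # convexify the (1,1) block and retry
    raise RuntimeError("could not obtain inertia (n, m, 0)")
```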

8 Algorithm: outer iterations
Choose ε > 0, a ∈ (0, 1), τ ∈ (0, 1). Set k = 0, i_0 = 0, F = 1.
1. If ‖c(x_k)‖ ≤ ε, set F = 0.
2. Choose ζ_k > 0 such that {ζ_k} → 0. If ‖c(x_k)‖ ≤ a‖c(x_{i_k})‖ + ζ_k, go to Step 4.
3. If F = 1, choose 0 < ρ_{k+1} ≤ τρ_k and set σ_{k+1} = σ_k; otherwise choose 0 < σ_{k+1} ≤ τσ_k and set ρ_{k+1} = ρ_k. Set λ_{k+1} = (ρ_{k+1}/ρ_k)λ_k, i_{k+1} = i_k and go to Step 5.
4. Choose 0 < σ_{k+1} ≤ σ_k. Set λ_{k+1} = y_k, ρ_{k+1} = ρ_k, i_{k+1} = k.
5. Choose θ_k > 0 such that Inertia(J_{ρ_k,σ_k,θ_k}(w_k)) = (n, m, 0) and compute w_k⁺ by solving the linear system (1).
6. Choose ε_k > 0 such that {ε_k} → 0. If ‖Φ(w_k⁺, λ_{k+1}, ρ_{k+1}, σ_{k+1})‖ ≤ ε_k, set w_{k+1} = w_k⁺. Otherwise, apply a sequence of inner iterations to find w_{k+1} such that ‖Φ(w_{k+1}, λ_{k+1}, ρ_{k+1}, σ_{k+1})‖ ≤ ε_k.
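The parameter-update logic of Steps 1-4 can be transliterated roughly as below. This is our reading of the slide, with illustrative placeholder values for ε, a, τ and ζ_k, and with σ kept unchanged in Step 4 (the step only requires 0 < σ_{k+1} ≤ σ_k):

```python
def update_parameters(norm_c, norm_c_ref, y, lam, rho, sigma, state, k,
                      eps=1e-8, a=0.9, tau=0.1, zeta=0.0):
    """Steps 1-4 of the outer iteration.

    state carries the flag F and the reference index i_k;
    norm_c_ref = ||c(x_{i_k})||. Returns updated (rho, sigma, lam)."""
    if norm_c <= eps:
        state["F"] = 0                   # Step 1: near feasibility was observed
    if norm_c <= a * norm_c_ref + zeta:  # Step 2: sufficient feasibility progress
        lam = y.copy()                   # Step 4: multiplier update
        state["i"] = k                   # new reference iterate i_{k+1} = k
    elif state["F"] == 1:                # Step 3, infeasibility suspected:
        lam = tau * lam                  # lam_{k+1} = (rho_{k+1}/rho_k) lam_k
        rho = tau * rho                  # drive rho (and lam) to zero
    else:                                # Step 3, once nearly feasible:
        sigma = tau * sigma              # tighten the quadratic penalty instead
    return rho, sigma, lam
```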

9 Algorithm: inner iterations
The aim of the inner iterations is to minimize the merit function

    φ(w) = ρf(x) + λᵀc(x) + (1/(2σ))‖c(x)‖² + (ν/(2σ))‖c(x) + σ(λ − y)‖²,

where σ = σ_{k+1}, ρ = ρ_{k+1} and λ = λ_{k+1} are fixed and ν > 0.
First-order optimality conditions of min φ(w): Φ(w, λ, ρ, σ) = 0.
Similar to (Armand and Omheni, 2017a, Algorithm 2).
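For illustration, a direct numpy evaluation of φ (our sketch; f and c are assumed user-supplied callables):

```python
import numpy as np

def merit(x, y, lam, rho, sigma, nu, f, c):
    """Primal-dual merit function phi(w) of the inner iterations, w = (x, y)."""
    cx = c(x)
    return (rho * f(x) + lam @ cx + cx @ cx / (2.0 * sigma)
            + nu / (2.0 * sigma) * np.sum((cx + sigma * (lam - y)) ** 2))
```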

10 Stationary points
Definition. x ∈ ℝⁿ is a Fritz-John (FJ) point of problem (P) if there exists (z, y) ∈ ℝ₊ × ℝᵐ with (z, y) ≠ (0, 0) such that

    zg(x) + A(x)y = 0  and  c(x) = 0.

If z > 0, x is a Karush-Kuhn-Tucker (KKT) point. If z = 0, x is a singular stationary point (LICQ does not hold).
x ∈ ℝⁿ is an infeasible stationary point of problem (P) if

    c(x) ≠ 0  and  A(x)c(x) = 0.
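These definitions suggest a rough numerical classification of a candidate point. The following sketch is ours and only heuristic; the tolerance, the least-squares multiplier estimate and the rank test are assumptions:

```python
import numpy as np

def classify_point(x, g, A, cx, tol=1e-8):
    """g = grad f(x); A = n x m matrix of constraint gradients; cx = c(x)."""
    if np.linalg.norm(cx) > tol:
        if np.linalg.norm(A @ cx) <= tol:
            return "infeasible stationary point"
        return "infeasible, not stationary for ||c||^2"
    # Feasible: estimate multipliers y minimizing ||g + A y||
    y, *_ = np.linalg.lstsq(A, -g, rcond=None)
    if np.linalg.norm(g + A @ y) <= tol:
        return "KKT point (z > 0)"
    # No multipliers exist, but dependent gradients give a FJ point with z = 0
    if np.linalg.matrix_rank(A) < A.shape[1]:
        return "singular stationary point (z = 0, LICQ fails)"
    return "feasible, not stationary"
```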

11 Global convergence analysis: outer iterations
Lemma. Assume that the algorithm generates an infinite sequence {w_k}. Let K ⊆ ℕ be the set of iteration indices for which the condition checking the feasibility in Step 2 is satisfied.
(i) If |K| = ∞, then the subsequence {c_k}_{k∈K} → 0 and {ρ_k} is eventually constant.
(ii) If |K| < ∞, then lim inf ‖c_k‖ > 0, {σ_k ρ_k} → 0 and {σ_k λ_k} → 0.

Theorem.¹ Assume that the algorithm generates an infinite sequence {w_k} such that the sequence {x_k} lies in a compact set.
(i) Any feasible limit point of {x_k} is a FJ point of (P).
(ii) If the sequence {x_k} has no feasible limit point, then any limit point is an infeasible stationary point of problem (P).

¹ Related results: Armand and Omheni (2017a), Armand and Omheni (2017b).

12 Example of an infeasible stationary point

    min x₁  s.t.  x₁² − x₂² + 1 = 0,  x₁² + x₂² + 1 = 0,

with x_0 = (5, 2) and y_0 = (1, 1). Here

    A(x)c(x) = ( 2x₁(2x₁² + 2), 2x₂(2x₂²) ),

and the iterates converge to the infeasible stationary point x* = (0, 0), with y* = (2035.9, …).
[Figure: convergence history of ‖∇L‖, ‖A(x)c(x)‖, ‖c(x)‖, σ and ρ over the iterations.]
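Taking the constraints as written above (our reconstruction, consistent with the gradient formula on the slide), a few lines of numpy confirm that x* = (0, 0) is infeasible yet stationary for the feasibility measure:

```python
import numpy as np

def c(x):   # the two incompatible constraints of the example
    return np.array([x[0]**2 - x[1]**2 + 1.0, x[0]**2 + x[1]**2 + 1.0])

def A(x):   # n x m matrix whose columns are the constraint gradients
    return np.array([[2*x[0], 2*x[0]], [-2*x[1], 2*x[1]]])

x_star = np.array([0.0, 0.0])
print(c(x_star))               # [1. 1.]  -> c(x*) != 0: infeasible
print(A(x_star) @ c(x_star))   # [0. 0.]  -> stationary for (1/2)||c||^2
```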

13 Asymptotic result near an infeasible stationary point
Assumptions:
1. {w_k} → w* = (x*, y*), where x* is an infeasible stationary point of problem (P).
2. ∇²f and ∇²c are Lipschitz continuous over an open neighborhood of x*.
3. The second-order sufficient optimality conditions hold at x* for the feasibility problem min_{x∈ℝⁿ} ‖c(x)‖².

These assumptions imply that there exists k_0 such that σ_k = σ_{k_0} for all k ≥ k_0, and that {ρ_k} → 0. For w := (x, y) ∈ ℝⁿ⁺ᵐ, we define F(w) = (A(x)y, c(x) − σ_{k_0}y).

Theorem (rate of convergence in the infeasible case). Assume that all the above assumptions hold and let t ∈ (0, 2]. If the feasibility parameter is chosen so that ρ_{k+1} = O(‖F(w_k)‖ᵗ), then

    ‖w_{k+1} − w*‖ = O(‖w_k − w*‖ᵗ).    (2)

In addition, if ρ_{k+1} = Θ(‖F(w_k)‖ᵗ) and ε_k = Ω(ρ_k^t′) for some 0 < t′ < t, then for k large enough there are no inner iterations, i.e., w_{k+1} = w_k⁺.
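The theorem suggests the following update rule for the feasibility parameter; the sketch below is ours, and the monotonicity safeguard is an assumption added for illustration:

```python
import numpy as np

def choose_rho(A_x, y, cx, sigma, t=2.0, rho_prev=None):
    """rho_{k+1} = Theta(||F(w_k)||^t) with F(w) = (A(x) y, c(x) - sigma y).
    t = 2 targets quadratic convergence to the infeasible stationary point."""
    F = np.concatenate([A_x @ y, cx - sigma * y])
    rho = np.linalg.norm(F) ** t
    if rho_prev is not None:
        rho = min(rho, rho_prev)   # keep {rho_k} nonincreasing (our safeguard)
    return rho
```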

14 Rate of convergence in the infeasible case
Same example as before:

    min x₁  s.t.  x₁² − x₂² + 1 = 0,  x₁² + x₂² + 1 = 0,

with x_0 = (5, 2), y_0 = (1, 1), x* = (0, 0), y* = (2035.9, …).
[Figure: ‖F(w_k)‖ over the iterations for several values of t, including t = 2, 1.6 and 1.2; larger t yields faster local convergence.]

15 Implementation of SPDOPT-ID
Stopping criteria:
- Optimal solution: ‖(g_k + A_k y_k/ρ_k, c_k)‖ ≤ 10⁻⁸.
- Infeasible stationary point: ‖(A_k y_k, c_k − σ_k y_k)‖ ≤ 10⁻⁸ and ρ_k sufficiently small.
Update condition (the feasibility test of Step 2):
- SPDOPT-ID: set η_k = ‖c(x_k)‖ and check ‖c(x_k)‖ ≤ d·max{η_{i_j} : (k − l − 1)₊ ≤ j ≤ k} + 10σ_kρ_k.
- SPDOPT-IDOld: set η_k = ‖c(x_k)‖ + 10σ_kρ_k and check ‖c(x_k)‖ ≤ d·max{η_{i_j} : (k − l − 1)₊ ≤ j ≤ k}.
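The two variants differ only in where the relaxation term 10σ_kρ_k enters. A sketch of both tests (our code; the value of d and the handling of the sliding window {η_{i_j}} are assumptions):

```python
def feasibility_test_ID(norm_c, etas, sigma, rho, d=0.9):
    """SPDOPT-ID: compare ||c(x_k)|| against the largest recorded eta,
    relaxed by 10*sigma_k*rho_k at test time."""
    return norm_c <= d * max(etas) + 10.0 * sigma * rho

def feasibility_test_IDOld(norm_c, etas, d=0.9):
    """SPDOPT-IDOld: the relaxation is folded into eta_k itself
    (eta_k = ||c(x_k)|| + 10*sigma_k*rho_k) when the window is recorded."""
    return norm_c <= d * max(etas)
```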

16 Results on standard problems
Results on 128 problems with only equality constraints from CUTEr³, with 2 ≤ n ≤ 20192 and m ≥ 1.
[Figure: performance profiles comparing the number of function evaluations and the number of gradient evaluations for SPDOPT-AL (Armand and Omheni (2017a)), SPDOPT-ID and SPDOPT-IDOld.]

³ Gould, Orban and Toint.

17 Results on infeasible problems
Results on 127 problems with only equality constraints from CUTEr, each made infeasible by adding an inconsistent constraint built from the first constraint c₁(x) = 0.
[Figure: performance profiles comparing the number of function evaluations and the number of gradient evaluations for SPDOPT-AL, SPDOPT-ID and SPDOPT-IDOld.]

18 Results on both sets
Results on the 255 problems from the standard and infeasible sets combined.
[Figure: performance profiles comparing the number of function evaluations and the number of gradient evaluations for SPDOPT-AL, SPDOPT-ID and SPDOPT-IDOld.]

19 Future work
We extend this work to the general nonlinear optimization problem

    min f(x)  s.t.  c(x) = 0,  x ≥ 0.

Specifically, we consider the unconstrained problem

    min_{x ∈ ℝⁿ, x > 0} φ_{λ,ρ,σ,μ}(x) := ρf(x) + λᵀc(x) + (1/(2σ))‖c(x)‖² − ρμ Σ_{i=1}^n ln[x]ᵢ,

where λ ∈ ℝᵐ is an estimate of the vector of Lagrange multipliers associated with the equality constraints, ρ > 0 is the feasibility parameter, σ > 0 is the quadratic penalty parameter and μ > 0 is the barrier parameter. With a suitable update of the parameters, the sequence of iterates converges superlinearly to an infeasible stationary point.
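A direct evaluation of this mixed barrier-augmented Lagrangian (our sketch; f and c are assumed user-supplied callables):

```python
import numpy as np

def phi_barrier(x, lam, rho, sigma, mu, f, c):
    """Barrier-augmented Lagrangian merit function, defined for x > 0 componentwise."""
    if np.any(x <= 0.0):
        return np.inf                    # outside the domain of the log barrier
    cx = c(x)
    return (rho * f(x) + lam @ cx + cx @ cx / (2.0 * sigma)
            - rho * mu * np.sum(np.log(x)))
```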

20 References I
R. Andreani, E. G. Birgin, J. M. Martínez, and M. L. Schuverdt. On augmented Lagrangian methods with general lower-level constraints. SIAM Journal on Optimization, 18(4), 2007.
R. Andreani, E. G. Birgin, J. M. Martínez, and M. L. Schuverdt. Augmented Lagrangian methods under the constant positive linear dependence constraint qualification. Mathematical Programming, 111(1):5-32, 2008.
Paul Armand and Riadh Omheni. A globally and quadratically convergent primal-dual augmented Lagrangian algorithm for equality constrained optimization. Optimization Methods and Software, 32(1):1-21, 2017a.
Paul Armand and Riadh Omheni. A mixed logarithmic barrier-augmented Lagrangian method for nonlinear optimization. Journal of Optimization Theory and Applications, 2017b.
E. G. Birgin, J. M. Martínez, and L. F. Prudente. Augmented Lagrangians with possible infeasibility and finite termination for global nonlinear programming. Journal of Global Optimization, 58(2), 2014.

21 References II
E. G. Birgin, J. M. Martínez, and L. F. Prudente. Optimality properties of an augmented Lagrangian method on infeasible problems. Computational Optimization and Applications, 60(3), 2015.
A. R. Conn, N. I. M. Gould, and P. L. Toint. LANCELOT: A Fortran Package for Large-Scale Nonlinear Optimization (Release A). Number 17 in Springer Series in Computational Mathematics. Springer-Verlag, New York, 1992.
M. L. N. Gonçalves, J. G. Melo, and L. F. Prudente. Augmented Lagrangian methods for nonlinear programming with possible infeasibility. Journal of Global Optimization, 63(2), 2015.
M. R. Hestenes. Multiplier and gradient methods. Journal of Optimization Theory and Applications, 4(5), 1969.
Jose Mario Martínez and Leandro da Fonseca Prudente. Handling infeasibility in a large-scale nonlinear optimization algorithm. Numerical Algorithms, 60(2), 2012.
M. J. D. Powell. A method for nonlinear constraints in minimization problems. In Optimization (Sympos., Univ. Keele, Keele, 1968). Academic Press, London, 1969.

22 Thank you for your attention!
