Two-phase matrix splitting methods for asymmetric and symmetric LCP

Two-phase matrix splitting methods for asymmetric and symmetric LCP
Daniel P. Robinson, Department of Applied Mathematics and Statistics, Johns Hopkins University
Joint work with Feng, Nocedal, and Pang
Complementarity and Its Extensions, National University of Singapore, December 19, 2012

Outline
1. Motivation
2. Two-phase matrix splitting method for BQP
3. Two-phase matrix splitting method for LCP
4. Summary

BQP (symmetric LCP)
BQP: given M ∈ R^{n×n} and q ∈ R^n, solve
    minimize_x f(x) = (1/2) xᵀMx + qᵀx  subject to  x ≥ 0
LCP: given M ∈ R^{n×n} and q ∈ R^n, find x such that
    x ≥ 0,  Mx + q ≥ 0,  xᵀ(Mx + q) = 0
- symmetric positive definite M: x is a minimizer of BQP if and only if x is a solution to LCP
- symmetric M: x is a first-order solution to BQP if and only if x is a solution to LCP (the first-order conditions of BQP are x ≥ 0, ∇f(x) = Mx + q ≥ 0, xᵀ∇f(x) = 0, which is exactly the LCP)
- general M: no convenient relationship between BQP and LCP

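For concreteness, the defining conditions can be checked directly at a candidate point. A minimal Python sketch (the tolerance tol is an assumption, not from the talk):

```python
import numpy as np

def is_lcp_solution(x, M, q, tol=1e-10):
    """Check the three defining LCP conditions at a candidate x:
    x >= 0, Mx + q >= 0, and x'(Mx + q) = 0, each up to a tolerance."""
    w = M @ x + q
    return bool(np.all(x >= -tol) and np.all(w >= -tol) and abs(x @ w) <= tol)
```
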
Bound-constrained quadratic program (BQP)
The problem: minimize_x f(x) := (1/2) xᵀMx + qᵀx  subject to  x ≥ 0
Applications: subproblems of nonlinear programming solvers (e.g. Lancelot), contact problems without friction, sparse optimization (minimize f(x) + ‖x‖₁)
Previous work:
- Moré and Toraldo (1989): two-phase, gradient based, convex/nonconvex
- Kočvara and Zowe (1994): two-phase, matrix splitting based, strictly convex
- Dostál and Schöberl (2005): linear CG, projections, adaptive precision control

Linear complementarity problem (LCP)
The problem: find x satisfying x ≥ 0, Mx + q ≥ 0, xᵀ(Mx + q) = 0
Applications: contact problems without friction, options pricing
Previous work:
- Feng, Linetsky, Morales, Nocedal (2010): two-phase, matrix splitting based, no convergence proof

BQP
Given M ∈ R^{n×n} and q ∈ R^n, solve
    minimize_x f(x) = (1/2) xᵀMx + qᵀx  subject to  x ≥ 0
Basic approach
1. Predict the active variables at a solution (cheap)
   - Projected SOR on the system Mx = −q
   - More generally, use a splitting M = B + C with B ≻ 0
   - Notation: y = FPI(x, p, B, C) means that y is the result of performing p steps of the projected splitting iteration (e.g. projected SOR) with initial guess x and splitting M = B + C
2. Subspace phase
   - Accelerate convergence and pick up additional activities
   - For A := {i : x_i = 0} and F := [1:n] \ A, solve M_FF x_F = −q_F

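As a concrete illustration of the prediction phase, here is a minimal sketch of one choice of FPI, projected SOR, which corresponds to the splitting M = B + C with B = D/ω + L (D the diagonal and L the strict lower triangle of M); it is not the authors' exact implementation.

```python
import numpy as np

def fpi_projected_sor(x0, p, M, q, omega=1.0):
    """p sweeps of projected SOR for  min 0.5 x'Mx + q'x  s.t. x >= 0
    (equivalently LCP(q, M)): each component is relaxed toward its
    unconstrained update and then projected onto x_i >= 0."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(p):
        for i in range(x.size):
            r = M[i] @ x + q[i]                          # i-th residual using the latest values
            x[i] = max(0.0, x[i] - omega * r / M[i, i])  # relax, then project
    return x
```
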
Algorithm for BQP (recall: f(x) = (1/2) xᵀMx + qᵀx)
1. Compute a single projected matrix splitting iteration (for Mx = −q): z = FPI(x^k, 1, B, C)
2. Calculate the Cauchy point x^{k,C} along d^{k,C} = z − x^k
3. Additional matrix splitting iterations → x^{k,F}
4. Projected search in the direction defined by step 3 → x^{k,PF}
5. Subspace step x^{k,S}, computed as a solution of: minimize_x f(x) subject to x_A = 0
6. Projected search in the direction defined by the subspace step → x^{k+1}
[Figure: one iteration, x^k → Cauchy point x^{k,C} → splitting iterates x^{k,F} = x^{k,PF} → subspace step x^{k,S} → x^{k+1}, with z = FPI(x^k, 1, B, C) defining d^{k,C}]
Other work: Moré and Toraldo (more basic step 1 and modified step 2); Kočvara and Zowe (essentially use steps 3 and 5)

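Step 5 fixes the current active set A = {i : x_i = 0} and minimizes f over the free variables, which amounts to solving M_FF x_F = −q_F. A hedged sketch using a dense direct solve (the talk instead uses a few CG iterations):

```python
import numpy as np

def subspace_step(x, M, q, tol=1e-12):
    """Minimize f over the free variables F = {i : x_i > 0} while holding
    x_A = 0, i.e. solve the reduced system M_FF x_F = -q_F."""
    free = x > tol
    xs = np.zeros_like(x)
    if free.any():
        idx = np.ix_(free, free)
        xs[free] = np.linalg.solve(M[idx], -q[free])
    return xs
```
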
Lemma (Cauchy step is a descent direction). If M = B + C is a splitting of the symmetric matrix M ∈ R^{n×n} such that B ≻ 0, then
    ⟨d^{k,C}, Mx^k + q⟩ ≤ −⟨d^{k,C}, B d^{k,C}⟩ ≤ 0,
where d^{k,C} is the Cauchy step. Moreover, if B is either symmetric or positive definite, and ⟨d^{k,C}, Mx^k + q⟩ = 0, then x^k solves the LCP.

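The lemma is what makes step 2 well defined: d^{k,C} = FPI(x^k, 1, B, C) − x^k is a descent direction, so a projected backtracking search along it yields a point of sufficient decrease. A hedged sketch of such a search (the talk does not spell out the exact acceptance rule):

```python
import numpy as np

def cauchy_point(x, d, M, q, c1=1e-4, shrink=0.5, t_min=1e-12):
    """Projected backtracking search along d = FPI(x, 1, B, C) - x, accepting
    the first projected trial point that satisfies an Armijo-type
    sufficient-decrease condition."""
    f = lambda z: 0.5 * z @ (M @ z) + q @ z
    g = M @ x + q                                 # gradient of f at x
    t = 1.0
    while t >= t_min:
        xc = np.maximum(x + t * d, 0.0)           # trial point projected onto x >= 0
        if f(xc) <= f(x) + c1 * (g @ (xc - x)):   # sufficient decrease along the arc
            return xc
        t *= shrink
    return x                                      # fallback: no acceptable step found
```
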
Convergence result for BQP
If M = B + C is a splitting of the symmetric matrix M such that B ≻ 0, then every limit point of the iterates generated by our algorithm is a first-order solution to BQP.
- The single matrix splitting iteration in step 1 supplies a sufficient descent direction used to compute the Cauchy point in step 2
- Steps 1 and 2 are needed to prove convergence
- Steps 3-6 generally improve performance
- Limit points are guaranteed if f has bounded level sets on {x ≥ 0}

Summary of our BQP algorithm
Our method
- is a two-phase method based on matrix splitting iterations
- is provably convergent on both convex and nonconvex BQPs
... and the numerical results?

Randomly generated strictly convex problems: n = 10,000
[Table: Projected Gradient vs. our method vs. Dostál and Schöberl; columns report the condition number (Cond) and, per solver, iterations (iter), splitting iterations (nsplit), subspace steps (nss), and matrix-vector products (Ax). Numerical entries not recoverable from this transcription.]

Current work
- Dostál and Schöberl becomes superior for large condition numbers; it uses adaptive precision control during the subspace phase
- Our heuristic was to perform a fixed number of CG iterations (5) in the subspace phase and then dramatically increase it once the active set settles down
  - fine if the problems are well-conditioned
  - fine if a good preconditioner for CG is available
- We are currently combining our matrix splitting iterations with an adaptive, recursive subspace phase
- Goal: be superior on convex problems for all condition numbers!

LCP
Given a square matrix M ∈ R^{n×n} and a vector q ∈ R^n, find x such that
    x ≥ 0,  Mx + q ≥ 0,  xᵀ(Mx + q) = 0
Basic approach
1. Predict the active variables at a solution (cheap)
   - Assume the splitting M = B + C is such that the fixed-point iterations are contractions, i.e., ‖z₂ − z₁‖ ≤ ρ ‖z₁ − z₀‖ for some ρ ∈ (0, 1)
2. Subspace phase
   - Accelerate convergence and pick up additional activities
Convergence is guaranteed if the subspace step is not used (Cottle, Pang, Stone)

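Since the LCP has no objective function to monitor, progress is measured with the natural residual ‖min(x, Mx + q)‖, which vanishes exactly at a solution. A minimal sketch:

```python
import numpy as np

def natural_residual(x, M, q):
    """Euclidean norm of the componentwise min(x, Mx + q); it is zero
    exactly at solutions of the LCP."""
    return np.linalg.norm(np.minimum(x, M @ x + q))
```
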
LCP algorithm
1. Compute n_F ≥ 2 matrix splitting iterations → x^{k,F,n_F} = FPI(x^k, n_F, B, C)
2. Compute the subspace step x^{k,S} as before
3. Compute 2 additional fixed-point iterations from x^{k,S} = x^{k,S,0} → x^{k,S,1}, x^{k,S,2}
4. Update x^{k+1} as follows:
   - If the contraction is maintained, then x^{k+1} ← x^{k,S,2}
   - If ‖min(x, Mx + q)‖₂ is sufficiently small, then x^{k+1} ← x^{k,S,2}
   - Otherwise, x^{k+1} ← x^{k,F,n_F}
[Figure: x^k = x^{k,F,0} → x^{k,F,n_F} = FPI(x^k, n_F, B, C), subspace step x^{k,S} = x^{k,S,0} → x^{k,S,1} → x^{k,S,2}, with x^{k+1} chosen by the update rule above]

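The update rule in step 4 can be written compactly as below. This is a hedged sketch in which rho is the contraction constant from the slides and eps is an assumed residual tolerance not specified in the talk.

```python
import numpy as np

def lcp_update(x_s0, x_s1, x_s2, x_f, M, q, rho, eps):
    """Step 4 (sketch): x_s0, x_s1, x_s2 are the subspace point and the two
    extra fixed-point iterates; x_f is the safeguard iterate x^{k,F,n_F} from
    step 1.  Accept the subspace branch if the fixed-point map still contracts
    there or the natural residual is already small; otherwise fall back."""
    contracts = np.linalg.norm(x_s2 - x_s1) <= rho * np.linalg.norm(x_s1 - x_s0)
    small_res = np.linalg.norm(np.minimum(x_s2, M @ x_s2 + q)) <= eps
    return x_s2 if (contracts or small_res) else x_f
```
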
Convergence result for LCP
Let M = B + C be a splitting of the matrix M such that the resulting matrix splitting iterations are contractions. Then either x^K is a solution to the LCP for some integer K ≥ 0 and the algorithm terminates, or
    lim inf_{k→∞} ‖min(x^k, Mx^k + q)‖ = 0,
and, if the iterates are bounded, there exists a limit point of the iterates that is a solution to the LCP.
- contraction is ensured if M is diagonally dominant
- more generally, contraction holds if M is an H-matrix with positive diagonals
- limit points are guaranteed if ‖min(x, Mx + q)‖ has bounded level sets on {x ≥ 0}
- global convergence is based on the contraction property of the matrix splitting iteration in step 1
- subspace steps accelerate convergence

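The first sufficient condition quoted above is easy to test. The sketch below checks strict row diagonal dominance with a positive diagonal; it does not cover the more general H-matrix condition.

```python
import numpy as np

def diag_dominant_positive(M):
    """Strict row diagonal dominance with M_ii > 0: a sufficient condition,
    quoted on the slide, for the splitting iterations to be contractions."""
    d = np.diag(M)
    off = np.sum(np.abs(M), axis=1) - np.abs(d)
    return bool(np.all(d > 0) and np.all(d > off))
```
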
Summary of our LCP algorithm
Our method
- is a two-phase method based on matrix splitting iterations
- is provably convergent provided the matrix splitting iteration is convergent
... and the numerical results?

American options pricing (Black-Scholes-Merton model)
[Table: BSM parameters (σ, T, x_l, x_u) and, for both Algorithm FLMN and our algorithm, iterations (iter), splitting iterations (nsplit), subspace steps (nss), and the computed option value (atm, e.g. $24.44). Numerical entries not recoverable from this transcription.]
Identical performance. This is good!

Current work
- Developing adaptive subspace phase ideas for LCP that are analogous to those introduced by Dostál and Schöberl for the BQP case
- Can we formulate a two-phase method with subspace acceleration whose convergence guarantees hold under weaker assumptions (no contraction assumption)?

Summary
- Matrix splitting iterations may be used (in different ways) to efficiently solve BQPs and LCPs
- Our algorithm for BQP is a two-phase method based on matrix splitting iterations and is provably convergent on both convex and nonconvex problems
- Matrix splitting iterations for BQP are generally superior to simple gradient directions for identifying an optimal active set
- Our algorithm for LCP is a two-phase method based on matrix splitting iterations and is provably convergent (provided the matrix splitting iteration is convergent)
- The more sophisticated matrix splitting iterations, e.g. Gauss-Seidel, can be substantially more expensive than gradient directions
- Generally, the subspace phase greatly reduces the number of iterations required, but comes with additional cost
- Combining our ideas with those of Dostál and Schöberl seems promising for both BQP and LCP

My "can we" list
- Can we extend the ideas presented here to minimizing general quadratic programs?
- Can we weaken the assumptions needed to solve LCPs by considering semismooth Newton methods?
- Can we improve the subspace phase?
- Can we develop a rapidly adapting active-set method for quadratic programming based on the work by Hintermüller, Ito, and Kunisch?
- Can we develop conditions/subproblems that allow very early termination of sequential quadratic programming methods?
- Can we solve nonsmooth optimization problems by combining sampling with bundle methods?
- Can we contribute to the new area of differential variational inequalities?
- Can we steer augmented Lagrangian methods?
- Can we mitigate degeneracy for general NLPs?

References
- Moré and Toraldo, Algorithms for bound constrained quadratic programming problems
- Kočvara and Zowe, An iterative two-step algorithm for linear complementarity problems
- Feng, Linetsky, Morales, and Nocedal, On the solution of complementarity problems arising in American options pricing
