Sparse Optimization Lecture: Proximal Operator/Algorithm and Lagrange Dual
1 Sparse Optimization Lecture: Proximal Operator/Algorithm and Lagrange Dual

Instructor: Wotao Yin, July 2013; online discussions on piazza.com

Those who complete this lecture will learn:
- the proximal operator and its basic properties
- the proximal algorithm
- the proximal algorithm applied to the Lagrange dual
2 Gradient descent / forward Euler

Assume the function f is convex and differentiable, and consider min f(x).

Gradient descent iteration (with step size c):

    x^{k+1} = x^k - c \nabla f(x^k)

x^{k+1} minimizes the following local quadratic approximation of f:

    f(x^k) + \langle \nabla f(x^k), x - x^k \rangle + \frac{1}{2c} \|x - x^k\|_2^2

Compare with the forward Euler iteration, a.k.a. the explicit update:

    x(t+1) = x(t) - \Delta t \, \nabla f(x(t))
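The following is a minimal NumPy sketch of the explicit update above; the quadratic test function, step size, and iteration count are illustrative choices, not from the lecture.

```python
import numpy as np

def gradient_descent(grad_f, x0, c=0.1, iters=500):
    """Explicit (forward Euler) update: x^{k+1} = x^k - c * grad_f(x^k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - c * grad_f(x)
    return x

# Test on f(x) = (1/2)||x - a||^2, whose gradient is x - a and minimizer is a.
a = np.array([1.0, -2.0])
print(gradient_descent(lambda x: x - a, x0=np.zeros(2)))  # -> approx [1., -2.]
```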
3 Backward Euler / implicit gradient descent

Backward Euler iteration, also known as the implicit update:

    x(t+1) solves x(t+1) = x(t) - \Delta t \, \nabla f(x(t+1)).

Equivalent to:

    1. u(t+1) solves u = \nabla f(x(t) - \Delta t \, u),
    2. x(t+1) = x(t) - \Delta t \, u(t+1).

We can view it as implicit gradient descent:

    (x^{k+1}, u^{k+1}) solve x = x^k - c u, u = \nabla f(x).

Here c is the step size, but it behaves very differently from a standard step size.
The explicit (implicit) update uses the gradient at the start (end) point.
4 Implicit gradient step = proximal operation

Proximal update:

    prox_{cf}(z) := \arg\min_x f(x) + \frac{1}{2c} \|x - z\|^2.

Optimality condition:

    0 = c \nabla f(x^*) + (x^* - z).

Given input z:
- prox_{cf}(z) returns the solution x^*,
- \nabla f(prox_{cf}(z)) returns u^*,
where (x^*, u^*) solve x = z - c u, u = \nabla f(x).

Proposition. The proximal operator is equivalent to an implicit gradient (or backward Euler) step.
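As a quick numerical check of the proposition, here is a sketch with the toy choice f(x) = (lam/2)||x||^2, for which the implicit step has the closed form x = z/(1 + c*lam); the prox computed by a generic solver matches it. The data values below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

lam, c = 2.0, 0.5
z = np.array([3.0, -1.0])

# prox_{cf}(z) = argmin_x (lam/2)||x||^2 + (1/2c)||x - z||^2, solved numerically.
obj = lambda x: 0.5 * lam * x @ x + (x - z) @ (x - z) / (2 * c)
x_prox = minimize(obj, np.zeros_like(z)).x

# Backward Euler step: solve x = z - c * grad f(x) = z - c*lam*x for x.
x_implicit = z / (1 + c * lam)

print(np.allclose(x_prox, x_implicit, atol=1e-5))  # True
```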
5 Proximal operator handles sub-differentiable f

Assume f is a closed, proper, sub-differentiable convex function.
\partial f(x) denotes the subdifferential of f at x. Recall u \in \partial f(x) if

    f(x') \ge f(x) + \langle u, x' - x \rangle,  for all x' \in R^n.

\partial f is point-to-set; neither direction is unique.
prox is well-defined for sub-differentiable f; it is point-to-point: prox maps any input to a unique point.
6 Proximal operator

    prox_{cf}(z) := \arg\min_x f(x) + \frac{1}{2c} \|x - z\|^2.

Since the objective is strongly convex, the solution prox_{cf}(z) is unique.
Since f is proper, dom(prox_{cf}) = R^n.

The following are equivalent:
- prox_{cf}(z) = x^* = \arg\min_x f(x) + \frac{1}{2c} \|x - z\|^2,
- x^* solves 0 \in c \, \partial f(x) + (x - z),
- (x^*, u^*) solve x = z - c u, u \in \partial f(x).

A point x^* minimizes f if and only if x^* = prox_f(x^*).
7 Examples

- f(x) = \iota_C(x), the indicator function of a set C (its prox is the projection onto C)
- f(x) = \frac{\lambda}{2} \|x\|_2^2
- f(x) = \|x\|_1
- f(x) = \sum_i \|x_{g_i}\|_2 (group sparsity)
- f(X) = \|X\|_* (nuclear norm)
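Two of these proxes have simple closed forms worth recording: the prox of c\|.\|_1 is elementwise soft-thresholding, and the prox of the indicator \iota_C is the projection onto C. A minimal sketch follows; the box constraint set is my illustrative choice.

```python
import numpy as np

def prox_l1(z, c):
    """prox of c*||.||_1: elementwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - c, 0.0)

def prox_box_indicator(z, lo, hi):
    """prox of the indicator of C = [lo, hi]^n: projection onto the box.
    (c plays no role: the indicator is 0 on C and +inf off it.)"""
    return np.clip(z, lo, hi)

z = np.array([3.0, -0.2, 0.7])
print(prox_l1(z, c=0.5))                 # [ 2.5 -0.   0.2]
print(prox_box_indicator(z, -1.0, 1.0))  # [ 1.  -0.2  0.7]
```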
8 Examples

Given prox_f for a function f, it is easy to derive prox_g for
- g(x) = \alpha f(x) + \beta
- g(x) = f(\alpha x + b)
- g(x) = f(x) + a^T x + \beta
- g(x) = f(x) + \frac{\rho}{2} \|x - a\|^2

One such rule is worked out in the sketch below.
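As an instance of the last rule, completing the square shows prox_{cg}(z) = prox_{\tilde{c} f}((z + c\rho a)/(1 + c\rho)) with \tilde{c} = c/(1 + c\rho). Below is a sketch checking this numerically for f = |.| in one dimension; the values of rho, a, c and the brute-force grid are illustrative assumptions.

```python
import numpy as np

def soft(z, t):
    """prox of t*|.| (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rho, a, c = 0.8, 1.5, 0.5
z = np.linspace(-3.0, 3.0, 7)

# Rule: for g(x) = f(x) + (rho/2)(x - a)^2,
#   prox_{cg}(z) = prox_{c~ f}((z + c*rho*a) / (1 + c*rho)), with c~ = c / (1 + c*rho).
c_t = c / (1 + c * rho)
via_rule = soft((z + c * rho * a) / (1 + c * rho), c_t)

# Brute-force check: minimize g(x) + (1/2c)(x - z)^2 over a fine grid.
grid = np.linspace(-5.0, 5.0, 200001)
g = np.abs(grid) + 0.5 * rho * (grid - a) ** 2
brute = np.array([grid[np.argmin(g + (grid - zi) ** 2 / (2 * c))] for zi in z])

print(np.max(np.abs(via_rule - brute)))  # ~5e-5, the grid spacing
```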
9 Resolvent of \partial f

\partial f is a point-to-set mapping, and so is I + c \, \partial f.
In general, (I + c \, \partial f)^{-1} is a point-to-set mapping.
However, we claim prox_{cf} = (I + c \, \partial f)^{-1}.
Since prox_{cf}(z) is always unique, (I + c \, \partial f)^{-1} is in fact a point-to-point mapping.

(I + c \, \partial f)^{-1} is known as the resolvent of \partial f with parameter c.

By the way, \nabla f is the gradient operator, and (I - c \nabla f) is the gradient-descent operator.
10 Moreau envelope

Idea: to smooth a closed, proper, nonsmooth convex function f.

Definition:

    M_{cf}(x) = \inf_y f(y) + \frac{1}{2c} \|y - x\|^2.

- dom M_{cf} = R^n even if dom f is not; M_{cf} \in C^1 even if f is not. In fact,
  c M_{cf} = ((cf)^* + \frac{1}{2} \|\cdot\|^2)^*, and the conjugate of a strongly convex
  function is differentiable (with Lipschitz gradient).
- Relation with prox_{cf}:
      \nabla M_{cf}(x) = \frac{1}{c} (x - prox_{cf}(x)),
      prox_{cf}(x) = x - c \nabla M_{cf}(x), an explicit gradient step on M_{cf},
      prox_{f^*}(x) = \nabla M_f(x).
- Example: the Huber function is M_f where f(x) = \|x\|_1; see the sketch below.
- c is not a usual step size: as c \to \infty, (prox_{cf}(x) - x) \to (x^* - x), where x^* is a minimizer of f.
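A sketch verifying the Huber example numerically: evaluate M_{cf} through the prox, M_{cf}(x) = f(p) + (1/2c)\|p - x\|^2 with p = prox_{cf}(x), and compare against the Huber function. The grid and the value c = 1 are illustrative.

```python
import numpy as np

c = 1.0
x = np.linspace(-3.0, 3.0, 601)

# Evaluate the Moreau envelope of f = |.| via its prox (soft-thresholding):
#   M_{cf}(x) = f(p) + (1/2c)(p - x)^2, where p = prox_{cf}(x).
p = np.sign(x) * np.maximum(np.abs(x) - c, 0.0)
envelope = np.abs(p) + (p - x) ** 2 / (2 * c)

# Huber function with parameter c: quadratic near 0, linear in the tails.
huber = np.where(np.abs(x) <= c, x ** 2 / (2 * c), np.abs(x) - c / 2)

print(np.max(np.abs(envelope - huber)))  # ~0: M_{cf} of the l1 norm is Huber
```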
11 Proximal algorithm

Assume that f has a minimizer; then iterate

    x^{k+1} = prox_{c_k f}(x^k).

prox is firmly nonexpansive:

    \|prox_f(x) - prox_f(y)\|^2 \le \langle prox_f(x) - prox_f(y), x - y \rangle,  for all x, y \in R^n.

The iteration converges to a minimizer as long as c_k > 0 and \sum_{k=1}^\infty c_k = \infty.
For example, one can fix c_k \equiv c.

Step-sized (relaxed) iteration: fix c > 0 and pick \alpha_k \in (0, 2) uniformly away from 0 and 2:

    x^{k+1} = \alpha_k prox_{cf}(x^k) + (1 - \alpha_k) x^k.

The convergence takes a finite number of iterations if f is polyhedral (i.e., piecewise linear); see the sketch below.
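A sketch of the iteration on the polyhedral function f(x) = \|x - a\|_1, whose minimizer is a and whose prox is a shifted soft-threshold. The data, the step size c, and the choice alpha = 1 are illustrative.

```python
import numpy as np

a = np.array([1.0, -2.0, 0.5])   # minimizer of f(x) = ||x - a||_1

def prox_cf(z, c):
    """prox of c*||. - a||_1: soft-threshold z - a, then shift back by a."""
    w = z - a
    return a + np.sign(w) * np.maximum(np.abs(w) - c, 0.0)

x, c = np.zeros(3), 0.3
alpha = 1.0   # relaxation in (0, 2); alpha = 1 is the plain proximal iteration
for _ in range(20):
    x = alpha * prox_cf(x, c) + (1 - alpha) * x

print(np.array_equal(x, a))  # True after finitely many steps: f is polyhedral
```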
12 Proximal algorithm

Diminishing regularization:

    x^{k+1} = \arg\min_x f(x) + \frac{1}{2c} \|x - x^k\|_2^2

As x^k \to x^*, \nabla f(x^k) \to 0 and thus the pull of f(x) becomes weaker; hence \|x^{k+1} - x^k\| tends to be smaller.

Many algorithms use prox_{c_k f}(x^k), either entirely or as a part (though most of them were motivated through other means).

Although prox_{c_k f} can sometimes be difficult to compute, it simplifies
- for some sub-differentiable functions,
- for those arising in duality (our next focus).
13 Lagrange duality

Convex problem:

    min f(x)  s.t.  Ax = b.

Relax the constraints and price their violation (pay a price if violated one way; get paid if violated the other way; the payment is linear in the violation):

    L(x; y) := f(x) + y^T (Ax - b)

For later use, define the augmented Lagrangian:

    L_A(x; y, c) := f(x) + y^T (Ax - b) + \frac{c}{2} \|Ax - b\|_2^2

Minimize L for a fixed price y:

    d(y) := -\min_x L(x; y).

d(y) is always convex. The Lagrange dual problem:

    \min_y d(y)

Given a dual solution y^*, recover x^* = \arg\min_x L(x; y^*) (under which conditions?).

Question: how to compute the explicit/implicit gradients of d(y)?
14 Dual explicit gradient (ascent) algorithm

Assume d(y) is differentiable (true if f(x) is strictly convex; is this if-and-only-if?).

Gradient descent iteration (if the maximizing form of the dual is used, it is called gradient ascent):

    y^{k+1} = y^k - c \nabla d(y^k).

It turns out to be relatively easy to compute \nabla d, via an unconstrained subproblem:

    \nabla d(y) = b - A \bar{x},  where \bar{x} = \arg\min_x L(x; y).

Dual gradient iteration:
1. x^k solves \min_x L(x; y^k);
2. y^{k+1} = y^k - c (b - A x^k).

A runnable sketch follows below.
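A sketch of this iteration on the toy problem min (1/2)\|x\|^2 s.t. Ax = b (f strictly convex, so d is differentiable); here the x-subproblem has the closed form x = -A^T y. The problem data and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))        # underdetermined system: many x with Ax = b
b = rng.standard_normal(3)

# min (1/2)||x||^2 s.t. Ax = b;  L(x; y) = (1/2)||x||^2 + y^T (Ax - b).
# Step 1 in closed form: grad_x L = x + A^T y = 0  =>  x = -A^T y.
y = np.zeros(3)
c = 1.0 / np.linalg.norm(A @ A.T, 2)   # explicit step must respect the curvature of d
for _ in range(2000):
    x = -A.T @ y                       # 1. x^k = argmin_x L(x; y^k)
    y = y - c * (b - A @ x)            # 2. y^{k+1} = y^k - c (b - A x^k)

x_star = A.T @ np.linalg.solve(A @ A.T, b)   # least-norm solution, for reference
print(np.allclose(x, x_star, atol=1e-6))     # True
```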
15 Sub-gradient of d(y)

Assume d(y) is sub-differentiable (which condition on the primal can guarantee this?).

Lemma. Given a dual point y and \bar{x} = \arg\min_x L(x; y), we have b - A\bar{x} \in \partial d(y).

Proof. Recall (i) u \in \partial d(y) if d(y') \ge d(y) + \langle u, y' - y \rangle for all y', and (ii) d(y) := -\min_x L(x; y).
From (ii) and the definition of \bar{x},

    d(y) + \langle b - A\bar{x}, y' - y \rangle
      = -L(\bar{x}; y) + (b - A\bar{x})^T (y' - y)
      = -[f(\bar{x}) + y^T (A\bar{x} - b)] + (b - A\bar{x})^T (y' - y)
      = -[f(\bar{x}) + (y')^T (A\bar{x} - b)]
      = -L(\bar{x}; y') \le d(y').

From (i), b - A\bar{x} \in \partial d(y).
16 Dual explicit (sub)gradient iteration

The iteration:
1. x^k solves \min_x L(x; y^k);
2. y^{k+1} = y^k - c_k (b - A x^k).

Notes:
- (b - A x^k) \in \partial d(y^k), as shown on the last slide;
- it does not require d(y) to be differentiable;
- convergence may require a careful choice of c_k (e.g., a diminishing sequence) if d(y) is only sub-differentiable (or lacks a Lipschitz continuous gradient).
17 Dual implicit gradient

Goal: to descend using the (sub)gradient of d at the next point y^{k+1}.

Following from the Lemma, we have b - A x^{k+1} \in \partial d(y^{k+1}), where x^{k+1} = \arg\min_x L(x; y^{k+1}).

Since the implicit step is y^{k+1} = y^k - c (b - A x^{k+1}), we can derive

    x^{k+1} = \arg\min_x L(x; y^{k+1})
    \iff 0 \in \partial_x L(x^{k+1}; y^{k+1}) = \partial f(x^{k+1}) + A^T y^{k+1}
             = \partial f(x^{k+1}) + A^T (y^k - c (b - A x^{k+1})).

Therefore, while x^{k+1} is a solution to \min_x L(x; y^{k+1}), it is also a solution to

    \min_x L_A(x; y^k, c) = f(x) + (y^k)^T (Ax - b) + \frac{c}{2} \|Ax - b\|^2,

which is independent of y^{k+1}.
18 Dual implicit gradient

Proposition. Assuming y' = y - c (b - A x'), the following are equivalent:
1. x' solves \min_x L(x; y');
2. x' solves \min_x L_A(x; y, c).
19 Dual implicit gradient iteration

The iteration

    y^{k+1} = prox_{cd}(y^k)

is commonly known as the augmented Lagrangian method or the method of multipliers.

Implementation:
1. x^{k+1} solves \min_x L_A(x; y^k, c);
2. y^{k+1} = y^k - c (b - A x^{k+1}).

Proposition. The following are equivalent:
1. the augmented Lagrangian iteration;
2. the implicit gradient iteration on d(y);
3. the proximal iteration y^{k+1} = prox_{cd}(y^k).

A runnable sketch follows below.
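The same toy problem min (1/2)\|x\|^2 s.t. Ax = b solved by the method of multipliers; now the x-subproblem is a linear solve with (I + c A^T A), and a fixed c of any size works. The data are illustrative and match the explicit-gradient sketch above.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))
b = rng.standard_normal(3)

n, c = A.shape[1], 10.0          # any fixed c > 0; no diminishing step size needed
y = np.zeros(3)
for _ in range(100):
    # 1. x^{k+1} = argmin_x L_A(x; y^k, c): solve (I + c A^T A) x = A^T (c b - y).
    x = np.linalg.solve(np.eye(n) + c * A.T @ A, A.T @ (c * b - y))
    # 2. Multiplier update = implicit dual gradient step = prox_{cd}(y^k).
    y = y - c * (b - A @ x)

x_star = A.T @ np.linalg.solve(A @ A.T, b)
print(np.allclose(x, x_star))    # True: converges to the least-norm solution
```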
20 Dual explicit/implicit (sub)gradient computation

Definitions:

    L(x; y) = f(x) + y^T (Ax - b)
    L_A(x; y, c) = L(x; y) + \frac{c}{2} \|Ax - b\|_2^2

Objective: d(y) = -\min_x L(x; y).

Explicit (sub)gradient iteration, y^{k+1} = y^k - c \nabla d(y^k) (or use a subgradient in \partial d(y^k)):
1. x^{k+1} = \arg\min_x L(x; y^k);
2. y^{k+1} = y^k - c (b - A x^{k+1}).

Implicit (sub)gradient step, y^{k+1} = prox_{cd}(y^k):
1. x^{k+1} = \arg\min_x L_A(x; y^k, c);
2. y^{k+1} = y^k - c (b - A x^{k+1}).

The implicit iteration is more stable; the step size c does not need to diminish.