Parallel and Distributed Graph Cuts by Dual Decomposition

Presented by Varun Gulshan, 6 July 2010.

What is Dual Decomposition?
It is a technique for optimization, usually used for approximate optimization.
Dual: there is a dual involved somewhere.
Decomposition: some kind of decomposition is involved.

Optimization Problem
Consider an optimization problem of the form:
$\min_{y} \; f_1(y) + f_2(y)$
Usually $f(y) = f_1(y) + f_2(y)$ is hard to optimize, but each of $f_1(y)$ and $f_2(y)$ can be easily optimized separately.
Dual decomposition idea: decompose the original problem into optimizable subproblems and combine their solutions in a principled way.
Variable duplication and duality are used to achieve this.
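As a running toy example (my own illustration, not taken from the slides), one can keep in mind two scalar quadratics that are trivial to minimize on their own but are coupled through the shared variable:
$f_1(y) = (y - a)^2, \quad f_2(y) = (y - b)^2, \quad \min_y \; f_1(y) + f_2(y) \;\text{ has minimizer } y^\star = (a + b)/2.$
Everything below applies to this toy verbatim; in the graph-cut setting of the title, $f_1$ and $f_2$ would instead be submodular energies solvable by max-flow.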

Optimization Problem
Original problem:
$\min_{y} \; f_1(y) + f_2(y)$
Duplicate variables:
$\min_{y_1, y_2} \; f_1(y_1) + f_2(y_2) \quad \text{s.t.} \quad y_1 - y_2 = 0$
Write the Lagrangian dual:
$g(\lambda) = \min_{y_1, y_2} \; f_1(y_1) + f_2(y_2) + \lambda^T (y_1 - y_2)$
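The claim below that $g(\lambda)$ lower-bounds the original problem follows from a one-line weak-duality argument (added here for completeness): for any feasible $y$ (i.e. taking $y_1 = y_2 = y$),
$g(\lambda) = \min_{y_1, y_2} \; f_1(y_1) + f_2(y_2) + \lambda^T (y_1 - y_2) \;\le\; f_1(y) + f_2(y) + \lambda^T (y - y) = f_1(y) + f_2(y),$
so $g(\lambda) \le \min_y f_1(y) + f_2(y)$ for every $\lambda$.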

Optimization Problem
The Lagrangian dual:
$g(\lambda) = \min_{y_1, y_2} \; f_1(y_1) + f_2(y_2) + \lambda^T (y_1 - y_2)$
Decompose the Lagrangian dual:
$g(\lambda) = \min_{y_1} \left[ f_1(y_1) + \lambda^T y_1 \right] + \min_{y_2} \left[ f_2(y_2) - \lambda^T y_2 \right] = g_1(\lambda) + g_2(\lambda)$
where:
$g_1(\lambda) = \min_{y_1} \; f_1(y_1) + \lambda^T y_1$
$g_2(\lambda) = \min_{y_2} \; f_2(y_2) - \lambda^T y_2$

Maximizing Dual
For every $\lambda$, $g(\lambda)$ is a lower bound on the optimal value of the primal. So maximize the lower bound:
$\max_{\lambda} \; g(\lambda) = \max_{\lambda} \; g_1(\lambda) + g_2(\lambda)$
The Lagrangian dual $g(\lambda)$ is always a concave function of $\lambda$ (it is a pointwise minimum of functions affine in $\lambda$), so $-g(\lambda)$ is convex and the dual can be maximized easily.
Subgradient ascent (equivalently, subgradient descent on $-g$) is used to optimize with respect to $\lambda$.
A subgradient of $g_1(\lambda) = \min_{y_1} f_1(y_1) + \lambda^T y_1$ is given by $\bar{y}_1 = \arg\min_{y_1} f_1(y_1) + \lambda^T y_1$, so computing a subgradient essentially amounts to solving this minimization.
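A short check of why $\bar{y}_1$ is a valid subgradient (strictly, a supergradient of the concave $g_1$), added for completeness: for any other $\lambda'$,
$g_1(\lambda') = \min_{y_1} \; f_1(y_1) + \lambda'^T y_1 \;\le\; f_1(\bar{y}_1) + \lambda'^T \bar{y}_1 = g_1(\lambda) + \bar{y}_1^T (\lambda' - \lambda),$
which is exactly the (super)gradient inequality for $g_1$ at $\lambda$ with slope $\bar{y}_1$.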

Maximizing Dual: Algorithm
$\max_{\lambda} \; g_1(\lambda) + g_2(\lambda) = \max_{\lambda} \left[ \min_{y_1} f_1(y_1) + \lambda^T y_1 \;+\; \min_{y_2} f_2(y_2) - \lambda^T y_2 \right]$
1. Initialize $\lambda$ (can be arbitrary).
2. Compute subgradients of $g_1(\lambda)$ and $g_2(\lambda)$. Computing a subgradient involves solving $\min_{y_1} f_1(y_1) + \lambda^T y_1$ (and the analogous problem for $y_2$), which is doable because $f_1(y)$ and $f_2(y)$ are easy to minimize.
3. Use the subgradient to update the value of $\lambda$ with the usual update rule: $\lambda^{t+1} = \lambda^t + \alpha_t (\bar{y}_1 - \bar{y}_2)$.
4. Go to step 2; repeat until convergence of $g(\lambda)$.
5. With the optimal $\lambda$ in hand, recover the primal variables $y_1$ and $y_2$ (see the sketch after this list).
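A minimal runnable sketch of this loop on the toy quadratic example introduced earlier. The helper names, the diminishing step size $\alpha_t = 1/t$, and the default values of $a$ and $b$ are illustrative assumptions, not from the slides; in the graph-cut setting each argmin call would be a max-flow solve.

```python
# Dual decomposition with subgradient ascent on the toy problem
# f1(y) = (y - a)^2, f2(y) = (y - b)^2 (illustrative assumption).

def argmin_f1(lam, a):
    """argmin_{y1} (y1 - a)^2 + lam * y1  (closed form: y1 = a - lam/2)."""
    return a - lam / 2.0

def argmin_f2(lam, b):
    """argmin_{y2} (y2 - b)^2 - lam * y2  (closed form: y2 = b + lam/2)."""
    return b + lam / 2.0

def primal_value(y, a, b):
    """Original objective f1(y) + f2(y) at a single point y."""
    return (y - a) ** 2 + (y - b) ** 2

def dual_decomposition(a=1.0, b=5.0, num_iters=100):
    lam = 0.0                              # step 1: initialize lambda (arbitrary)
    for t in range(1, num_iters + 1):
        y1 = argmin_f1(lam, a)             # step 2: solve the two easy subproblems;
        y2 = argmin_f2(lam, b)             #         (y1 - y2) is a subgradient of g
        lam += (1.0 / t) * (y1 - y2)       # step 3: subgradient ascent on g(lambda)
    # step 5: recover (possibly disagreeing) primal candidates at the final lambda
    y1, y2 = argmin_f1(lam, a), argmin_f2(lam, b)
    # a common heuristic: keep whichever copy has the lower primal value
    y_best = min((y1, y2), key=lambda y: primal_value(y, a, b))
    return lam, y1, y2, y_best

if __name__ == "__main__":
    lam, y1, y2, y_best = dual_decomposition()
    print(f"lambda ~ {lam:.3f}, y1 ~ {y1:.3f}, y2 ~ {y2:.3f}, chosen y ~ {y_best:.3f}")
```

On this toy problem the iterates reach $\lambda = a - b$ and the two copies agree at $y_1 = y_2 = (a + b)/2$, the true minimizer; with discrete subproblems the copies typically keep disagreeing, which is the infeasibility issue discussed on the later slides.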

Recovering primal variables
By solving the dual we get the globally optimal dual variable $\lambda^* = \arg\max_{\lambda} g(\lambda)$. But we need the primal variables $y_1, y_2$, as those are what we were originally interested in optimizing.
Note that while optimizing the dual, we computed subgradients, which involved minimization over the primal variables. We use those solutions as our primal variables, i.e.:
$y_1 = \arg\min_{y_1} \; f_1(y_1) + \lambda^T y_1$
$y_2 = \arg\min_{y_2} \; f_2(y_2) - \lambda^T y_2$

Infeasible primal solution
The obtained primal solutions $y_1^*$ and $y_2^*$ will in general not satisfy $y_1^* = y_2^*$; equality holds only when the duality gap is 0.
One way is to choose whichever of $y_1^*$ or $y_2^*$ has the lower primal value.
Currently I haven't figured out how people find a good feasible primal solution (Victor, help!).

Examples
N. Komodakis, N. Paragios and G. Tziritas. MRF Optimization via Dual Decomposition: Message-Passing Revisited. ICCV 2007. They use dual decomposition to find the MAP solution of an MRF with loops.
$f(y) = \theta_{ac}(y_a, y_c) + \theta_{ad}(y_a, y_d) + \theta_{ab}(y_a, y_b) + \theta_{bc}(y_b, y_c) + \theta_{bd}(y_b, y_d), \quad y_a, y_b, y_c, y_d \in \{1, 2, \ldots, L\}$

Examples
$f(y) = f_1(y) + f_2(y)$
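The slide presumably shows the loopy graph being split into subgraphs (the figure is not reproduced in this transcription). In the Komodakis et al. scheme the energy of a loopy MRF is written as a sum of tree-structured energies, each of which can be minimized exactly by dynamic programming on its tree. For the small example above, one such split (my own choice of trees, for illustration) is
$f_1(y) = \theta_{ab}(y_a, y_b) + \theta_{ac}(y_a, y_c) + \theta_{ad}(y_a, y_d), \qquad f_2(y) = \theta_{bc}(y_b, y_c) + \theta_{bd}(y_b, y_d),$
a star rooted at $a$ and a path $c - b - d$: both are trees, so each subproblem is exactly solvable even though the combined graph has loops.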

Other applications
Applications to optimization of higher-order potentials, i.e. energies involving more than just pairwise terms.
Parallelization of optimization (next).
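The parallelization idea hinted at here is the subject of the talk's title; since the corresponding slides are not part of this transcription, the following is only a hedged summary of it. The graph-cut energy over an image is split into two overlapping halves, the shared variables are duplicated, and each half is solved by an independent max-flow computation while the $\lambda$ updates push the duplicated boundary variables towards agreement:
$f(y) = E_{\mathrm{left}}(y_L, y_B) + E_{\mathrm{right}}(y_B, y_R),$
where $y_B$ are the variables shared by the two halves (terms involving only $y_B$ are split between the two parts so nothing is counted twice). Each part is itself a submodular pairwise energy, so each subproblem is an ordinary graph cut that can run on its own processor or machine.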

The missing parts
It's easy to extend the discussion to decomposition into more than 2 parts.
There can be many choices for decomposing; which is a good choice?
Theoretical properties: what guarantees can we get on the primal solution?
