Gate Sizing by Lagrangian Relaxation Revisited
1 Gate Sizing by Lagrangian Relaxation Revisited. Jia Wang, Debasish Das, and Hai Zhou. Electrical Engineering and Computer Science, Northwestern University, Evanston, Illinois, United States. October 17
2 Outline
- Introduction to Gate Sizing
- Generalized Convex Sizing
- The Dual Problems
- The DualFD Algorithm
- Experiments
- Conclusions
3 Gate Sizing
- A mathematical programming formulation: timing constraints (delays, arrival times) and an objective function (clock period, total weighted area, etc.).
- Trade off performance and cost: optimize performance, or optimize cost under a performance constraint.
4 Previous Works: General Convex Optimization
- TILOS [Fishburn & Dunlop, 85], [Sapatnekar et al., 93], etc.
- Transform sizing into a convex programming problem: assume convex delays (after a variable transformation), with convexity commonly established through geometric programming.
- Apply general convex optimization techniques. Studied for decades, but the running time is high.
5–6 Previous Works: Lagrangian Relaxation
- [Chen, Chu & Wong, 99], [Tennakoon & Sechen, 02, 05].
- Special structure: the timing constraints form a system of difference inequalities, so the Lagrangian dual problem is simplified.
  - Objective function (the Lagrangian dual function): available through the Lagrangian subproblems.
  - Constraints: flow conservation on the Lagrange multipliers.
- The Lagrangian dual function is not differentiable in general, so subgradient optimization is applied.
  - It is difficult to choose a good initial solution and step sizes.
  - Pre-processing can help choose initial solutions for some delay models, but there is no universal pre-processing method for all convex delays.
7 Contribution
- Combine gate sizing with sequential optimization.
- Revisit Lagrangian relaxation: correct misunderstandings; check primal feasibility in the dual problems.
- Identify a class of problems with differentiable dual functions.
- The DualFD algorithm: use gradients and network flow.
9 Generalized Convex Sizing (GCS)
Minimize C(x)
s.t. t_i + d_{i,j}(x) − t_j ≤ 0, ∀(i, j) ∈ E,
x ∈ Ω.
- G = (V, E): a directed graph, the system structure.
- x: the system parameters, with Ω = {x : l_k ≤ x_k ≤ u_k, 1 ≤ k ≤ n}.
- t: (relative) arrival times.
- C(x), d_{i,j}(x): objective function and delays, twice differentiable and convex.
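The formulation can be made concrete on a one-gate toy instance (my own illustration, not an example from the talk): x is the gate's log-size, area C(x) = e^x, the single timing edge has delay e^{−x}, and a constant-delay back edge encodes the period T. SciPy's SLSQP solver stands in for a general convex-programming routine:

```python
# A minimal GCS instance (illustrative, not from the paper):
# Minimize C(x) = e^x  s.t.  t1 + e^{-x} - t2 <= 0,  t2 - t1 <= T,  -1 <= x <= 1.
import numpy as np
from scipy.optimize import minimize

T = 1.0  # assumed clock period for this toy instance

def objective(v):            # v = [x, t1, t2]
    return np.exp(v[0])      # C(x) = e^x, the area of a gate with log-size x

constraints = [
    # t2 - t1 - e^{-x} >= 0  (arrival-time constraint along the timing edge)
    {"type": "ineq", "fun": lambda v: v[2] - v[1] - np.exp(-v[0])},
    # t1 + T - t2 >= 0       (period constraint, modeled as a back edge)
    {"type": "ineq", "fun": lambda v: v[1] + T - v[2]},
]
bounds = [(-1.0, 1.0), (None, None), (None, None)]  # x in Omega; t free

res = minimize(objective, x0=[0.5, 0.0, 1.0], bounds=bounds,
               constraints=constraints, method="SLSQP")
print(res.x[0], res.fun)  # optimal log-size and area (x = 0, area = 1 here)
```

The arrival times t are decision variables alongside x, exactly as in the formulation above; feasibility requires e^{−x} ≤ T, so the solver settles at x = 0.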
10 The GCS Formulation
- Timing specification in sequential circuits: the timing is feasible iff there is no positive cycle.
- Prefer edge delays to vertex delays for accuracy: timing arcs within one gate/cell; rise/fall delay and slew.
- Flexibility in choosing x: logarithms of sizes (gate, wire, transistor, etc.) for Elmore and posynomial delays; clock skews and the clock period.
11–12 GCS is Convex
- The GCS formulation is convex: the objective function and the feasible set are convex.
- It is not necessary to establish convexity through geometric programming.
- Non-convex formulations can be transformed into convex ones, but properties of the convex formulations do not necessarily hold for equivalent non-convex ones.
13 Proper GCS Problems
Definition: A GCS is proper iff ∀x ∈ Ω, ∀z ≠ 0: zᵀ H_C(x) z > 0, i.e., the Hessian of C is positive definite over Ω.
- Proper problems have differentiable dual functions (shown later).
- The delays d_{i,j}(x) are irrelevant to properness.
- Optimizing total positively weighted area is proper.
14 Simultaneous Sizing and Clock Skew Optimization
- Minimize total area under a performance bound while allowing clock skew optimization.
- Handle the long-path conditions (setup conditions); assume post-processing repairs violated short-path conditions (hold conditions).
- Not proper: ∂C/∂s = 0 for a clock skew variable s, so H_C cannot be positive definite.
- Cancel s to transform the problem into a proper one.
15 Simultaneous Sizing and Clock Skew Optimization: Make a Non-Proper Problem Proper
Constraints concerning clock skew s_k:
(t_I + s_k ≤ t_{Q_k}) ∧ (t_{D_k} − s_k ≤ t_O) ∧ (s_k^− ≤ s_k ≤ s_k^+)
Cancel s_k: such an s_k exists iff [t_{D_k} − t_O, t_{Q_k} − t_I] ∩ [s_k^−, s_k^+] ≠ ∅. Equivalently:
(t_{D_k} − s_k^+ ≤ t_O) ∧ (t_I + s_k^− ≤ t_{Q_k}) ∧ (t_{D_k} − t_{Q_k} ≤ T)
17 Lagrangian Relaxation Overview
Lagrangian function:
L(x, t, f) = C(x) + Σ_{(i,j)∈E} f_{i,j} (t_i + d_{i,j}(x) − t_j)
Lagrangian subproblem:
L(f) = inf{ L(x, t, f) : x ∈ Ω, t ∈ R^V }
Lagrangian dual problem (D-GCS):
Maximize L(f) s.t. f ∈ N, where N is the set of non-negative f.
18–20 Duality Gap
- P* = inf{C(x) : x ∈ X}, D* = sup{L(f) : f ∈ N}, where X is the set of feasible x.
- A zero duality gap, P* = D*, is necessary.
- Establish P* = D* through the Strong Duality Theorem: if a strictly feasible solution exists, then a saddle point (x, t, f) exists with D* = L(f) = L(x, t, f) = C(x) = P*.
- Misunderstanding: previous works applied the Strong Duality Theorem to the original non-convex formulation, where a transformation was necessary for convexity. GCS is convex without transformation.
21–22 Duality Gap
What if there is no strictly feasible solution?
- A regularity condition still implies a zero duality gap [Rockafellar, 1971]; it is more general than the Strong Duality Theorem for establishing a zero duality gap.
- Unlike the Strong Duality Theorem, it gives no guarantee of a saddle point: it may be that ∀f ∈ N, L(f) < D*.
23–24 Simplify the Dual Problem
Flow conservation on G:
F = {f : Σ_{(i,k)∈E} f_{i,k} = Σ_{(k,j)∈E} f_{k,j}, ∀k ∈ V}.
L(f) = −∞ for f ∉ F, since for any f ∉ F and any M ∈ R there exist x ∈ Ω and t ∈ R^V with L(x, t, f) < M.
Simplify D-GCS into FD-GCS:
Maximize L(f) s.t. f ∈ F ∩ N.
- Misunderstanding: previous works obtained FD-GCS from the KKT conditions ∂L/∂t_k = 0. However, the KKT conditions are not necessary conditions here.
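The cancellation behind F can be checked numerically. In this sketch (a toy 3-cycle with delay values of my choosing, not from the paper), a multiplier vector that conserves flow makes L(x, t, f) independent of t, while a non-conserving one leaves a linear t-dependence that can be driven to −∞:

```python
# For flow-conserving f, the arrival-time terms in
# L(x,t,f) = C(x) + sum f_ij (t_i + d_ij - t_j) telescope away.
import random

edges = [(1, 2), (2, 3), (3, 1)]                  # a 3-cycle
d = {(1, 2): 1.0, (2, 3): 1.0, (3, 1): -2.0}      # fixed delays for this check
C = 1.0                                           # objective value at some fixed x

def lagrangian(t, f):
    return C + sum(f[e] * (t[e[0]] + d[e] - t[e[1]]) for e in edges)

f_cons = {(1, 2): 0.7, (2, 3): 0.7, (3, 1): 0.7}  # inflow = outflow at every vertex
f_bad  = {(1, 2): 0.7, (2, 3): 0.2, (3, 1): 0.7}  # violates conservation at vertex 2

random.seed(0)
ts = [{v: random.uniform(-10, 10) for v in (1, 2, 3)} for _ in range(5)]
vals_cons = [lagrangian(t, f_cons) for t in ts]
vals_bad = [lagrangian(t, f_bad) for t in ts]
print(vals_cons)  # identical for every t: the t terms cancel
print(vals_bad)   # varies with t, so the infimum over t is -infinity
```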
25 A Trivial Example That Is Not Trivial
Will the extreme cases happen for sizing? Yes:
- Optimal solutions do NOT satisfy the KKT conditions.
- There is no saddle point: ∀f ∈ F ∩ N, L(f) < D*.
26 A Trivial Example That Is Not Trivial
Optimize a single inverter with size e^x and a fixed driver/load:
Minimize e^x
s.t. t_1 + e^x ≤ t_2, t_2 + e^{−x} ≤ t_3, t_3 ≤ t_1 + 2, −ln 2 ≤ x ≤ ln 2.
Since e^x + e^{−x} ≥ 2, there is a single feasible solution, x = 0: optimal but not strictly feasible.
The KKT conditions cannot be satisfied by x = 0:
0 = ∂L/∂x = e^x + f_{1,2} e^x − f_{2,3} e^{−x} = 1 + f_{1,2} − f_{2,3},
0 = ∂L/∂t_2 = f_{2,3} − f_{1,2},
which are contradictory.
27 A Trivial Example That Is Not Trivial
Since f ∈ F ∩ N, assume β = f_{1,2} = f_{2,3} = f_{3,1} ≥ 0. Compute L(f):
L(f) = q(β) = (1 + β)/2 if 0 ≤ β < 1/3, and 2β(√(1/β + 1) − 1) if β ≥ 1/3.
q(β) increases from 1/2 to 1 as β increases from 0 to +∞. Therefore, ∀f ∈ F ∩ N, L(f) < 1 = D*.
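The closed form q(β) can be cross-checked numerically (assuming the inverter example as reconstructed above, with delays e^x and e^{−x} and a cycle bound of 2): the infimum over x ∈ [−ln 2, ln 2] matches q(β) and approaches, but never reaches, D* = 1:

```python
# With f_{1,2} = f_{2,3} = f_{3,1} = beta, the t terms cancel and
# L(f) = inf over x in [-ln2, ln2] of e^x + beta*(e^x + e^{-x} - 2).
import math
from scipy.optimize import minimize_scalar

def L_dual(beta):
    g = lambda x: math.exp(x) + beta * (math.exp(x) + math.exp(-x) - 2.0)
    r = minimize_scalar(g, bounds=(-math.log(2), math.log(2)), method="bounded")
    return r.fun

def q(beta):                       # the closed form from the slide
    if beta < 1 / 3:
        return (1 + beta) / 2
    return 2 * beta * (math.sqrt(1 / beta + 1) - 1)

betas = [0.0, 0.1, 1 / 3, 1.0, 10.0, 1000.0]
vals = [L_dual(b) for b in betas]
# numeric infimum agrees with q(beta); sup over beta is 1 but is never attained
print([(b, v, q(b)) for b, v in zip(betas, vals)])
```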
28 Further Simplification of the Dual Problem
Simplify L(f) for f ∈ F:
P_f(x) = C(x) + Σ_{(i,j)∈E} f_{i,j} d_{i,j}(x),
Q(f) = inf{P_f(x) : x ∈ Ω}.
Then L(f) = Q(f) for all f ∈ F. SD-GCS:
Maximize Q(f) s.t. f ∈ F ∩ N.
- SD-GCS is different from FD-GCS, but D-GCS, FD-GCS, and SD-GCS are equivalent.
29 Infeasibility for GCS
- GCS can be infeasible, e.g., when the clock period is too small.
- Since C(x) is continuous and Ω is compact, there exists U such that C(x) ≤ U for all x ∈ Ω.
- Assuming a zero duality gap, GCS is feasible iff Q(f) ≤ U for all f ∈ F ∩ N.
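The infeasibility test can be illustrated on a toy instance of my own (one gate with delay e^{−x}, x ∈ [−1, 1], and a target period T = 0.2 below the smallest achievable delay e^{−1}): pushing the cycle multiplier β up drives Q(f) above the bound U, certifying infeasibility:

```python
# Certify infeasibility by exhibiting f in F∩N with Q(f) > U,
# where U bounds C over the compact Omega.
import math
from scipy.optimize import minimize_scalar

T = 0.2                 # target period, too small: the minimum delay is e^{-1}
U = math.e              # U = max of C(x) = e^x over x in [-1, 1]

def Q(beta):            # f = beta on the timing edge and the period back edge (a 2-cycle)
    g = lambda x: math.exp(x) + beta * (math.exp(-x) - T)
    return minimize_scalar(g, bounds=(-1, 1), method="bounded").fun

print(Q(0.0), Q(100.0), U)   # Q(100) > U proves no feasible sizing exists
```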
31 Differentiable Dual Objective Q(f)
Sufficient condition (from textbook): the subproblem minimizer is unique, i.e., ∃x_f ∈ Ω such that ∀x ∈ Ω with x ≠ x_f, Q(f) = P_f(x_f) < P_f(x).
Assume the condition is NOT satisfied, i.e., there are x′ ≠ x″ in Ω with Q(f) = P_f(x′) = P_f(x″). Then for 0 ≤ γ ≤ 1,
Q(f) = P_f((1 − γ)x′ + γx″)
⟹ (x′ − x″)ᵀ H_{P_f}((x′ + x″)/2) (x′ − x″) = 0
⟹ (x′ − x″)ᵀ H_C((x′ + x″)/2) (x′ − x″) = 0
⟹ the GCS is NOT proper.
For proper GCS problems, Q(f) is therefore differentiable, and ∂Q(f)/∂f_{i,j} = d_{i,j}(x_f).
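The gradient formula can be sanity-checked by finite differences on a small proper instance (again a toy of my own: area e^x, delay e^{−x}, period T on a 2-cycle, so f has a single degree of freedom β):

```python
# Check dQ/df_ij = d_ij(x_f): compare a central finite difference of Q
# against the delays evaluated at the subproblem minimizer x_f.
import math
from scipy.optimize import minimize_scalar

T = 1.0
def subproblem(beta):   # P_f(x) = e^x + beta*(e^{-x} - T), x in [-1, 1]
    g = lambda x: math.exp(x) + beta * (math.exp(-x) - T)
    r = minimize_scalar(g, bounds=(-1, 1), method="bounded")
    return r.x, r.fun   # x_f and Q(f)

beta = 0.5
x_f, Qb = subproblem(beta)
grad_formula = math.exp(-x_f) - T   # sum of d_ij(x_f) around the 2-cycle
eps = 1e-5
grad_fd = (subproblem(beta + eps)[1] - subproblem(beta - eps)[1]) / (2 * eps)
print(grad_formula, grad_fd)        # the two agree
```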
32–33 Improving Feasible Direction
Method of feasible directions: find Δf, an ascent direction that is also feasible (w.r.t. SD-GCS):
∃λ > 0: (Q(f + λΔf) > Q(f)) ∧ (f + λΔf ∈ F ∩ N).
How to find one? Since d(x_f) is the gradient of Q(f):
- Δf · d(x_f) > 0 ⟹ ∃λ > 0, Q(f + λΔf) > Q(f).
- Flow conservation: Δf ∈ F ⟹ f + λΔf ∈ F, and 0 ≤ λ ≤ min_{Δf_{i,j} < 0} (−f_{i,j}/Δf_{i,j}) ⟹ f + λΔf ∈ F ∩ N.
34 Improving Feasible Direction
The direction finding (DF) problem:
Maximize Δf · d(x_f)
s.t. f + Δf ∈ F ∩ N, −u ≤ Δf_{i,j} ≤ u, ∀(i, j) ∈ E.
- The Δf are the decision variables; u is a positive constant.
- Δf · d(x_f) is the first-order approximation of Q(f + Δf) − Q(f).
- DF is a min-cost network flow problem.
- For an optimal Δf, Δf · d(x_f) ≥ 0, and Δf · d(x_f) = 0 implies that x_f is optimal for GCS.
35 The DualFD Algorithm
- Iterative algorithm: Q(f) increases in every iteration, and since Q(f) is concave, a local maximum is the global maximum. (With subgradient optimization, Q(f) may decrease.)
- In each iteration: solve DF for Δf; perform a line search to find Q(f + λΔf) > Q(f); check GCS infeasibility when computing Q.
- Claim optimality when the changes become marginal.
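A condensed sketch of the DualFD loop on a 2-cycle toy instance (my own construction, not the paper's C++ implementation): with only one flow-conserving direction, the DF step reduces to picking its sign from the gradient, followed by a bounded line search that keeps f nonnegative:

```python
# DualFD sketch: area e^x, delay e^{-x}, period T = 1 on a 2-cycle,
# so the multipliers form a single conserving ray f = beta >= 0.
import math
from scipy.optimize import minimize_scalar

T = 1.0
def Q(beta):            # dual objective along the conserving ray
    g = lambda x: math.exp(x) + beta * (math.exp(-x) - T)
    return minimize_scalar(g, bounds=(-1, 1), method="bounded").fun

def x_f(beta):          # subproblem minimizer, used for the gradient
    g = lambda x: math.exp(x) + beta * (math.exp(-x) - T)
    return minimize_scalar(g, bounds=(-1, 1), method="bounded").x

beta = 0.05
for _ in range(30):
    grad = math.exp(-x_f(beta)) - T          # dQ/dbeta via dQ/df_ij = d_ij(x_f)
    delta = 1.0 if grad > 0 else -1.0        # DF on a 2-cycle: only a sign to choose
    lam_max = 2.0 if delta > 0 else beta     # keep beta + lam*delta >= 0
    ls = minimize_scalar(lambda lam: -Q(beta + lam * delta),
                         bounds=(0.0, lam_max), method="bounded")
    if Q(beta + ls.x * delta) <= Q(beta) + 1e-12:
        break                                # marginal change: claim optimality
    beta += ls.x * delta
print(beta, Q(beta))    # converges to the saddle point beta = 1, Q = 1
```

Because Q increases monotonically along the iterates, no step-size schedule is needed, in contrast to the subgradient method.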
37 Experimental Setup
- Minimize total area under a performance bound with the Elmore delay model.
- ISCAS89 sequential circuits, 29 in total. Largest: vertices, edges, variables.
- The DualFD algorithm is implemented in C++, using the CS2 min-cost network flow solver [Goldberg] for DF.
- Compared against subgradient optimization: SubGrad.
38 Experimental Results
- 15 largest benchmarks: achieved clock period T vs. target clock period T_0 (plot). s838 is NOT feasible.
39 Experimental Results
- 15 largest benchmarks: per-benchmark area, dual objective, and running time t(s) for DualFD vs. SubGrad (table).
40 DualFD vs. SubGrad
- 29 benchmarks in total. SubGrad never dominates DualFD.
41 How Far Away Are We from the Optimal Solutions?
- Collect feasible solutions to estimate the duality gap, on 15 out of the 29 benchmarks.
42 Convergence of s (plot).
44 Conclusions
- Formulate GCS problems to handle sequential optimization.
- Correct misunderstandings in applying Lagrangian relaxation.
- Show how to detect infeasibility in GCS.
- Prove that the gradient exists for the proposed proper GCS problems.
- Design the DualFD algorithm to solve proper GCS problems.
45 Q & A
46 Thank you!
47 L(f) vs. Q(f)
- L(f) = Q(f) for f ∈ F ∩ N; L(f) = −∞ for f ∈ N \ F; Q(f) > −∞ for f ∈ N \ F.
- L(f) is concave but not differentiable over f ∈ N. Q(f) is concave and differentiable over f ∈ N.
48 Proper GCS Problems
Posynomial objective functions of the e^{x_k}:
C(x) = Σ_{i=1}^{l} c_i e^{b_i · x}, c_i > 0, 1 ≤ i ≤ l.
Let B = (b_1, b_2, ..., b_l). Then rank(B) = n (full row rank) ⟹ proper.
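The rank criterion can be illustrated numerically (the example exponent matrices are my own): the Hessian of a posynomial C is B diag(w) Bᵀ with positive weights w, so it is positive definite exactly when B has full row rank:

```python
# For C(x) = sum_i c_i * exp(b_i . x), the Hessian is
# sum_i c_i exp(b_i . x) b_i b_i^T = B diag(w) B^T with weights w > 0.
import numpy as np

def hessian(B, c, x):
    w = c * np.exp(B.T @ x)   # positive weights c_i * exp(b_i . x)
    return (B * w) @ B.T      # B diag(w) B^T

rng = np.random.default_rng(0)
x = rng.normal(size=2)
c = np.array([1.0, 2.0, 0.5])

B_full = np.array([[1.0, 0.0, 1.0],   # columns b_i span R^2: rank(B) = n = 2
                   [0.0, 1.0, 1.0]])
B_def = np.array([[1.0, 2.0, -1.0],   # all columns on one line: rank(B) = 1
                  [1.0, 2.0, -1.0]])

eig_full = np.linalg.eigvalsh(hessian(B_full, c, x)).min()
eig_def = np.linalg.eigvalsh(hessian(B_def, c, x)).min()
print(eig_full, eig_def)  # positive vs. (numerically) zero smallest eigenvalue
```

Total positively weighted area is the special case where the b_i are the standard basis vectors, so B is the identity and the problem is proper.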
More information11 Linear Programming
11 Linear Programming 11.1 Definition and Importance The final topic in this course is Linear Programming. We say that a problem is an instance of linear programming when it can be effectively expressed
More informationAn O(log n/ log log n)-approximation Algorithm for the Asymmetric Traveling Salesman Problem
An O(log n/ log log n)-approximation Algorithm for the Asymmetric Traveling Salesman Problem and more recent developments CATS @ UMD April 22, 2016 The Asymmetric Traveling Salesman Problem (ATSP) Problem
More informationCME307/MS&E311 Theory Summary
CME307/MS&E311 Theory Summary Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/~yyye http://www.stanford.edu/class/msande311/
More informationDavid G. Luenberger Yinyu Ye. Linear and Nonlinear. Programming. Fourth Edition. ö Springer
David G. Luenberger Yinyu Ye Linear and Nonlinear Programming Fourth Edition ö Springer Contents 1 Introduction 1 1.1 Optimization 1 1.2 Types of Problems 2 1.3 Size of Problems 5 1.4 Iterative Algorithms
More informationCS599: Convex and Combinatorial Optimization Fall 2013 Lecture 14: Combinatorial Problems as Linear Programs I. Instructor: Shaddin Dughmi
CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 14: Combinatorial Problems as Linear Programs I Instructor: Shaddin Dughmi Announcements Posted solutions to HW1 Today: Combinatorial problems
More informationLinear Programming. Linear programming provides methods for allocating limited resources among competing activities in an optimal way.
University of Southern California Viterbi School of Engineering Daniel J. Epstein Department of Industrial and Systems Engineering ISE 330: Introduction to Operations Research - Deterministic Models Fall
More informationAn Introduction to Dual Ascent Heuristics
An Introduction to Dual Ascent Heuristics Introduction A substantial proportion of Combinatorial Optimisation Problems (COPs) are essentially pure or mixed integer linear programming. COPs are in general
More informationSymmetrical Buffered Clock-Tree Synthesis with Supply-Voltage Alignment
Symmetrical Buffered Clock-Tree Synthesis with Supply-Voltage Alignment Xin-Wei Shih, Tzu-Hsuan Hsu, Hsu-Chieh Lee, Yao-Wen Chang, Kai-Yuan Chao 2013.01.24 1 Outline 2 Clock Network Synthesis Clock network
More informationA Generic Separation Algorithm and Its Application to the Vehicle Routing Problem
A Generic Separation Algorithm and Its Application to the Vehicle Routing Problem Presented by: Ted Ralphs Joint work with: Leo Kopman Les Trotter Bill Pulleyblank 1 Outline of Talk Introduction Description
More informationCME307/MS&E311 Optimization Theory Summary
CME307/MS&E311 Optimization Theory Summary Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/~yyye http://www.stanford.edu/class/msande311/
More informationAsynchronous Circuit Placement by Lagrangian Relaxation
Asynchronous Circuit Placement by Lagrangian Relaxation Gang Wu, Tao Lin, Hsin-Ho Huang, Chris Chu and Peter A. Beerel Department of Electrical and Computer Engineering, Iowa State University, IA Ming
More informationApproximation Algorithms
Approximation Algorithms Group Members: 1. Geng Xue (A0095628R) 2. Cai Jingli (A0095623B) 3. Xing Zhe (A0095644W) 4. Zhu Xiaolu (A0109657W) 5. Wang Zixiao (A0095670X) 6. Jiao Qing (A0095637R) 7. Zhang
More informationMarginal and Sensitivity Analyses
8.1 Marginal and Sensitivity Analyses Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor, Winter 1997. Consider LP in standard form: min z = cx, subject to Ax = b, x 0 where A m n and rank m. Theorem:
More informationAdvanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs
Advanced Operations Research Techniques IE316 Quiz 1 Review Dr. Ted Ralphs IE316 Quiz 1 Review 1 Reading for The Quiz Material covered in detail in lecture. 1.1, 1.4, 2.1-2.6, 3.1-3.3, 3.5 Background material
More informationSome Advanced Topics in Linear Programming
Some Advanced Topics in Linear Programming Matthew J. Saltzman July 2, 995 Connections with Algebra and Geometry In this section, we will explore how some of the ideas in linear programming, duality theory,
More informationImproving Dual Bound for Stochastic MILP Models Using Sensitivity Analysis
Improving Dual Bound for Stochastic MILP Models Using Sensitivity Analysis Vijay Gupta Ignacio E. Grossmann Department of Chemical Engineering Carnegie Mellon University, Pittsburgh Bora Tarhan ExxonMobil
More informationOptimal Network Flow Allocation. EE 384Y Almir Mutapcic and Primoz Skraba 27/05/2004
Optimal Network Flow Allocation EE 384Y Almir Mutapcic and Primoz Skraba 27/05/2004 Problem Statement Optimal network flow allocation Find flow allocation which minimizes certain performance criterion
More informationSelected Topics in Column Generation
Selected Topics in Column Generation February 1, 2007 Choosing a solver for the Master Solve in the dual space(kelly s method) by applying a cutting plane algorithm In the bundle method(lemarechal), a
More informationHeuristic Optimization Today: Linear Programming. Tobias Friedrich Chair for Algorithm Engineering Hasso Plattner Institute, Potsdam
Heuristic Optimization Today: Linear Programming Chair for Algorithm Engineering Hasso Plattner Institute, Potsdam Linear programming Let s first define it formally: A linear program is an optimization
More informationLP-Modelling. dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven. January 30, 2008
LP-Modelling dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven January 30, 2008 1 Linear and Integer Programming After a brief check with the backgrounds of the participants it seems that the following
More information