
Lagrangean Methods: bounding through penalty adjustment. thst@man.dtu.dk, DTU Management, Technical University of Denmark

Outline: Brief introduction. How to perform Lagrangean relaxation. Subgradient techniques. Example: Set cover. Decomposition techniques and Branch and Bound. Dual Ascent.

Introduction. Lagrangean relaxation is a technique which has been known for many years: Lagrange relaxation was invented by (surprise!) Lagrange in 1797! The technique has been very useful in conjunction with Branch and Bound methods. Since 1970 it has been the bounding decomposition technique of choice... until the beginning of the 1990s (branch-and-price).

The Beasley Note. This lecture is based on the (excellent!) Beasley note. The note has a practical approach to the problem: emphasis on examples, only a little theory, and good practical advice. All in all: this note is a good place to start if you later need to apply Lagrangean relaxation.

Given an integer linear program

min   cx
s.t.  Ax ≥ b
      Bx ≥ d
      x ∈ {0,1}^n

how can we calculate lower bounds? We can use heuristics to generate upper bounds, but getting (good) lower bounds is often much harder! The classical approach is to create a relaxation.

Requirements for a relaxation. The program min{ f(x) : x ∈ Γ′ ⊆ R^n } is a relaxation of min{ g(x) : x ∈ Γ ⊆ R^n } if: Γ ⊆ Γ′, and f(x) ≤ g(x) for all x ∈ Γ.

Relaxation graphically. [Figure: plot of the objective values of g(x) and f(x) over the solution space; the domain of f contains the domain of g, and f lies on or below g wherever g is defined.]

Example of relaxation: LP. When we perform the LP relaxation: f(x) = g(x) and Γ (= Z^n) ⊆ Γ′ (= R^n). The classical branch-and-bound algorithm uses the LP relaxation. It has the nice feature of being general, i.e. applicable to all MIP models.

A very simple example I. g(x):

min   5x
s.t.  x ≥ 3
      x ≤ 10
      x ∈ R_+

A very simple example II. f(x) (relaxation of g(x)):

min   5x + λ(3 − x)
s.t.  x ≤ 10
      x ∈ R_+
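It is worth working this example through by hand to see how the multiplier controls the bound (this derivation is not in the original slides, but the arithmetic is immediate). For fixed λ ≥ 0 the relaxed objective is

(5 − λ)x + 3λ

If λ ≤ 5 the coefficient of x is non-negative, so the minimum over x ∈ [0, 10] is attained at x = 0 with value 3λ; if λ > 5 the minimum is at x = 10 with value 50 − 7λ. The best lower bound is therefore obtained at λ = 5, giving 15, which exactly matches the true optimum of g: x = 3 with value 5 · 3 = 15.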

Relaxation by removal of constraints. Given:

min   cx
s.t.  Ax ≥ b
      Bx ≥ d
      x ∈ {0,1}^n

What if, instead of relaxing the domain constraints, we relax another set of constraints? (This also works for general integer variables, i.e. x ∈ Z^n.)

Lagrangean Relaxation.

min   cx + λ(b − Ax)
s.t.  Bx ≥ d
      x ∈ {0,1}^n

where λ ∈ R^m_+. For fixed λ this is called the Lagrangean Lower Bound Program (LLBP); maximizing the resulting bound over λ gives the Lagrangean dual program.

Lagrangean Relaxation. First: IS IT A RELAXATION? Well, the feasible domain has been enlarged: Γ(Ax ≥ b, Bx ≥ d) ⊆ Γ′(Bx ≥ d). Regarding the objective: inside the original domain f(x) = g(x) + λ(b − Ax), and since we know λ(b − Ax) ≤ 0 there, f(x) ≤ g(x). Outside, there is no guarantee, but that is not a problem!

Lagrangean Relaxation. What can this be used for? Primary usage: bounding! Because it is a relaxation, its optimal value bounds the optimal value of the real problem. Also: Lagrangean heuristics, i.e. generating a good solution based on a solution to the relaxed problem; and problem reduction, i.e. reducing the original problem based on the solution to the relaxed problem.

Two Problems. Facing a problem, we need to decide: which constraints to relax (the strategic choice), and how to find the Lagrangean multipliers (the tactical choice).

Which constraints to relax. This depends on two things. Computational effort: the number of Lagrangean multipliers and the hardness of the remaining subproblem. Integrality of the relaxed problem: if the relaxed problem has the integrality property, we can do no better than the straightforward LP relaxation! The integrality point will be dealt with theoretically next time, and we will see an example here.

Multiplier adjustment. In Beasley two different types of methods are given: subgradient optimisation and multiplier adjustment. Of these, subgradient optimisation is the method of choice: it is a general method which nearly always works! Hence we will only consider this method here. Since the Beasley note, more efficient (but much more complicated) adjustment methods have been suggested.

Lagrangean Relaxation. We had:

min   cx + λ(b − Ax)
s.t.  Bx ≥ d
      x ∈ {0,1}^n
      λ ∈ R^m_+

Problem reformulation.

min   Σ_j c_j x_j + Σ_i λ_i (b_i − Σ_j a_ij x_j)
s.t.  Bx ≥ d
      x_j ∈ {0,1}
      λ_i ∈ R_+

Remember we want to obtain the best possible bound, hence we want to choose λ to maximize the lower bound.
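Collecting the terms per variable makes the structure of the subproblem explicit (this regrouping is implicit in the slides and is spelled out here for clarity):

min   Σ_j (c_j − Σ_i λ_i a_ij) x_j + Σ_i λ_i b_i

For fixed λ the second sum is a constant, so the subproblem separates over the x_j: each variable can be set independently according to the sign of its coefficient. This is exactly the rewriting applied to the set cover example below.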

The subgradient. We define the subgradient G_i = b_i − Σ_j a_ij x_j. If the subgradient G_i is positive (the relaxed constraint is violated), increase λ_i; if G_i is negative, decrease λ_i.

Subgradient Optimisation. The subgradient optimisation algorithm is now:

Initialise π ∈ ]0, 2]
Initialise the λ values
repeat
    Solve LLBP given the λ values, obtaining Z_LB and x_j
    Calculate the subgradients G_i = b_i − Σ_j a_ij x_j
    Calculate the step size T = π (Z_UB − Z_LB) / Σ_i G_i²
    Update λ_i = max(0, λ_i + T · G_i)
until we get bored...
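As a concrete illustration, here is a minimal Python sketch of this loop for the set cover instance used on the following slides. The LLBP is solved by picking the variables with negative coefficient, as explained below; the instance data, the upper bound Z_UB = 5, and the step-size shrinking rule are assumptions for illustration, not part of the original slides.

# Subgradient optimisation for the Lagrangean relaxation of a small
# set cover instance: min cx  s.t.  Ax >= 1,  x in {0,1}^4.
c = [2.0, 3.0, 4.0, 5.0]
A = [[1, 0, 1, 0],   # x_1 + x_3 >= 1
     [1, 0, 0, 1],   # x_1 + x_4 >= 1
     [0, 1, 1, 1]]   # x_2 + x_3 + x_4 >= 1
b = [1.0, 1.0, 1.0]
Z_UB = 5.0           # cost of the feasible cover x_1 = x_2 = 1
lam = [1.0, 1.0, 1.0]
pi = 2.0

best_lb = float("-inf")
for it in range(100):
    # Solve LLBP: the coefficient of x_j is c_j - sum_i lam_i * a_ij;
    # set x_j = 1 exactly when its coefficient is negative.
    coef = [c[j] - sum(lam[i] * A[i][j] for i in range(3)) for j in range(4)]
    x = [1 if cj < 0 else 0 for cj in coef]
    Z_LB = sum(coef[j] * x[j] for j in range(4)) + sum(lam)
    best_lb = max(best_lb, Z_LB)
    # Subgradients and step size.
    G = [b[i] - sum(A[i][j] * x[j] for j in range(4)) for i in range(3)]
    denom = sum(g * g for g in G)
    if denom == 0:           # relaxed solution satisfies Ax >= b exactly
        break
    T = pi * (Z_UB - Z_LB) / denom
    lam = [max(0.0, lam[i] + T * G[i]) for i in range(3)]
    pi *= 0.95               # a common heuristic: slowly shrink the step

print("best lower bound:", best_lb)

Running this, the bound rises quickly and then oscillates around its limit, mirroring both the GAMS implementation and the plot shown a few slides below.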

Example: Set cover.

min   2x_1 + 3x_2 + 4x_3 + 5x_4
s.t.  x_1 + x_3 ≥ 1
      x_1 + x_4 ≥ 1
      x_2 + x_3 + x_4 ≥ 1
      x_1, x_2, x_3, x_4 ∈ {0,1}

Relaxed Set cover.

min   2x_1 + 3x_2 + 4x_3 + 5x_4 + λ_1(1 − x_1 − x_3) + λ_2(1 − x_1 − x_4) + λ_3(1 − x_2 − x_3 − x_4)
s.t.  x_1, x_2, x_3, x_4 ∈ {0,1}
      λ ≥ 0

How can we solve this problem to optimality???

Optimization Algorithm. The answer is so simple that we are reluctant to call it an optimization algorithm: choose all x's with negative coefficients! What does this tell us about the strength of the relaxation?

Rewritten: Relaxed Set cover.

min   C_1 x_1 + C_2 x_2 + C_3 x_3 + C_4 x_4 + λ_1 + λ_2 + λ_3
s.t.  x_1, x_2, x_3, x_4 ∈ {0,1}
      λ ≥ 0

where
C_1 = 2 − λ_1 − λ_2
C_2 = 3 − λ_3
C_3 = 4 − λ_1 − λ_3
C_4 = 5 − λ_2 − λ_3
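As a worked check (the numbers are not in the original slides), take λ = (1, 1, 1):

C_1 = 2 − 2 = 0,   C_2 = 3 − 1 = 2,   C_3 = 4 − 2 = 2,   C_4 = 5 − 2 = 3

No coefficient is negative, so the LLBP sets every x_j = 0 and the bound is Z_LB = λ_1 + λ_2 + λ_3 = 3, a valid lower bound on the integer optimum of 5 (achieved by x_1 = x_2 = 1).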

GAMS for Lagrangean Relaxation of Set cover:

WHILE( counter < max_it,
  CC(j) = C(j) - SUM(i, A(i,j)*lambda(i));
  x.l(j) = 0 + 1$(CC(j) < 0);
  Z_LB = SUM(j, CC(j)*x.l(j)) + SUM(i, lambda(i));
  G(i) = 1 - SUM(j, A(i,j)*x.l(j));
  T = pi * (Z_UB - Z_LB) / SUM(i, G(i)*G(i));
  lambda(i) = max(0, lambda(i) + T*G(i));
  counter = counter + 1;
  lambda_sum = SUM(i, ABS(lambda(i)));
  put counter, Z_LB, G('1'), lambda('1'), lambda_sum /;
);

Lower bound. [Figure: gnuplot of the lower bound Z_LB (from "lag.txt") against iteration number, 0 to 500, y-axis from −4 to 4; the bound climbs quickly and then levels off.]

Comments. Comments to the algorithm: this is actually quite interesting: the algorithm is very simple, but a good lower bound is found quickly! This relied a lot on the very simple LLBP optimization algorithm. Usually the LLBP requires much more work... but according to Beasley, the subgradient algorithm very often works...

So what's the use? This is all very nice, but how can we solve our problem? We may be lucky that the lower bound is also a feasible and optimal solution (like integer solutions to LP relaxations). We may reduce the problem, performing Lagrangean problem reduction (next week). We may generate heuristic solutions based on the LLBP (next week). We may use the LLBP in lower bound calculations for a Branch and Bound algorithm.

In a Branch and Bound Method. Why has Lagrangean relaxation become so important? Because it is useful in Branch and Bound methods. [Figure: a binary branch-and-bound tree; the root S branches on x_1 = 0/1 into S_0 and S_1, these branch on x_2 into S_00 ... S_11, which branch on x_3 into S_000 ... S_111.]

Branching Influence on Lagrangean Bounding. Each branch corresponds to a simple choice: branch up or branch down. This corresponds to choosing the value of one of our variables x_i. Hence: if we want to include Lagrangean bounding in a branch and bound algorithm, we need to be able to solve subproblems with these fixings (a sketch follows below)...
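For the set cover LLBP above, handling such fixings is trivial. The following Python sketch is one plausible interface (the function name and the fixed-variable dictionary are assumptions for illustration):

def solve_llbp_with_fixings(c, A, lam, fixed):
    # fixed: dict mapping a variable index j to its forced value 0 or 1
    # (the branching decisions taken so far in the branch-and-bound tree).
    n, m = len(c), len(lam)
    coef = [c[j] - sum(lam[i] * A[i][j] for i in range(m)) for j in range(n)]
    # Free variables follow the sign rule; fixed variables keep their value.
    x = [fixed.get(j, 1 if coef[j] < 0 else 0) for j in range(n)]
    z_lb = sum(coef[j] * x[j] for j in range(n)) + sum(lam)
    return z_lb, x

Fixing variables only restricts the subproblem further, so the returned value is still a valid lower bound for the corresponding subtree.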

Branching Influence on Lagrangean Bounding II. Given this lower bound on some subtree of the branch-and-bound tree, we can (perhaps) perform bounding. Important: the bound is valid for any choice of multipliers, so we can stop the subgradient iterations at any time (not the case in Branch-and-Price).

Dual Ascent. Another technique considered in the Beasley note is Dual Ascent. The idea is very simple. Given a (hard) MIP:

optimal (min) value of MIP ≥ optimal value of LP relaxation = optimal value of DUAL LP ≥ value of any feasible solution to DUAL LP

So any feasible dual solution provides a lower bound.

Example in Beasley: Set cover LP relaxation:

min   Σ_j c_j x_j
s.t.  Σ_j a_ij x_j ≥ 1   for all i
      x_j ≥ 0

Graphically. [Figure: upper bounds from feasible MIP solutions above, lower bounds from dual feasible solutions below; between the optimal MIP value and the optimal LP value (= optimal dual value) lies the GAP.]

Dual Set cover:

max   Σ_i u_i
s.t.  Σ_i u_i a_ij ≤ c_j   for all j
      u_i ≥ 0   for all i
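Dual ascent greedily raises the dual variables one at a time while keeping dual feasibility. The slides do not give the algorithm itself, so the following Python sketch is only one plausible instantiation for the set cover dual above (the function name and the index-order processing are assumptions):

def dual_ascent_setcover(c, A):
    # Raise each u_i as far as dual feasibility allows:
    # sum_i u_i * a_ij <= c_j must keep holding for every column j.
    m, n = len(A), len(c)
    u = [0.0] * m
    slack = [float(cj) for cj in c]   # slack_j = c_j - sum_i u_i * a_ij
    for i in range(m):
        # Largest increase of u_i that keeps all columns covering row i feasible.
        delta = min((slack[j] for j in range(n) if A[i][j] == 1), default=0.0)
        u[i] += delta
        for j in range(n):
            if A[i][j] == 1:
                slack[j] -= delta
    return sum(u), u   # lower bound and the dual solution

On the instance above, raising u in index order yields u = (2, 0, 2) and the bound 4, short of the optimal dual value 5 at u = (1, 1, 3): the quality of the bound depends on the order and the ascent heuristic, which leads to the comments on the next slide.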

Comments on Dual Ascent. Dual ascent is a simple, neat idea... but Beasley is not too impressed: dual ascent critically depends on the efficiency of the ascent heuristic and on the size of the duality GAP.