Column Generation: Cutting Stock

Column Generation: Cutting Stock. A very applied method. thst@man.dtu.dk, DTU Management, Technical University of Denmark

Outline: History. The Simplex algorithm (re-visited). Column Generation as an extension of the Simplex algorithm. A simple example!

Introduction to Column Generation. Column Generation (CG) is an old method, originally invented by Ford & Fulkerson in 1958. Where Benders decomposition adds constraints to a master problem, CG adds variables to a master problem. CG is one of the most used methods in real life, with lots of applications. (Relax: it is significantly easier than Benders algorithm.)

Given an LP problem: min z = c^T x, s.t. Ax ≥ b, x ≥ 0. This is a simple, sensible (in the SOB notation) LP problem.

The simplex slack variables. We introduce a number of slack variables, one for each constraint (simply included as extra x variables); they are also nonnegative: min z = c^T x, s.t. Ax = b, x ≥ 0.

The simplex (intermediate) problem. In each iteration of the simplex algorithm, a number of basic variables x_B may be non-zero (the columns of the B matrix), and the remaining variables are the non-basic variables x_N (the columns of the A_N matrix). At the end of each iteration of the simplex algorithm, the values of the variables x_B and x_N form a feasible solution: min z = c_B^T x_B + c_N^T x_N, s.t. B x_B + A_N x_N = b, x_B, x_N ≥ 0.
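The step to the reformulated version on the next slide is a single algebraic manipulation: since the basis matrix B is invertible, the equality constraint can be solved for x_B. In LaTeX:

```latex
B x_B + A_N x_N = b
\;\Longrightarrow\;
x_B = B^{-1} b - B^{-1} A_N x_N
```

Substituting this expression for x_B into the objective c_B^T x_B + c_N^T x_N then produces the objective with the reduced-cost coefficients.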

A reformulated version: min z = c_B^T x_B + c_N^T x_N, s.t. x_B = B^{-1} b - B^{-1} A_N x_N, x_B, x_N ≥ 0.

A reformulated version: min z = c_B B^{-1} b + (c_N - c_B B^{-1} A_N) x_N, s.t. x_B = B^{-1} b - B^{-1} A_N x_N, x_B, x_N ≥ 0.

Comments to the reformulated version. At the end of each iteration (and the beginning of the next) it holds that: the current value of the non-basic variables is x_N = 0; hence the current objective value is z = c_B B^{-1} b; and hence the values of the basic variables are x_B = B^{-1} b.

Comments to the reformulated version. The coefficients of the basic variables (x_B) in the objective function are zero! This simply means that the x_B variables do not participate in the objective function. The coefficients of the non-basic variables are the so-called reduced costs: c_N - c_B B^{-1} A_N. Because z = c_B B^{-1} b + (c_N - c_B B^{-1} A_N) x_N and we want to minimize, non-basic variables with negative reduced costs can improve the current solution, i.e. those with c_N - c_B B^{-1} A_N < 0.
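As a sanity check, the reduced costs can be computed directly with NumPy. The small LP below is a made-up illustration (the matrices and costs are assumptions, not data from the slides); starting from the slack basis, pi = c_B B^{-1} is zero, so the reduced costs are simply the original costs:

```python
import numpy as np

# Hypothetical 2-constraint LP: two basic and two non-basic columns.
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])        # basis matrix (here: the slack basis)
A_N = np.array([[2.0, 1.0],
                [1.0, 3.0]])      # columns of the non-basic variables
c_B = np.array([0.0, 0.0])        # slack variables cost nothing
c_N = np.array([-3.0, -2.0])      # original objective coefficients

pi = c_B @ np.linalg.inv(B)       # simplex multipliers pi = c_B B^{-1}
reduced = c_N - pi @ A_N          # reduced costs c_N - c_B B^{-1} A_N
print(reduced)                    # negative entries can improve the solution
```
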

The Simplex Algorithm.
x_S = b, all other x_j = 0 (start from the slack basis)
while MIN_j (c_j - c_B B^{-1} A_j) < 0
  Select the entering (new basic) variable j: c_j - c_B B^{-1} A_j < 0
  Select the leaving (new non-basic) variable by increasing x_j as much as possible
  Swap the columns between matrix B and matrix A_N
end while

Comments to the algorithm. There are three parts to the algorithm: select the next basic (entering) variable; select the next non-basic (leaving) variable; update the data structures. We will now comment on each of the three steps.

Selecting the next basic variable. To find the most negative reduced cost, all the reduced costs must be checked: c_red = MIN_j (c_j - c_B B^{-1} A_j) < 0. In the revised simplex algorithm, first the system y B = c_B is solved. (In the standard simplex algorithm, the reduced costs are given directly.) Conclusion: this part (mainly) depends on the number of variables.

Selecting the next non-basic variable. There are the same number of basic variables (to compare) as there are constraints, and we are only considering one non-basic variable. Basically we are looking at the system x_B = B^{-1} b - B^{-1} A_j x_j. The revised simplex algorithm solves the system B d = A_j, where A_j is the column of the entering variable. Conclusion: this part depends on the number of constraints.
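A minimal sketch of this step in NumPy, with an illustrative (made-up) basis B, right-hand side b and entering column A_j; the revised simplex solves B d = A_j and then performs the ratio test:

```python
import numpy as np

# Illustrative (made-up) basis, right-hand side and entering column.
B = np.array([[2.0, 0.0],
              [1.0, 1.0]])
b = np.array([6.0, 5.0])
A_j = np.array([1.0, 2.0])

x_B = np.linalg.solve(B, b)    # current basic solution: B x_B = b
d = np.linalg.solve(B, A_j)    # revised simplex solves B d = A_j

# Ratio test: increase x_j until the first basic variable is driven to zero.
ratios = np.where(d > 0, x_B / d, np.inf)
step = ratios.min()            # how far x_j can be increased
leaving = int(ratios.argmin()) # index of the basic variable that leaves
print(step, leaving)
```

Both linear systems involve only the m-by-m basis matrix, which is why this part depends on the number of constraints, not the number of variables.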

Bookkeeping. In the end, what needs to be done (in the revised simplex algorithm)? Update of the matrices B and A_N: simply swap the columns. Conclusion: this part depends on the number of constraints.

Comments to the algorithm. Selecting the new basic variable in principle requires checking all the (many) variables, which is time consuming. Selecting the new non-basic variable: there are the same number of basic variables (to compare) as there are constraints, and we are only considering one non-basic variable; basically we are looking at the system x_B = B^{-1} b - B^{-1} A_j x_j. Update of the matrices B and A_N: simply swap the columns.

LP programs with many variables! The crucial insight: the number of non-zero variables (the basic variables) is (at most) equal to the number of constraints. Hence, even though the number of possible variables (columns) may be large, we only need a small subset of these in the optimal solution. (Figure: the few basis columns versus the many non-basis columns.)

The simple column generation idea. The simple idea in column generation is not to represent all variables explicitly, but to represent them implicitly: when looking for the next basic variable, solve an optimization problem that finds the variable (column) with the most negative reduced cost.

The column generation algorithm.
x_S = b, all other x_j = 0 (start from the slack basis)
repeat
  repeat
    Select the entering (new basic) variable j: c_j - c_B B^{-1} A_j < 0
    Select the leaving (new non-basic) variable by increasing x_j as much as possible
    Swap the columns between matrix B and matrix A_N
  until MIN_j (c_j - c_B B^{-1} A_j) >= 0
  Generate a new column: minimize c_j - c_B B^{-1} A_j over all legal columns
until no more improving variables

The reduced costs: c_j - c_B B^{-1} A_j < 0. Let's look at the individual components of the reduced cost c_j - c_B B^{-1} A_j < 0: c_j is the original cost of the variable; A_j is the column in the A matrix which defines the variable; c_B B^{-1} gives, for each constraint in the LP problem, the factor multiplied onto the column. Here c_B is the cost vector of the current basic variables and B^{-1} is the inverse basis matrix.

An alternative view of c_B B^{-1}. For an optimization problem {min cx : Ax ≥ b, x ≥ 0}: outside the inner loop, the reduced costs satisfy c_N - c_B B^{-1} A_N ≥ 0 for the existing variables. Notice that this also holds for the basic variables, for which the reduced costs are all zero, i.e. c - c_B B^{-1} A ≥ 0. If we rename the term c_B B^{-1} to π, we can rewrite the expression as πA ≤ c. But this is exactly dual feasibility of our original problem! Hence we can use the dual variables from our inner loop when looking for new variables.
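This can be checked numerically on a small made-up LP (the data below is illustrative, not from the slides): at the optimal basis, π = c_B B^{-1} is dual feasible (πA ≤ c, π ≥ 0) and πb equals the primal objective value:

```python
import numpy as np

# Hypothetical LP:  min x1 + x2
# s.t.  x1 + 2 x2 >= 4,  3 x1 + x2 >= 6,  x >= 0.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([1.0, 1.0])

# At the optimum both constraints are tight, so the basis consists of
# the two structural columns: B = A, c_B = c.
B, c_B = A, c
pi = c_B @ np.linalg.inv(B)       # pi = c_B B^{-1}

# Dual feasibility of the original problem: pi A <= c and pi >= 0.
assert np.all(pi @ A <= c + 1e-9) and np.all(pi >= 0)
# Strong duality: pi b equals the primal objective c x*.
x_star = np.linalg.solve(B, b)
print(pi, pi @ b, c @ x_star)
```
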

The column generation algorithm.
x_S = b, all other x_j = 0 (start from the slack basis)
repeat
  Solve the master problem {min cx : Ax ≥ b, x ≥ 0} and obtain the duals π
  Find the new variable x_j by minimizing c_j - πA_j over all legal columns
until no more improving variables

Cutting Stock Example. The example which is always referred to regarding column generation is the cutting stock example. Assume you own a workshop where steel rods are cut into different pieces: customers arrive and demand steel rods of certain lengths: 22 cm, 45 cm, etc. You serve the customers' demands by cutting the steel rods into the right sizes. You receive the rods in lengths of e.g. 200 cm.

Cutting Stock Example. (Figure: demanded pieces A, A, A, B, B, C, C, C, and a log cut into the pieces A, A, A, B, C, C, C, B with the leftover marked LOSS.)

Cutting Stock Objective. How can you minimize your material waste (what's left of each rod after you have cut the customer pieces)?

Cutting Stock Formulation 1:

min z = Σ_k y_k
s.t. Σ_k x_ik = b_i for all i
     Σ_i w_i x_ik ≤ W y_k for all k
     x_ik ∈ N_0, y_k ∈ {0, 1}

Cutting Stock Formulation 1. What is wrong with the above formulation? It contains many symmetric solutions (k! of them), which makes the problem extremely hard for branch-and-bound algorithms, and there are many integer and binary variables. For these reasons, it is never used in practice.

New Formulation. How can we change the formulation? Instead of focusing on which steel rod a particular piece is to be cut from, look at the possible patterns used to cut a steel rod. The question then changes to how many times a particular pattern is used.

Cutting Stock Formulation 2:

min z = Σ_j x_j
s.t. Σ_j a_ij x_j ≥ b_i for all i
     x_j ∈ N_0

Cutting Stock Formulation 2. What are the x_j and a_ij? The column a_j = (a_ij) is a cutting pattern, describing how to cut one steel rod into pieces. Notice that any legal solution to formulation 1 will always consist of a number of cutting patterns. x_j is then simply the number of times that cutting pattern is used.

Cutting Stock Formulation 2. There are two different problems with the formulation: we will only solve the relaxed problem, i.e. the LP version of the MIP problem, so how do we get integer solutions? And we need to consider all the cutting patterns??? Today we will only deal with the last problem...

Cutting Patterns. We have a number of questions about these cutting patterns: Where do we get the cutting patterns from? How many are there? Roughly (i choose a) = i! / (a! (i - a)!), i.e. a lot! Here i is the number of different types (lengths) required, and a is the average number of cut sizes in each of the patterns.

Cutting Patterns. This is a very real problem: even if we had a way of generating all the legal cutting patterns, our standard simplex algorithm would need to compute the reduced costs of all those variables, and we would not even have the memory to hold the variables in the algorithm. Now what?

The improving variable. How can we find the improving variable? We know that any variable with negative reduced cost c_j - c_B B^{-1} A_j = c_j - πA_j can improve the solution. But we want not just any such variable; we want the minimizer of c_j - πA_j: the so-called Dantzig rule, which is the standard entering-variable selection rule. The Dantzig rule hence corresponds to the original cost (which might be zero), minus the dual variable vector multiplied by the column of the new variable x_j.

Pattern generation: The subproblem. For each iteration in the Simplex algorithm, we need to find the most negative column. We can do that by defining a new optimisation problem: min z = 1 - Σ_i π_i a_ij.

Pattern generation: Ignore the constants:

max z = Σ_i π_i a_ij
s.t. Σ_i l_i a_ij ≤ L
     a_ij ∈ Z_+

What about the j index?

What is the subproblem? The subproblem is the classical knapsack problem. The problem is theoretically NP-hard... but it is an easy NP-hard problem, meaning that we can solve it for relatively large sizes of N, i.e. the number of items (in this case, different lengths).

Knapsack solution methods. The problem may e.g. be solved using dynamic programming. We will ignore the problem of efficiency and just use standard MIP solvers. Notice that we are going to solve exactly the same problem each time, but with updated profit coefficients.
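Following the slide's suggestion of a standard MIP solver, the first pricing problem of the upcoming example (profits π = [1, 1, 1], piece lengths 81/70/68 cm, rod length 218 cm) can be solved with SciPy's MILP interface. This is a sketch using scipy.optimize.milp (available from SciPy 1.9); the variable names are my own:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Pricing problem: maximize pi . a  s.t.  81 a1 + 70 a2 + 68 a3 <= 218,
# a integer and nonnegative (data from the lecture's example).
pi = np.array([1.0, 1.0, 1.0])
lengths = np.array([81.0, 70.0, 68.0])

res = milp(
    c=-pi,                                           # milp minimizes, so negate
    constraints=LinearConstraint(lengths[None, :], 0, 218),
    integrality=np.ones(3),                          # all three variables integer
    bounds=Bounds(0, np.inf),
)
pattern, profit = np.round(res.x).astype(int), -res.fun
print(profit, pattern)                               # best profit is 3 pieces
```

Several patterns tie at profit 3 here (e.g. three pieces of 68 cm), so the solver may return any of them.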

The Column Generation Algorithm.
Create initial columns
repeat
  Solve the master problem, find π
  Solve the subproblem: min z_sub = 1 - Σ_i π_i a_i
  Add the new column to the master problem
until z_sub ≥ 0
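The whole loop can be sketched in Python, with scipy.optimize.linprog (HiGHS) solving the master LP and a small unbounded-knapsack dynamic program as the pricing problem; the data is the rod example from the following slides (pieces of 81/70/68 cm, demands 44/3/48, rods of 218 cm). The function and variable names are my own, and the DP stands in for the slides' MIP subproblem:

```python
import numpy as np
from scipy.optimize import linprog

def best_pattern(pi, lengths, L):
    # Pricing: maximize sum(pi[i]*a[i]) s.t. sum(lengths[i]*a[i]) <= L,
    # a[i] nonnegative integers (unbounded-knapsack dynamic programming).
    value = [0.0] * (L + 1)
    take = [-1] * (L + 1)
    for w in range(1, L + 1):
        value[w] = value[w - 1]
        for i, l in enumerate(lengths):
            if l <= w and value[w - l] + pi[i] > value[w] + 1e-12:
                value[w], take[w] = value[w - l] + pi[i], i
    a, w = [0] * len(lengths), L           # reconstruct the pattern
    while w > 0:
        if take[w] == -1:
            w -= 1
        else:
            i = take[w]
            a[i] += 1
            w -= lengths[i]
    return value[L], a

lengths, demands, L = [81, 70, 68], [44, 3, 48], 218
# Initial columns: one per piece type, cutting that piece as often as it fits.
patterns = [[L // lengths[j] if i == j else 0 for i in range(3)] for j in range(3)]

while True:
    A = np.array(patterns).T               # A[i][j] = a_ij
    res = linprog(np.ones(len(patterns)),  # master LP: min sum_j x_j
                  A_ub=-A, b_ub=-np.array(demands), method="highs")
    pi = -res.ineqlin.marginals            # duals of the covering constraints
    profit, pattern = best_pattern(pi, lengths, L)
    if profit <= 1 + 1e-9:                 # reduced cost 1 - profit >= 0: stop
        break
    patterns.append(pattern)

print(res.fun, patterns)                   # LP optimum and the columns used
```

Note that the master is solved as a covering LP (Ax ≥ b rewritten as -Ax ≤ -b), so the duals π are the negated marginals reported by HiGHS.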

Let's look at an example! We have some customers who want pieces in three lengths: 44 pieces of length 81 cm, 3 pieces of length 70 cm and 48 pieces of length 68 cm. We have steel rods of length 218 cm at unit price. How do we get the initial columns???

The initial columns. We need some initial columns, and we basically have two choices: start with fake columns which are so expensive that we know they will not be in the final solution, or, if possible, create some more competitive columns. We simply use columns where each column contains a single 1, in a different row.

Master problem:

\ Initial master problem
minimize
  x_1 + x_2 + x_3
subject to
  l1: x_1 >= 44
  l2: x_2 >= 3
  l3: x_3 >= 48
end

Sub problem, given π = [1, 1, 1]:

\ Initial sub problem
maximize
  a_1 + a_2 + a_3
subject to
  l: 81 a_1 + 70 a_2 + 68 a_3 <= 218
bounds
  a_1 <= 2
  a_2 <= 3
  a_3 <= 3
integer
  a_1 a_2 a_3
end

First sub-problem solution. The best solution is (a_1, a_2, a_3) = (0, 0, 3).

New Master Problem:

\ Second master problem
minimize
  x_1 + x_2 + x_3 + x_4
subject to
  l1: x_1 >= 44
  l2: x_2 >= 3
  l3: x_3 + 3 x_4 >= 48
end

Second master solution. The best solution has duals (π_1, π_2, π_3) = (1.0, 1.0, 0.33).

Second sub problem, given π = [1.0, 1.0, 0.33]:

\ Second sub problem
maximize
  a_1 + a_2 + 0.33 a_3
subject to
  l: 81 a_1 + 70 a_2 + 68 a_3 <= 218
bounds
  a_1 <= 2
  a_2 <= 3
  a_3 <= 3
integer
  a_1 a_2 a_3
end

Second sub-problem solution. The best solution is (a_1, a_2, a_3) = (0, 3, 0).

New Master Problem:

\ Third master problem
minimize
  x_1 + x_2 + x_3 + x_4 + x_5
subject to
  l1: x_1 >= 44
  l2: x_2 + 3 x_5 >= 3
  l3: x_3 + 3 x_4 >= 48
end

Why is this so interesting??? I have told you a number of times that this is the most applied method in OR. Why? We can solve problems with a "small" number of constraints and an exponential number of variables... This is a special type of LP problem, but... through the so-called Dantzig-Wolfe decomposition we can change any LP problem into a problem with a reduced number of constraints but an increased number of variables... By performing clever decompositions we can...

Subproblem solution. Usually (though not always) we need an efficient algorithm to solve the subproblem. Subproblems are often one of the following types: knapsack problems (like in the cutting stock problem), or shortest path problems in graphs. Hence: to use column generation efficiently, you often need to know something about complexity theory...

Getting Integer Solutions. There are several methods: rounding up (not always possible...); getting integer solutions using (meta)heuristics, which is the reason for the great interest in the set partitioning/set covering problems; or branch and price, hard and quite time consuming...