Integer Programming Chapter 9


University of Chicago Booth School of Business
Kipp Martin
October 30, 2017

Outline:
Branch and Bound Theory
Branch and Bound Linear Programming
Node Selection Strategies
Variable Selection Strategies
Problem Formulation Quality
Solver Options
Epsilon Optimality
Preprocessing

Branch and Bound Theory In American football, on fourth and long, what do you do? We do the same thing in optimization.

Branch and Bound Theory We punt and solve an easier problem. We solve a relaxation of the original problem. We may solve many (many, many, many) relaxed problems.

Branch and Bound Key Concepts:
a problem relaxation
basic branch and bound
an upper bound
a lower bound
fathoming a node

Reading: Chapter 9 of the text.

Branch and Bound Theory Problem (PR) min{ f(x) : x ∈ Γ ⊆ R^n } is a relaxation of problem (P) min{ g(x) : x ∈ Γ̂ ⊆ R^n } if and only if Γ̂ ⊆ Γ and f(x) ≤ g(x) for all x ∈ Γ̂.

Branch and Bound Theory It follows that if (PR) is a relaxation of (P), then the optimal solution value of (PR) is less than or equal to the optimal solution value of (P). If (PR) is a relaxation of (P), then (P) is a restriction of (PR).

Relaxation Example Let u be an arbitrary vector of multipliers for the constraints Ax = b. Then the original linear program

LP(x):   min c⊤x
         s.t. Ax = b
              x ≥ 0

is equivalent to

LP(x, u):   min c⊤x + u⊤(b - Ax)
            s.t. Ax = b
                 x ≥ 0

Relaxation Example Let's rewrite LP(x, u) slightly as

LP(x, u):   min (c⊤ - u⊤A)x + u⊤b
            s.t. Ax = b
                 x ≥ 0

This is going to seem a bit weird, but just go with me on this. Define a problem relaxation of LP(x, u) by

LPR(x, u):   min (c⊤ - u⊤A)x + u⊤b
             s.t. x ≥ 0

I formed problem LPR(x, u) from problem LP(x, u) by deleting the Ax = b constraints, hence the term relaxation.
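The effect of deleting Ax = b can be made concrete: since every x ≥ 0 is now allowed, LPR(x, u) has value u⊤b when all reduced costs c - u⊤A are nonnegative, and is unbounded below otherwise. A small numeric sketch (the function name and the tiny example instance are mine, not from the text):

```python
import numpy as np

def lpr_value(c, A, b, u):
    # Value of LPR(x, u): min (c - u^T A) x + u^T b over x >= 0.
    # If any reduced cost is negative, that x_j can grow without limit,
    # so the relaxation is unbounded below.
    reduced = c - u @ A
    if np.all(reduced >= -1e-12):
        return float(u @ b)          # optimum attained at x = 0
    return float("-inf")

# Tiny illustration: min x1 + x2 s.t. x1 + x2 = 1, x >= 0.
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
bound_good = lpr_value(c, A, b, np.array([1.0]))   # reduced costs 0, bound 1.0
bound_bad = lpr_value(c, A, b, np.array([2.0]))    # negative reduced costs, -inf
```

For any u this is a lower bound on the value of LP(x, u); choosing u to maximize the bound is the Lagrangian dual.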

Relaxation Example Variation on a Theme: Here is another simple relaxation. Take the integer programming problem:

min c⊤x                      (1)
s.t. Ax = b                  (2)
     x ≥ 0                   (3)
     x_j ∈ {0, 1}, j ∈ B     (4)
     x_j ∈ Z, j ∈ I          (5)

Relaxation Example and replace it with the linear programming relaxation:

min c⊤x                  (6)
s.t. Ax = b              (7)
     x ≥ 0               (8)
     x_j ≤ 1, j ∈ B      (9)

For now we will work with the linear programming relaxation. We will come back to LPR(x, u) later.

Branch and Bound Theory The solution technique for (MIP) used most often in practice is branch-and-bound. Branch-and-bound is a philosophy for problem solution, not really a specific algorithm. There are three major factors affecting the efficiency of this method.

Branch and Bound Theory
1. Selection of a problem relaxation: Want a relaxation that is tight! Want a relaxation that is easy to solve! Want to have our cake and eat it too!
2. Problem branching/separation: from a current candidate problem create new candidate problems. The new candidate problems are restrictions of the parent candidate problem.
3. Problem selection: select a problem from the candidate list. It is important to make the selection in such a fashion that the gap between the upper and lower bounds closes rapidly and that feasible solutions are found.

Branch and Bound Linear Programming Step 1: (Initialization) If there is a known feasible solution x̄ to (MIP), set z_UB = c⊤x̄; if there is no known feasible solution, set z_UB = ∞. The feasible solution x̄ which gives the smallest value of z_UB is known as the incumbent, and z_UB is an upper bound on the optimal solution value of (MIP). Solve the linear programming relaxation of (MIP). If the relaxation is infeasible, then (MIP) is infeasible. If the relaxation solution has x_j integer for all j ∈ I, stop with an optimal solution to (MIP). Otherwise, add the problem to the list of candidate problems and go to Step 2.

Step 2: (Problem Selection) Select a candidate problem for separation from the list of candidate problems and go to Step 3. A common rule for candidate problem selection is to select the candidate problem with the smallest (assuming a minimization) linear programming relaxation value.

Branch and Bound Linear Programming Step 3: (Branching/Separation) The candidate problem (CP) under consideration has at least one fractional integer variable.

3.a Select a fractional integer variable x_k = n_k + f_k for branching purposes. Here n_k is a nonnegative integer and f_k is in the open interval (0, 1).

3.b Create two new mixed integer programs from (CP). Create one new candidate problem by adding the constraint x_k ≥ n_k + 1 to the constraint set of (CP). Create the second new candidate problem from (CP) by adding the constraint x_k ≤ n_k to the constraint set of (CP). The two new candidate problems are restrictions of the parent candidate problem since they are created by adding a constraint to the parent.

Branch and Bound Linear Programming 3.c Solve the linear programming relaxation of the two new candidate problems.

3.c.i If the linear programming relaxation is infeasible, drop the newly created problem from further consideration.

3.c.ii If the linear programming relaxation solution has x_j integer for all j ∈ I, update the incumbent value z_UB and make this solution the incumbent if necessary. Drop this problem from further consideration.

3.c.iii If the linear programming relaxation has at least one fractional variable x_j for j ∈ I and the objective function value is strictly less than z_UB, add this problem to the list of candidate problems. Go to Step 4.

Branch and Bound Linear Programming Step 4: (Optimality Test) Delete from the list of candidate problems any problem whose relaxation objective value is not strictly less than z_UB. If the list is empty and z_UB = ∞, problem (MIP) is infeasible; if the list is empty and z_UB < ∞, then z_UB is the optimal solution value of (MIP). Otherwise, go to Step 2.

In Steps 3 and 4, a candidate problem is deleted when its linear programming relaxation is infeasible, or integral, or has a linear programming relaxation value that is not strictly less than z_UB. This process of deleting candidate problems is known as fathoming.

Branch and Bound Linear Programming

min -x1 - x2
s.t. -x1 + 2x2 ≤ 5
     9x1 + 4x2 ≤ 18
     4x1 - 2x2 ≤ 4
     x1, x2 ≥ 0
     x1, x2 ∈ Z

Solution: x1 = 0.727273, x2 = 2.863636. Objective function value is -3.590909. Set z_UB = ∞. Select x1 as the branching variable.

Branch and Bound Linear Programming Adding the constraint x1 ≤ 0 gives

Linear Program 2:   min -x1 - x2
                    s.t. -x1 + 2x2 ≤ 5
                         9x1 + 4x2 ≤ 18
                         4x1 - 2x2 ≤ 4
                         x1 ≤ 0
                         x1, x2 ≥ 0

The optimal solution is x1 = 0, x2 = 2.5 with optimal objective function value -2.5.

Branch and Bound Linear Programming Adding the constraint x1 ≥ 1 gives

Linear Program 3:   min -x1 - x2
                    s.t. -x1 + 2x2 ≤ 5
                         9x1 + 4x2 ≤ 18
                         4x1 - 2x2 ≤ 4
                         x1 ≥ 1
                         x1, x2 ≥ 0

The optimal solution is x1 = 1.0, x2 = 2.25 with an optimal objective function value of -3.25. There are now two candidate problems. Select candidate problem 3 for further branching since it has the smallest linear programming relaxation value.

Branch and Bound Linear Programming

[Branch-and-bound tree: node 1 (root, x1 = 0.727272, x2 = 2.863636) branches on x1 ≤ 0 to node 2 (x1 = 0.0, x2 = 2.5) and on x1 ≥ 1 to node 3 (x1 = 1.0, x2 = 2.25). Node 3 branches on x2 ≤ 2 to node 4 (x1 = 1.111111, x2 = 2.0) and on x2 ≥ 3 to node 5 (infeasible). Node 4 branches on x1 ≤ 1 to node 6 (x1 = 1.0, x2 = 2.0) and on x1 ≥ 2 to node 7 (infeasible).]

Branch and Bound Linear Programming Variable x2 is the only fractional variable for linear program 3 and is used for branching. Create two new linear programs by using the constraints x2 ≤ 2 and x2 ≥ 3. First branch on x2 ≤ 2.

Linear Program 4:   min -x1 - x2
                    s.t. -x1 + 2x2 ≤ 5
                         9x1 + 4x2 ≤ 18
                         4x1 - 2x2 ≤ 4
                         x1 ≥ 1
                         x2 ≤ 2
                         x1, x2 ≥ 0

The optimal solution is x1 = 1.111111, x2 = 2 with an optimal objective function value of -3.111111.

Branch and Bound Linear Programming Next branch on x2 ≥ 3.

Linear Program 5:   min -x1 - x2
                    s.t. -x1 + 2x2 ≤ 5
                         9x1 + 4x2 ≤ 18
                         4x1 - 2x2 ≤ 4
                         x1 ≥ 1
                         x2 ≥ 3
                         x1, x2 ≥ 0

Linear program 5 is infeasible. There are now two candidate problems: candidate problems 2 and 4. Since linear program 4 has the smallest objective function value, select it for branching.

Branch and Bound Linear Programming Create linear programs 6 and 7 by branching on variable x1 = 1.111111. First branch on x1 ≤ 1.

Linear Program 6:   min -x1 - x2
                    s.t. -x1 + 2x2 ≤ 5
                         9x1 + 4x2 ≤ 18
                         4x1 - 2x2 ≤ 4
                         x1 ≥ 1
                         x2 ≤ 2
                         x1 ≤ 1
                         x1, x2 ≥ 0

The optimal solution is x1 = 1, x2 = 2 with an optimal objective function value of -3. Since this solution is integer, it is the new incumbent and provides the new upper bound z_UB = -3.

Branch and Bound Linear Programming Linear program 7 is infeasible. Candidate problem 2 is the only candidate problem remaining in the list. The linear programming relaxation value of this candidate problem is -2.5, which is worse than the incumbent upper bound value of -3. Delete candidate problem 2. There are no candidate problems remaining, so the optimal solution to the integer program is x1 = 1, x2 = 2 with optimal solution value -3.

Branch and Bound Linear Programming

[Final branch-and-bound tree, now fully processed: node 1 (x1 = 0.727272, x2 = 2.863636); node 2 via x1 ≤ 0 (x1 = 0.0, x2 = 2.5, fathomed by bound); node 3 via x1 ≥ 1 (x1 = 1.0, x2 = 2.25); node 4 via x2 ≤ 2 (x1 = 1.111111, x2 = 2.0); node 5 via x2 ≥ 3 (infeasible); node 6 via x1 ≤ 1 (x1 = 1.0, x2 = 2.0, the incumbent); node 7 via x1 ≥ 2 (infeasible).]
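The whole example can be replayed in code. Below is a minimal branch-and-bound sketch (mine, not from the text) that solves each LP relaxation with scipy.optimize.linprog and branches on the first fractional variable; on the instance above it recovers x1 = 1, x2 = 2 with value -3.

```python
import math
from scipy.optimize import linprog

def branch_and_bound(c, A, b, bounds):
    # Depth-first branch and bound for min c x, A x <= b, x within bounds,
    # all variables required to be integer.
    best_val, best_x = float("inf"), None
    stack = [bounds]
    while stack:
        bnds = stack.pop()
        res = linprog(c, A_ub=A, b_ub=b, bounds=bnds, method="highs")
        if not res.success:            # LP infeasible: fathom the node
            continue
        if res.fun >= best_val:        # bound no better than incumbent: fathom
            continue
        frac = [(j, v) for j, v in enumerate(res.x)
                if abs(v - round(v)) > 1e-6]
        if not frac:                   # integral solution: new incumbent
            best_val = res.fun
            best_x = [int(round(float(v))) for v in res.x]
            continue
        j, v = frac[0]                 # branch on the first fractional variable
        lo, hi = bnds[j]
        down, up = list(bnds), list(bnds)
        down[j] = (lo, math.floor(v))  # x_j <= floor(v)
        up[j] = (math.ceil(v), hi)     # x_j >= ceil(v)
        stack += [down, up]
    return best_val, best_x

c = [-1, -1]                           # min -x1 - x2
A = [[-1, 2], [9, 4], [4, -2]]
b = [5, 18, 4]
val, x = branch_and_bound(c, A, b, [(0, None), (0, None)])
# val = -3.0 and x = [1, 2], matching nodes 1-7 above
```

The stack makes this depth first; swapping it for a priority queue keyed on res.fun gives best-bound selection, discussed next.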

Node Selection Strategies It is necessary to pick a node (candidate problem) from a candidate list.
Select a node with the best (relaxed) objective function value. This is a breadth-first strategy and can leave many dangling nodes, which presents a problem of computer storage.
Depth first, or last-in, first-out (LIFO), where the node selected is the most recently created node with the smallest linear programming solution value.
Select a node based on calculations made related to fractional variables.
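Best-bound selection is usually implemented with a priority queue keyed on the relaxation value; a short sketch (the class and names are mine):

```python
import heapq

class CandidateList:
    """Candidate problems ordered by LP relaxation value, best bound first."""
    def __init__(self):
        self._heap = []
        self._count = 0                 # insertion counter breaks ties FIFO
    def push(self, lp_value, node):
        heapq.heappush(self._heap, (lp_value, self._count, node))
        self._count += 1
    def pop_best(self):
        # Smallest relaxation value first (minimization)
        return heapq.heappop(self._heap)[2]

cands = CandidateList()
cands.push(-2.5, "node 2")
cands.push(-3.111111, "node 4")
best = cands.pop_best()                 # "node 4", as in the example earlier
```

A depth-first (LIFO) strategy would instead use a plain stack, trading bound quality for low memory.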

Variable Selection Strategies
Pick the fractional variable that is the farthest from being integer.
Assign priorities (either static or dynamic). Branch on the fractional variable with the highest priority.
Use pseudo-costs: P_k^U := (z^{j+1} - z^j)/f_k,  P_k^D := (z^{j+2} - z^j)/(1 - f_k), where z^j is the parent relaxation value and z^{j+1}, z^{j+2} are the child relaxation values from branching on x_k.
Use estimates from dual-simplex pivots, or use strong branching.
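The "farthest from being integer" rule is one line of code; a sketch (the function is mine):

```python
def most_fractional(x, integer_idx):
    # Pick the integer-constrained variable whose LP value is farthest
    # from the nearest integer (distance 0.5 is the worst case).
    def dist(j):
        return abs(x[j] - round(x[j]))
    return max(integer_idx, key=dist)

k = most_fractional([0.1, 2.5, 3.9], [0, 1, 2])   # variable 1, distance 0.5
```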

Problem Formulation Quality THE BIG TAKE AWAY: Different polyhedra may contain exactly the same set of integer points! WHAT IS THE BIG TAKE AWAY?

Problem Formulation Quality Absolutely Critical: Understand the differences between
1. Γ, the feasible set of the MILP
2. Γ̄, the feasible set of its LP relaxation
3. conv(Γ), the convex hull of Γ, over which the MILP can be solved as an LP
In most interesting cases conv(Γ) ⊊ Γ̄. We want to find conv(Γ) or a close approximation.

Reformulation (Lot Sizing) Dynamic Lot Sizing:

Variables:
x_it: units of product i produced in period t
I_it: inventory level of product i at the end of period t
y_it: 1 if there is nonzero production of product i during period t, 0 otherwise

Parameters:
d_it: demand for product i in period t
f_it: fixed cost associated with nonzero production of product i in period t
c_it: marginal production cost for product i in period t
h_it: marginal holding cost charged to product i at the end of period t
g_t: production capacity in period t

Reformulation (Lot Sizing) Objective: Minimize the sum of marginal production cost, holding cost, and fixed cost:

Σ_{i=1}^{N} Σ_{t=1}^{T} (c_it x_it + h_it I_it + f_it y_it)

Constraint 1: Do not exceed total capacity in each period:

Σ_{i=1}^{N} x_it ≤ g_t,  t = 1,..., T

Reformulation (Lot Sizing) Constraint 2: Inventory balance equations:

I_{i,t-1} + x_it - I_it = d_it,  i = 1,..., N, t = 1,..., T

Constraint 3: Fixed cost forcing constraints:

x_it - M_it y_it ≤ 0,  i = 1,..., N, t = 1,..., T

Reformulation (Lot Sizing) Dynamic Lot Sizing: A standard formulation is:

min  Σ_{i=1}^{N} Σ_{t=1}^{T} (c_it x_it + h_it I_it + f_it y_it)
s.t. Σ_{i=1}^{N} x_it ≤ g_t,  t = 1,..., T
     I_{i,t-1} + x_it - I_it = d_it,  i = 1,..., N, t = 1,..., T
     x_it - M_it y_it ≤ 0,  i = 1,..., N, t = 1,..., T
     x_it, I_it ≥ 0,  i = 1,..., N, t = 1,..., T
     y_it ∈ {0, 1},  i = 1,..., N, t = 1,..., T.

Reformulation (Lot Sizing) Dynamic Lot Sizing: An alternate formulation: z_itk is 1 if, for product i in period t, the decision is to produce enough items to satisfy demand for periods t through k, and 0 otherwise.

Σ_{k=1}^{T} z_{i1k} = 1,  i = 1,..., N
-Σ_{j=1}^{t-1} z_{i,j,t-1} + Σ_{k=t}^{T} z_{itk} = 0,  i = 1,..., N, t = 2,..., T
Σ_{k=t}^{T} z_{itk} ≤ y_it,  i = 1,..., N, t = 1,..., T

Reformulation (Lot Sizing) Dynamic Lot Size (tvw200):

            Tight     Loose
Rows        3208      3208
Columns     7987      4600
LP Value    187503    116880

What happens when you try to solve?

Solver Options Most solvers take options. This is particularly important in integer programming. In GAMS you can communicate options to solvers through an option text file. First we tell GAMS which solver we want:

OPTION MIP = CoinCbc;

Next we tell GAMS that we want the first option file (you can have more than one: 2, 3, ...):

lot_size.optfile = 1;

Solver Options Here is some GAMS code that writes the CoinCbc option file:

file opt 'CoinCbc option file' /coincbc.opt/;
put opt;
put 'optcr 0' /;
put 'reslim 100000' /;
put 'nodelim 5000000' /;
put 'cuts off' /;
put 'knapsackcuts on' /;
putclose opt;

Make sure to put this code before the solve statement.

Solver Options Here is what the options file does:
set the tolerance on integer optimality to zero (optcr 0)
set a time limit of 100000 seconds (reslim 100000)
set a node limit of 5000000 (nodelim 5000000)
turn cutting plane generation off (cuts off)
turn knapsack cuts on (knapsackcuts on)

Epsilon Optimality Key Take Away: Actually proving optimality in branch and bound can be tough. The closer you get, the harder it becomes to resolve that last bit of integrality gap. If, for example, you set optcr = .01, then branch and bound will terminate (assume minimization here) when 0.99 · z_UB ≤ z_LB.
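The optcr rule can be written as a relative-gap check consistent with the 0.99 · z_UB ≤ z_LB condition above (the function name is mine; it assumes minimization with positive bounds):

```python
def should_stop(z_ub, z_lb, optcr):
    # Terminate when the incumbent is within a relative gap optcr of
    # the best lower bound: |z_UB - z_LB| <= optcr * |z_UB|.
    return abs(z_ub - z_lb) <= optcr * abs(z_ub)

stop_loose = should_stop(187550, 187503, 0.005)   # gap 47 is well inside 0.5%
stop_exact = should_stop(187543, 187508, 0.0)     # with optcr 0, must close fully
```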

Epsilon Optimality Experiment: run the tight version of the lot sizing model with ratio 0 and ratio .0005. Here is what happens for me:

               UB      LB      Nodes    Seconds
optcr = .000   187543  187508  112000   4304
optcr = .005   187550  187503  42       21.84

Is it worth it?

Epsilon Optimality Another Take Away: the data may not be that accurate to begin with. Standard Oil story.

Preprocessing By preprocessing we mean what is done to a formulation to make it more amenable to solution before solving the linear programming relaxation. Objective: make the linear programming relaxation of the mixed-integer program easy and tight. Try the following:
eliminate redundant constraints
fix variables
scale coefficients
coefficient reduction
rounding
improve bounds on variables and constraints
probing

We work with the following canonical form:

Σ_{j∈I+} a_j x_j + Σ_{j∈C+} a_j x_j - Σ_{j∈I-} a_j x_j - Σ_{j∈C-} a_j x_j ≥ b   (10)

Here all a_j > 0; I+ and I- index the integer variables, C+ and C- the continuous variables, and l_j and h_j denote the lower and upper bounds on variable x_j.

Preprocessing Rounding: If C+ = C- = ∅, a_j is integer for all j ∈ I+ ∪ I-, and α = gcd(a_j : j ∈ I+ ∪ I-), then the canonical form of the constraint is

Σ_{j∈I+} a_j x_j - Σ_{j∈I-} a_j x_j ≥ b   (11)

which is equivalent to

Σ_{j∈I+} (a_j/α) x_j - Σ_{j∈I-} (a_j/α) x_j ≥ b/α.   (12)

Then a valid rounding is

Σ_{j∈I+} (a_j/α) x_j - Σ_{j∈I-} (a_j/α) x_j ≥ ⌈b/α⌉.   (13)

Preprocessing Rounding Example: Consider the inequality (assume x1, x2 are general integer variables):

2x1 + 2x2 ≥ 3

A feasible solution is x1 = 1.5 and x2 = 0. Now let's round:

α = gcd(a1, a2) = gcd(2, 2) = 2
(1/2)(2x1 + 2x2) ≥ (1/2)3
x1 + x2 ≥ 1.5

Rounding up the right-hand side gives:

x1 + x2 ≥ 2

Is x1 = 1.5 and x2 = 0 feasible?
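The rounding step is mechanical: divide through by the gcd of the coefficients and round the right-hand side up. A sketch for a pure-integer ≥ constraint (the function is mine):

```python
from functools import reduce
from math import ceil, gcd

def round_ge_constraint(coeffs, rhs):
    # All-integer coefficients, >= sense, no continuous variables:
    # divide through by alpha = gcd(coeffs), then round the RHS up.
    alpha = reduce(gcd, coeffs)
    return [a // alpha for a in coeffs], ceil(rhs / alpha)

coeffs, rhs = round_ge_constraint([2, 2], 3)   # x1 + x2 >= 2, as above
```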

Preprocessing Rounding Example: Let's look at some geometry. Plot the feasible regions:

Γ1 = {(x1, x2) : 2x1 + 2x2 ≥ 3, x1, x2 ≥ 0}
Γ2 = {(x1, x2) : x1 + x2 ≥ 2, x1, x2 ≥ 0}

What is the relationship between Γ1 ∩ Z² and Γ2 ∩ Z²? Between Γ1 and Γ2?
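A brute-force check of those questions (names mine): enumerate the integer points of both regions inside a small box. The integer point sets coincide, while the continuous regions differ; for instance (0.75, 0.75) lies in Γ1 but not Γ2.

```python
def integer_points(in_region, box=4):
    # Integer points of a region, enumerated over the box [0, box]^2
    return {(x1, x2)
            for x1 in range(box + 1) for x2 in range(box + 1)
            if in_region(x1, x2)}

gamma1 = integer_points(lambda x1, x2: 2*x1 + 2*x2 >= 3)
gamma2 = integer_points(lambda x1, x2: x1 + x2 >= 2)
same = (gamma1 == gamma2)   # same integer points, but Gamma_2 is tighter
```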

Preprocessing Coefficient Reduction: If C- = I- = ∅ then it is valid to reduce the coefficients on the integer variables to b. That is, (10) is equivalent to

Σ_{j∈I+} min{a_j, b} x_j + Σ_{j∈C+} a_j x_j ≥ b.   (14)

Additionally, if C- or I- is not empty, then upper bounds on the variables in these sets are used as follows. Define:

λ := b + Σ_{j∈C-} a_j h_j + Σ_{j∈I-} a_j h_j.   (15)

Then (10) is equivalent to

Σ_{j∈I+} min{a_j, λ} x_j + Σ_{j∈C+} a_j x_j - Σ_{j∈I-} a_j x_j - Σ_{j∈C-} a_j x_j ≥ b.   (16)

Preprocessing Tightening Bounds: The canonical form

Σ_{j∈I+} a_j x_j + Σ_{j∈C+} a_j x_j - Σ_{j∈I-} a_j x_j - Σ_{j∈C-} a_j x_j ≥ b

implies for each k ∈ C+

a_k x_k ≥ b - Σ_{j∈C+, j≠k} a_j x_j - Σ_{j∈I+} a_j x_j + Σ_{j∈I-} a_j x_j + Σ_{j∈C-} a_j x_j.

The smallest the right-hand side can be is

b - Σ_{j∈C+, j≠k} a_j h_j - Σ_{j∈I+} a_j h_j + Σ_{j∈I-} a_j l_j + Σ_{j∈C-} a_j l_j.

Preprocessing Therefore, it is valid to reset l_k to

(b - Σ_{j∈C+, j≠k} a_j h_j - Σ_{j∈I+} a_j h_j + Σ_{j∈I-} a_j l_j + Σ_{j∈C-} a_j l_j)/a_k.

Using similar logic, the upper bound of a variable x_k indexed by k ∈ C- ∪ I- is adjusted to

(Σ_{j∈C+} a_j h_j + Σ_{j∈I+} a_j h_j - Σ_{j∈I-, j≠k} a_j l_j - Σ_{j∈C-, j≠k} a_j l_j - b)/a_k.

Preprocessing This is an iterative process. The upper and lower bounds on variables are adjusted until there is no improvement in an upper or lower bound. Once the lower and upper bounds are calculated one can apply coefficient reduction on the integer variable coefficients.
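In the special case where every variable has a positive coefficient (I- = C- = ∅), the lower-bound update reduces to l_k ← max(l_k, (b - Σ_{j≠k} a_j h_j)/a_k). A one-pass sketch (the function is mine; in practice you repeat passes until no bound moves):

```python
def tighten_lower_bounds(a, b, lo, hi):
    # One pass of bound tightening for sum_j a_j x_j >= b with all a_j > 0:
    # x_k >= (b - sum_{j != k} a_j * hi_j) / a_k
    new_lo = list(lo)
    for k in range(len(a)):
        rest = sum(a[j] * hi[j] for j in range(len(a)) if j != k)
        new_lo[k] = max(new_lo[k], (b - rest) / a[k])
    return new_lo

# 2 x1 + x2 >= 10 with x1 in [0, 3], x2 in [0, 8]:
lo2 = tighten_lower_bounds([2, 1], 10, [0, 0], [3, 8])   # x1 >= 1, x2 >= 4
```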

Preprocessing Example: Dynamic Lot Sizing

min  Σ_{i=1}^{N} Σ_{t=1}^{T} (c_it x_it + h_it I_it + f_it y_it)
s.t. Σ_{i=1}^{N} x_it ≤ g_t,  t = 1,..., T
     I_{i,t-1} + x_it - I_it = d_it,  i = 1,..., N, t = 1,..., T
     x_it - M_it y_it ≤ 0,  i = 1,..., N, t = 1,..., T
     x_it, I_it ≥ 0,  i = 1,..., N, t = 1,..., T
     y_it ∈ {0, 1},  i = 1,..., N, t = 1,..., T.

Preprocessing Example: Dynamic Lot Sizing (Continued) Consider the big-M constraints in canonical form:

x_it - M y_it ≤ 0   becomes   -x_it + M y_it ≥ 0

Recall

λ := b + Σ_{j∈C-} a_j h_j + Σ_{j∈I-} a_j h_j.

In this case, what is b, C-, I-, and λ? What do we get after coefficient reduction?
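Coefficient reduction shrinks M_it toward the largest production level that can ever be useful. One standard valid choice (a known fact, though the helper below is mine) is the remaining demand Σ_{k≥t} d_ik for each product:

```python
def tightened_big_m(demand):
    # demand: [d_1, ..., d_T] for one product; a valid big-M for period t
    # is the total demand still to be served in periods t..T, since
    # producing more than that can never help.
    T = len(demand)
    return [sum(demand[t:]) for t in range(T)]

big_m = tightened_big_m([3, 1, 4, 1, 5])   # [14, 11, 10, 6, 5]
```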

Preprocessing Example: Dynamic Lot Sizing (Continued) Let's tighten the bounds on the x_it variables. Assume 5 time periods (we drop the product subscript; all products are treated identically). In canonical form (taking I_5 = 0):

-I_4 - x_5 ≥ -d_5
-I_3 - x_4 + I_4 ≥ -d_4
-I_2 - x_3 + I_3 ≥ -d_3
-I_1 - x_2 + I_2 ≥ -d_2
-x_1 + I_1 ≥ -d_1

What is a valid upper bound on x_5? What about I_4? Work through to period 1.

Preprocessing Feasibility: The canonical form is

Σ_{j∈I+} a_j x_j + Σ_{j∈C+} a_j x_j - Σ_{j∈I-} a_j x_j - Σ_{j∈C-} a_j x_j ≥ b.

If

Σ_{j∈I+} a_j h_j + Σ_{j∈C+} a_j h_j - Σ_{j∈I-} a_j l_j - Σ_{j∈C-} a_j l_j < b

the model instance is not feasible.

Preprocessing Redundancy: The canonical form is

Σ_{j∈I+} a_j x_j + Σ_{j∈C+} a_j x_j - Σ_{j∈I-} a_j x_j - Σ_{j∈C-} a_j x_j ≥ b.

If

Σ_{j∈I+} a_j l_j + Σ_{j∈C+} a_j l_j - Σ_{j∈I-} a_j h_j - Σ_{j∈C-} a_j h_j ≥ b

the constraint is redundant and can be deleted. To get sharper activity bounds than the simple bound-based ones, solve (possibly relaxations of)

min  Σ_{j∈I+} a_j x_j + Σ_{j∈C+} a_j x_j - Σ_{j∈I-} a_j x_j - Σ_{j∈C-} a_j x_j
s.t. Ax ≥ b
     x ≥ 0
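Both the feasibility and redundancy tests compare the constraint's extreme activities over the variable bounds. A sketch (the function is mine; here each a_j carries its sign, so the I+/C+ versus I-/C- split is implicit):

```python
def classify_ge_constraint(a, b, lo, hi):
    # Max/min activity of sum_j a_j x_j over the box [lo, hi]:
    # positive coefficients take their upper (resp. lower) bound.
    max_act = sum(aj * (hi[j] if aj > 0 else lo[j]) for j, aj in enumerate(a))
    min_act = sum(aj * (lo[j] if aj > 0 else hi[j]) for j, aj in enumerate(a))
    if max_act < b:
        return "infeasible"   # even the best case violates >= b
    if min_act >= b:
        return "redundant"    # even the worst case satisfies >= b
    return "active"

r1 = classify_ge_constraint([1, 1], 5, [0, 0], [2, 2])    # max activity 4 < 5
r2 = classify_ge_constraint([1, 1], 1, [1, 1], [2, 2])    # min activity 2 >= 1
r3 = classify_ge_constraint([1, -1], 0, [0, 0], [2, 2])   # neither test fires
```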

Preprocessing Probing and Variable Fixing: In a mixed 0/1 linear program, probing refers to fixing a binary variable x_k to 0 or 1 and then observing any resulting implications. Assume variable x_k is binary.

If x_k = 1 for k ∈ I- and

Σ_{j∈I+} a_j h_j + Σ_{j∈C+} a_j h_j - Σ_{j∈I-\{k}} a_j l_j - Σ_{j∈C-} a_j l_j < b + a_k

then the model is infeasible, which implies it is valid to fix variable x_k to 0.

If x_k = 0 for k ∈ I+ and

Σ_{j∈I+\{k}} a_j h_j + Σ_{j∈C+} a_j h_j - Σ_{j∈I-} a_j l_j - Σ_{j∈C-} a_j l_j < b

then the model is infeasible, which implies it is valid to fix variable x_k to 1.
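Probing on a single ≥ constraint is the same activity computation with one binary pinned; a sketch (the function is mine, with signed coefficients as above):

```python
def probe_binaries(a, b, lo, hi, binaries):
    # For each binary x_k, tentatively fix it to 0 and to 1; if a value
    # makes the maximum activity of sum_j a_j x_j fall below b, the
    # constraint forces x_k to the opposite value.
    fixes = {}
    for k in binaries:
        for val in (0, 1):
            max_act = a[k] * val + sum(
                aj * (hi[j] if aj > 0 else lo[j])
                for j, aj in enumerate(a) if j != k)
            if max_act < b:
                fixes[k] = 1 - val
    return fixes

# 3 x1 + x2 >= 3 with binary x1, x2: x1 = 0 is impossible, so fix x1 = 1.
fixed = probe_binaries([3, 1], 3, [0, 0], [1, 1], [0, 1])   # {0: 1}
```

If both probe values fail for some variable, the whole model is infeasible; a fuller implementation would report that instead of overwriting the fix.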

Preprocessing How to preprocess in COIN-OR Cbc. See www.coin-or.org for free optimization solvers.

Declare a new solver interface; this is what will get preprocessed:

OsiSolverInterface *m_osisolverpre = NULL;
CglPreProcess process;
m_osisolverpre = process.preProcess(*solver->osisolver, false, 10);

Build the Cbc model with the preprocessed solver interface and solve:

CbcModel *model = new CbcModel(*m_osisolverpre);
model->branchAndBound();

Preprocessing Unwind to get the original model back:

process.postProcess(*model->solver());

Results with p0033.osil:

           Variables   Constraints   LP Relax   Nodes
Without    33          16            2520.6     614
With       24          13            2905.7     184