Separation Algorithms for 0-1 Knapsack Polytopes


Konstantinos Kaparis (University of Southampton)
Adam N. Letchford (Lancaster University)

February 2008, Corrected May 2008

Abstract

Valid inequalities for 0-1 knapsack polytopes often prove useful when tackling hard 0-1 Linear Programming problems. To use such inequalities effectively, one needs separation algorithms for them, i.e., routines for detecting when they are violated. We show that the separation problems for the so-called extended cover and weight inequalities can be solved exactly in O(nb) time and O((n + a_max)b) time, respectively, where n is the number of items, b is the knapsack capacity and a_max is the largest item weight. We also present fast and effective separation heuristics for the extended cover and lifted cover inequalities. Finally, we present a new exact separation algorithm for the 0-1 knapsack polytope itself, which is faster than existing methods. Extensive computational results are also given.

Keywords: integer programming, branch-and-cut, knapsack problems

1 Introduction

A 0-1 knapsack problem (KP) is a problem of the form

$$\max\{ c^T x : a^T x \le b,\ x \in \{0,1\}^n \},$$

where $c \in \mathbb{Z}^n_+$, $a \in \mathbb{Z}^n_+$ and $b \in \mathbb{Z}_+$. It is well known that the KP is NP-hard, but can be solved in O(nb) time by dynamic programming.

A 0-1 knapsack polytope is the convex hull of feasible solutions to a KP, i.e., a polytope (bounded polyhedron) of the form

$$\mathrm{conv}\{ x \in \{0,1\}^n : a^T x \le b \}.$$

Such polytopes have been widely studied and many classes of valid inequalities are known, such as the lifted cover inequalities (LCIs) of Balas [3] and Wolsey [31], and the weight inequalities (WIs) of Weismantel [29].

One motive for studying these polytopes, as pointed out by Crowder et al. [11], is that valid inequalities for them can be used as cutting planes for general 0-1 Linear Programs (0-1 LPs). The idea is to consider each individual constraint of a 0-1 LP as a 0-1 knapsack constraint (complementing variables if necessary to obtain non-negative coefficients) and derive cutting planes from the associated polytope.

The separation problem for 0-1 knapsack polytopes is this: given n, a, b and a vector $x^* \in [0,1]^n$, either find an inequality that is valid for the knapsack polytope and violated by $x^*$, or prove that none exists. One can define the separation problem for a given class of inequalities (such as LCIs or WIs) analogously. Separation algorithms, whether exact or heuristic, are needed if one wishes to use valid inequalities as cutting planes (see, e.g., Grötschel et al. [15] or Nemhauser & Wolsey [24]).

Crowder et al. [11] and Gu et al. [16] proposed some simple but effective heuristics for LCI separation, and used them to obtain good results for sparse, unstructured 0-1 LPs. LCIs are now incorporated into several software packages for integer programming. WI separation, on the other hand, has received less attention, although there exists a pseudo-polynomial time exact algorithm (Weismantel [29]) and a simple heuristic (Helmberg & Weismantel [20]).

In some cases, one can get better results by using general valid inequalities for knapsack polytopes as cutting planes. The polynomial equivalence of separation and optimisation (Grötschel et al. [15]) implies that the separation problem for knapsack polytopes can be solved in pseudo-polynomial time. Algorithms for this problem have been proposed by Boyd [7, 8, 9] and Boccia [6]. (For extensions to the mixed-integer case, see Yan & Boyd [33] and Fukasawa & Goycoolea [13].)

In this paper, we present and test several new exact and heuristic separation algorithms. The structure of the paper is as follows. In Section 2, we briefly review the literature on 0-1 knapsack polytopes, separation algorithms, and their application to general 0-1 LPs.

Section 3 is concerned with LCI separation. We consider first the so-called extended cover inequalities (ECIs), a special case of LCIs introduced by Balas [3]. We present a fairly simple exact algorithm that runs in O(n²b) time, and then a more complex exact O(nb) algorithm. We also present a new and effective heuristic. Finally, we show how all three of these ECI algorithms can be easily modified to yield heuristics for general LCIs.

Section 4 is concerned with WI separation. We first present an enhanced version of Weismantel's exact algorithm, which runs in O(nb a_max) time, where a_max is the largest item weight. We then present a more complex exact algorithm that runs in O((n + a_max)b) time.

In Section 5, we present an enhanced version of Boccia's exact separation algorithm. In Section 6, we give extensive computational results, conducted on two sets of benchmark instances: 12 sparse unstructured 0-1 LPs taken from MIPLIB [1], and 180 instances of the 0-1 Multi-Dimensional Knapsack Problem, created by Chu & Beasley [10]. Finally, concluding remarks are made in Section 7.

Throughout the paper, we use the notation $N = \{1, \ldots, n\}$ and $a_{\max} = \max_{j \in N} a_j$.

2 Literature Review

2.1 Lifted cover inequalities

Consider a 0-1 knapsack polytope of the form $Q := \mathrm{conv}\{ x \in \{0,1\}^n : a^T x \le b \}$. A set $C \subseteq N$ is called a cover if it satisfies $\sum_{j \in C} a_j > b$. Given any cover C, the cover inequality (CI) $\sum_{j \in C} x_j \le |C| - 1$ is clearly valid for Q. The strongest CIs are obtained when the cover C is minimal, in the sense that no proper subset of C is also a cover.

Given a minimal cover C, there exists at least one inequality of the form

$$\sum_{j \in C} x_j + \sum_{j \in N \setminus C} \alpha_j x_j \le |C| - 1 \qquad (1)$$

that induces a facet of Q. Computing the coefficients $\alpha_j$ is called lifting and the inequalities are called lifted cover inequalities (LCIs). The LCIs were discovered independently by Balas [3] and Wolsey [31].

Lifting is usually done sequentially, i.e., one variable at a time (though not always; see, e.g., Gu et al. [18]). At first sight, it seems that one must solve a 0-1 KP just to compute a single lifting coefficient. However, Zemel [34] showed that, given a fixed cover C and a fixed lifting sequence, one can compute all lifting coefficients in O(n|C|) time. Zemel's algorithm can be made even faster using results of Balas & Zemel [4] and Gu et al. [18], which enable one to determine most of the lifting coefficients in O(n + |C| log |C|) time.

As pointed out by Balas [3], a fast but naive way of lifting a CI is as follows. We compute $a^* := \max_{j \in C} a_j$ and define the extension of C as $E(C) := C \cup \{ j \in N \setminus C : a_j \ge a^* \}$. Then the extended cover inequality (ECI) $\sum_{j \in E(C)} x_j \le |C| - 1$ is valid for Q, though it is not guaranteed to be facet-inducing.
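To make the construction of E(C) concrete, the following short Python sketch (our illustration, not code from the paper) builds the extension of a given cover and evaluates the violation of the corresponding ECI at a fractional point; the function name is our own choice.

    def eci_violation(a, cover, x_star):
        """Build the extension E(C) of a cover C and return the violation of the
        extended cover inequality  sum_{j in E(C)} x_j <= |C| - 1  at x_star.
        a: item weights; cover: indices forming a cover; x_star: point to separate."""
        C = set(cover)
        a_star = max(a[j] for j in C)          # heaviest weight in the cover
        E = C | {j for j in range(len(a)) if j not in C and a[j] >= a_star}
        return sum(x_star[j] for j in E) - (len(C) - 1)   # positive => ECI violated

    # Example (weights 4, 4, 5, 6 and capacity 10, so C = {0, 1, 2} is a cover):
    # eci_violation([4, 4, 5, 6], [0, 1, 2], [0.9, 0.9, 0.9, 0.5])  ->  1.2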

Nemhauser & Wolsey [24] noted that one can derive more general LCIs of the form

$$\sum_{j \in C \setminus D} x_j + \sum_{j \in N \setminus C} \alpha_j x_j + \sum_{j \in D} \beta_j x_j \le |C \setminus D| + \sum_{j \in D} \beta_j - 1,$$

where C is a cover and $D \subseteq C$. (This follows from a more general result of Wolsey [32].) We will follow Gu et al. [16] in referring to the computation of the α and β coefficients as up-lifting and down-lifting, respectively, and in referring to the inequalities (1) as simple LCIs. The general LCIs include, not only the simple LCIs, but also the so-called (1, k)-configuration inequalities of Padberg [25] as special cases.

Gu et al. [18] remark that Zemel's algorithm can be adapted to compute all down-lifting coefficients in O(|C| n³) time. Although this is polynomial, it is time-consuming. To save time, one can instead compute approximate down-lifting coefficients by solving a sequence of |D| continuous 0-1 KPs. This idea, of solving LPs to compute approximate lifting coefficients, is also due to Wolsey [32]. We have used it ourselves in the past (Kaparis & Letchford [22]), and it works well in practice.

2.2 Separation algorithms for LCIs

Several exact and heuristic separation techniques have been suggested for the LCIs and their special cases. Crowder et al. [11] noted that the CIs can be written in the form $\sum_{j \in C} (1 - x_j) \ge 1$. This enabled them to formulate the separation problem for CIs itself as a 0-1 KP. Let $x^* \in [0,1]^n$ be the fractional solution to be separated. Define for each $j \in N$ the 0-1 variable $y_j$, taking the value 1 if and only if j is in the cover, and solve the following 0-1 KP:

$$\min \sum_{j \in N} (1 - x^*_j) y_j \qquad (2)$$
$$\text{s.t.} \quad \sum_{j \in N} a_j y_j \ge b + 1 \qquad (3)$$
$$y_j \in \{0,1\} \quad (j \in N). \qquad (4)$$

To save time, Crowder et al. solved this 0-1 KP heuristically, simply inserting items into C in non-decreasing order of $(1 - x^*_j)/a_j$. They lifted any violated CI found, to yield a violated simple LCI.

Later, it was proven that the separation problems for CIs, LCIs and simple LCIs are all NP-hard (Ferreira et al. [12], Gu et al. [17], Klabjan et al. [23]). The same seems likely for ECIs.

Gu et al. [16] suggested an alternative greedy heuristic for building the cover: simply insert items into C in non-increasing order of $x^*_j$. This heuristic tends to put items of smaller weight into C. As a result, more items in $N \setminus C$ tend to be lifted, and the resulting LCI tends to be violated by more.
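For illustration, the greedy rule of Crowder et al. can be sketched in a few lines of Python. The code below is our own rendering (names and the tolerance eps are ours), assumes integer weights, and returns a violated CI if the greedy cover yields one.

    def greedy_cover_inequality(a, b, x_star, eps=1e-6):
        """Greedy CI separation in the spirit of Crowder et al.: build a cover C by
        adding items in non-decreasing order of (1 - x*_j)/a_j, then test the cover
        inequality  sum_{j in C} x_j <= |C| - 1  at x_star."""
        order = sorted(range(len(a)), key=lambda j: (1.0 - x_star[j]) / a[j])
        C, weight = [], 0
        for j in order:
            C.append(j)
            weight += a[j]
            if weight >= b + 1:            # C is now a cover
                break
        else:
            return None                    # total weight <= b: no cover exists
        violation = sum(x_star[j] for j in C) - (len(C) - 1)
        return (C, violation) if violation > eps else None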

Gu et al. also recommend using general LCIs rather than simple LCIs. They obtained their best computational results by setting $D = \{ j \in C : x^*_j = 1 \}$, and using the following lifting sequence: first up-lift the variables with $x^*_j > 0$, then down-lift the variables in D, then up-lift the variables with $x^*_j = 0$.

In Kaparis & Letchford [22], we proposed some simple refinements to the lifting scheme of Gu et al. [16]. The most important of these is the following. If there exists a $k \in N \setminus C$ such that $x^*_k > 0$ and $a_k > b - \sum_{j \in D} a_j$, then the up-lifting coefficient of $x_k$ will be undefined. To resolve this problem, we remove one or more items from D if necessary.

Finally, we mention the paper by Gabrel & Minoux [14], which gives an exact separation algorithm for ECIs, based on the solution of at least n 0-1 KPs. Since our own algorithms (Subsections 3.1 and 3.2) are both simpler and faster, we do not go into details.

2.3 Weight inequalities

The weight inequalities (WIs for short) were introduced by Weismantel [29]. Given any set of items $W \subseteq N$ satisfying $\sum_{j \in W} a_j < b$, we define the residual capacity $r = b - \sum_{j \in W} a_j$. The WI then takes the following form:

$$\sum_{j \in W} a_j x_j + \sum_{j \in N \setminus W} \max\{0, a_j - r\}\, x_j \le \sum_{j \in W} a_j = b - r. \qquad (5)$$

In general, the WIs do not induce facets of Q, although they do under some restrictive conditions [29]. Atamtürk [2] presented a procedure, based on superadditive functions, for strengthening the WIs under certain conditions. Weismantel [29] also defined a related class of inequalities called extended weight inequalities. For the sake of brevity, we do not explain these developments in detail.

To our knowledge, only two papers have considered the separation problem for WIs. Weismantel himself noted that the problem can be solved exactly in pseudo-polynomial time. The idea is to write the WI (5) in the following form:

$$\sum_{j \in W : a_j \le r} a_j x_j + \sum_{j \in W : a_j > r} r x_j \le b - r - \sum_{j \in N : a_j > r} (a_j - r) x_j. \qquad (6)$$

Now the right-hand side of (6) depends on r, but not on W. Thus, for fixed r, the separation problem amounts to finding the set W that maximises the left-hand side of (6). This can be formulated as an equality-constrained 0-1 KP, which can be solved in pseudo-polynomial time. Solving one equality-constrained 0-1 KP for each possible value of r yields the desired pseudo-polynomial separation algorithm for WIs.
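As a concrete check of (5), the following Python sketch (ours, not from [29]) evaluates the violation of the WI defined by a given set W at a point x_star; a positive return value means the WI is violated.

    def wi_violation(a, b, W, x_star):
        """Violation of the weight inequality (5) defined by W at x_star:
           sum_{j in W} a_j x_j + sum_{j not in W} max(0, a_j - r) x_j <= b - r,
        where r = b - sum_{j in W} a_j is the residual capacity (assumed positive)."""
        W = set(W)
        r = b - sum(a[j] for j in W)
        assert r > 0, "W must satisfy sum_{j in W} a_j < b"
        lhs = sum(a[j] * x_star[j] for j in W)
        lhs += sum(max(0, a[j] - r) * x_star[j] for j in range(len(a)) if j not in W)
        return lhs - (b - r)               # positive value: the WI is violated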

The other paper dealing with WI separation is Helmberg & Weismantel [20]. They present a greedy separation heuristic that simply inserts items into W in non-increasing order of $x^*$ value, until no more items can be inserted. To our knowledge, it has not been shown that WI separation is NP-hard.

2.4 Separation algorithms for 0-1 knapsack polytopes

We now turn our attention to the separation problem for Q itself. Four main approaches have been proposed:

- Boyd [7] formulated the problem as an LP with O(nb) variables and constraints. The formulation is based on the classical dynamic programming representation of 0-1 KP solutions as paths in a network with O(nb) nodes and arcs.
- Boyd [9] proposed a subgradient method for finding so-called Fenchel cuts, which are valid inequalities that maximise the distance (according to some desired norm) from the point to be separated. Fenchel cuts in general are not facet-inducing.
- Boyd [8] also proposed a primal simplex approach to generate Fenchel cuts, which seemed to perform better than the subgradient method.
- Using the idea of polarity (e.g., Nemhauser & Wolsey [24]), one can formulate the separation problem as an LP with an exponential number of constraints. Boccia [6] recently proposed solving this LP directly via a cutting plane method.

We give some details of Boccia's method, since we will propose ways to improve it in Section 5. We seek a valid inequality $\alpha^T x \le \alpha_0$ for Q which is violated by a given vector $x^* \in [0,1]^n$. For simplicity, we normalise the inequality so that $\alpha_0 = 1$. The separation LP is then as follows:

$$\max \sum_{j \in N} x^*_j \alpha_j$$
$$\text{s.t.} \quad \sum_{j \in S} \alpha_j \le 1 \quad \big(\forall\, S \subseteq N : \textstyle\sum_{j \in S} a_j \le b\big) \qquad (7)$$
$$\alpha_j \ge 0 \quad (j \in N).$$

The constraints (7) ensure that the inequality $\alpha^T x \le 1$ is satisfied by every feasible solution to the given 0-1 KP instance. If the solution $\alpha^*$ has a profit larger than 1, then the inequality $(\alpha^*)^T x \le 1$ is violated by $x^*$. To check if a constraint (7) is violated by a given $\alpha^* \in [0,1]^n$, it suffices to solve the following 0-1 KP:

$$\max \sum_{j \in N} \alpha^*_j y_j$$
$$\text{s.t.} \quad \sum_{j \in N} a_j y_j \le b$$
$$y_j \in \{0,1\} \quad (j \in N).$$

If the solution $y^*$ to this 0-1 KP has a profit larger than 1, and we set $S = \{ j \in N : y^*_j = 1 \}$, then the inequality $\sum_{j \in S} \alpha_j \le 1$ is violated by $\alpha^*$. This observation enables one to solve the separation LP itself via a cutting plane method, in which a 0-1 KP is solved in each iteration.

As Boccia points out, this naive approach suffers from very long computing times, and significant problems with rounding errors. To address these problems, Boccia proposes a three-stage procedure. In the first stage, variables that satisfy $x^*_j \in \{0,1\}$ are temporarily eliminated from the problem, so that the separation LP can be solved in the subspace of the fractional variables. If a violated inequality is found, one proceeds to the second stage, in which the inequality is scaled and rounded to integers. Boccia does this by solving a small Integer Linear Program. In the third and final stage, the violated inequality is lifted to make it valid and facet-inducing for Q. To do this, Boccia solves a sequence of 0-1 KPs. As in Gu et al. [16], Boccia recommends down-lifting the variables with $x^*_j = 1$ before up-lifting the variables with $x^*_j = 0$.
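To fix ideas, here is a minimal Python sketch of the naive cutting-plane scheme just described, before any of Boccia's refinements (or ours in Section 5) are applied. It is our own illustration: it assumes SciPy is available for the LP, uses a textbook dynamic program as the knapsack oracle, and works in the full variable space with a simple tolerance, so it is exposed to exactly the running-time and rounding issues noted above.

    import numpy as np
    from scipy.optimize import linprog

    def solve_knapsack(profits, weights, capacity):
        """Exact 0-1 knapsack by dynamic programming over capacities.
        Returns (best profit, 0-1 incidence vector of an optimal solution)."""
        n = len(weights)
        dp = np.zeros(capacity + 1)                       # best profit with weight <= w
        take = np.zeros((n, capacity + 1), dtype=bool)
        for j in range(n):
            for w in range(capacity, weights[j] - 1, -1):
                cand = dp[w - weights[j]] + profits[j]
                if cand > dp[w]:
                    dp[w], take[j, w] = cand, True
        y, w = np.zeros(n, dtype=int), capacity
        for j in reversed(range(n)):                      # backtrack to recover the set S
            if take[j, w]:
                y[j], w = 1, w - weights[j]
        return dp[capacity], y

    def separate_knapsack_polytope(a, b, x_star, eps=1e-6, max_iter=500):
        """Naive cutting-plane scheme for the separation LP:
           max x*^T alpha  s.t.  sum_{j in S} alpha_j <= 1 for every feasible S,
                                 0 <= alpha_j <= 1.
        Constraints (7) are generated on the fly by a 0-1 KP with profits alpha and
        weights a.  Returns the coefficients of a violated inequality alpha^T x <= 1,
        or None if no such (normalised) inequality exists."""
        n, rows = len(a), []                              # rows = incidence vectors of sets S
        for _ in range(max_iter):
            res = linprog(-np.asarray(x_star, dtype=float),
                          A_ub=np.array(rows) if rows else None,
                          b_ub=np.ones(len(rows)) if rows else None,
                          bounds=[(0, 1)] * n, method="highs")
            alpha = res.x
            if -res.fun <= 1 + eps:
                return None                               # LP value <= 1: x* cannot be cut off
            z, y = solve_knapsack(alpha, a, b)            # most violated constraint (7)
            if z <= 1 + eps:
                return alpha                              # alpha^T x <= 1 is valid and cuts off x*
            rows.append(y)
        return None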

2.5 Applications to general 0-1 LPs

As mentioned above, inequalities for 0-1 knapsack polytopes can be used as cutting planes for general 0-1 LPs. Given a 0-1 LP in the form $\max\{ c^T x : Ax \le b,\ x \in \{0,1\}^n \}$, where $A \in \mathbb{Z}^{m \times n}$ and $b \in \mathbb{Z}^m$, we define the following two polytopes:

$$P := \{ x \in [0,1]^n : Ax \le b \} \quad \text{(the feasible region of the LP relaxation)},$$
$$P_I := \mathrm{conv}\{ x \in \{0,1\}^n : Ax \le b \} \quad \text{(the integral hull)}.$$

Furthermore, let us denote by $Q_i$ the 0-1 knapsack polytope associated with the ith constraint, i.e.,

$$Q_i := \mathrm{conv}\Big\{ x \in \{0,1\}^n : \sum_{j \in N} a_{ij} x_j \le b_i \Big\}.$$

(Negative $a_{ij}$ coefficients are easily handled by complementing variables.) Then, we have by definition:

$$P_I \subseteq \bigcap_{i=1}^m Q_i \subseteq P.$$

Optimising over $\bigcap_{i=1}^m Q_i$ with a method such as Lagrangian decomposition (Guignard & Kim [19]) is extremely time-consuming. Instead, Crowder et al. [11] proposed using valid inequalities for the $Q_i$ as cutting planes, to strengthen an initial formulation (an approach now known as cut-and-branch). This works best when the constraint matrix A is sparse and has no special structure. Using their separation heuristic for simple LCIs, they managed to solve ten 0-1 LPs that were previously considered intractable. This approach was developed further by Hoffman & Padberg [21] and Gu et al. [16]. Boyd [8, 9] performed some experiments on the same 10 instances as Crowder et al. [11], using his routines for generating Fenchel cuts. The resulting bounds were very strong, but running times were rather large.

Cover inequalities and their variants have also been applied, with more or less success, to various 0-1 LPs with special structure. See, e.g., Bektas & Oguz [5], Gabrel & Minoux [14] and our own paper [22] for applications to multidimensional knapsack problems, and Ferreira et al. [12] for an application to multiple knapsack problems. We are not aware of any application of WIs to general 0-1 LPs, but an application to the quadratic knapsack problem can be found in Helmberg & Weismantel [20].

Finally, we remark that one need not restrict oneself to deriving knapsack facets from the rows of the given system $Ax \le b$. Indeed, any surrogate constraint obtained as a non-negative linear combination of the original constraints can be used as a source row for generating knapsack facets. This is the approach taken by Boccia himself [6] when tackling certain facility location problems, and by Uchoa et al. [27] when tackling the capacitated minimum spanning tree problem.

3 New Separation Algorithms for ECIs and LCIs

In this section, we present two exact algorithms and a heuristic for ECI separation, and then show how all three algorithms can be modified to yield heuristics for the separation of the (stronger and more general) LCIs.

3.1 A simple O(n²b) exact algorithm for ECIs

We begin by presenting a fairly simple exact algorithm for ECI separation, which involves the solution of a sequence of 0-1 KPs. Recall from Subsection 2.2 that Crowder et al. [11] formulated the separation problem for CIs as the 0-1 KP (2)-(4). We will need the following lemma:

Lemma 1 Let $C^* \subseteq N$ be the cover obtained by solving the 0-1 KP (2)-(4), and let $a^* := \max_{j \in C^*} a_j$. If the ECI with $C = C^*$ is not violated, then a violated ECI must satisfy $C \subseteq \{ j \in N : a_j \le a^* \}$.

Proof. The violation of an ECI with cover C is equal to the sum of two terms:

1. $\sum_{j \in C} x^*_j - |C| + 1$ (the violation of the CI with cover C);

2. $\sum_{j \in N \setminus C : a_j \ge \max_{i \in C} a_i} x^*_j$.

By definition, $C^*$ is the cover that maximises the first term. So, if we replace $C^*$ with a different cover C, the first term cannot increase. Moreover, if $\max_{j \in C} a_j > a^*$, then the second term cannot increase either.

If one could make the inequality in Lemma 1 strict, an iterative algorithm for ECI separation would immediately follow: solve the 0-1 KP (2)-(4), eliminate the items with $a_j \ge a^*$, and repeat until the total weight of the remaining items no longer exceeds the knapsack capacity. However, we were unable to determine whether the lemma is valid with strict inequality. Fortunately, we have the following refinement of Lemma 1:

Proposition 1 Let $C^*$ and $a^*$ be defined as in Lemma 1, let $S^* = \{ j \in C^* : a_j = a^* \}$ and let $k^*$ be an item in $S^*$ of minimum $x^*$ value. If the ECI with $C = C^*$ is not violated, then a violated ECI must satisfy $C \subseteq \{ j \in N \setminus \{k^*\} : a_j \le a^* \}$.

Proof. Suppose that the ECI corresponding to $C^*$ is not violated, but there exists a cover C that yields a violated ECI. By Lemma 1, we can assume $C \subseteq \{ j \in N : a_j \le a^* \}$. Now recall again the two components of the violation mentioned in the proof of Lemma 1. If C contained at least as many items of weight $a^*$ as $C^*$ did, then the second term would be no larger for C than it was for $C^*$, and the ECI would not be violated. Thus, C must contain fewer items of weight $a^*$ than $C^*$ did. Since $k^*$ is the item with smallest $x^*$ value in $S^*$, we can assume that $k^*$ does not belong to C.

The following separation algorithm follows immediately from Proposition 1:

0. Input: a knapsack constraint $\sum_{j \in N} a_j x_j \le b$ and a vector $x^* \in [0,1]^n$.
1. Set $B := \{ j \in N : x^*_j = 0 \}$.
2. Solve the 0-1 KP:
   $$\min \sum_{j \in N \setminus B} (1 - x^*_j) y_j \quad \text{s.t.} \quad \sum_{j \in N \setminus B} a_j y_j \ge b + 1, \quad y_j \in \{0,1\} \ (j \in N \setminus B).$$
3. Let C be the resulting cover. If the corresponding ECI is violated, output it.
4. Compute $a^*$ and $k^*$ as above. Insert into B the item $k^*$ and all items in $N \setminus B$ with weight larger than $a^*$.
5. If $\sum_{j \in N \setminus B} a_j \le b$, stop. Otherwise return to 2.

The running time of this algorithm is clearly dominated by the solution of at most n 0-1 KPs. We will see in the next subsection that, even though these 0-1 KPs are of greater-than type, they can be solved in O(nb) time just like the standard 0-1 KP. So, the algorithm has the rather large running time of O(n²b). However, in practice only a small number of 0-1 KPs, of rapidly decreasing size, typically need to be solved. They can often be solved quickly by branch-and-bound. Moreover, the algorithm often returns several violated inequalities rather than just one.

3.2 An O(nb) exact algorithm for ECIs

In this section, we present a faster exact algorithm for ECI separation. First, we show explicitly how to solve the separation problem for the weaker CIs (i.e., the 0-1 KP (2)-(4)) in O(nb) time by dynamic programming. Then, we will show how to modify the dynamic program in order to solve the separation problem for the ECIs themselves in O(nb) time.

For k = 1, ..., n and r = 0, ..., b, define:

$$f(k, r) := \min\Big\{ \sum_{j=1}^{k} (1 - x^*_j) y_j \;:\; \sum_{j=1}^{k} a_j y_j = r,\ y \in \{0,1\}^k \Big\}.$$

Also, for k = 1, ..., n, define:

$$g(k) := \min\Big\{ \sum_{j=1}^{k} (1 - x^*_j) y_j \;:\; \sum_{j=1}^{k} a_j y_j \ge b + 1,\ y \in \{0,1\}^k \Big\}.$$

The following algorithm computes all of the f(k, r) and g(k) values, and thereby solves the CI separation problem:

    Set f(k, r) := +∞ for k = 0, ..., n and r = 0, ..., b.
    Set f(0, 0) := 0.
    Set g(k) := +∞ for k = 1, ..., n.
    For k = 1, ..., n:
        For r = 0, ..., b:
            If f(k-1, r) < f(k, r), set f(k, r) := f(k-1, r).
        For r = 0, ..., b - a_k:
            If f(k-1, r) + (1 - x*_k) < f(k, r + a_k), set f(k, r + a_k) := f(k-1, r) + (1 - x*_k).
        For r = b - a_k + 1, ..., b:
            If f(k-1, r) + (1 - x*_k) < g(k), set g(k) := f(k-1, r) + (1 - x*_k).
        If g(k) < 1, output the violated CI.

It is easy to see that this algorithm runs in O(nb) time.
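A direct Python rendering of this dynamic program (our own code, not the authors' implementation) is given below. It returns the arrays f and g; recovering the cover itself requires standard backtracking, which we omit. The ECI variant described next changes only the initial sorting and the final test.

    import math

    def ci_separation_dp(a, x_star, b):
        """Dynamic program of Subsection 3.2 (our rendering).  After processing item k,
        f[r] is the minimum of sum (1 - x*_j) y_j over subsets of the first k items of
        total weight exactly r (r <= b), and g[k] is the cheapest way of first exceeding
        the capacity when item k is added.  A violated CI exists iff some g[k] < 1."""
        n = len(a)
        f = [math.inf] * (b + 1)
        f[0] = 0.0
        g = [math.inf] * (n + 1)                    # g[0] is unused
        for k in range(1, n + 1):
            cost, w = 1.0 - x_star[k - 1], a[k - 1]
            for r in range(b, -1, -1):              # downwards: each item used at most once
                if f[r] == math.inf:
                    continue
                if r + w <= b:
                    f[r + w] = min(f[r + w], f[r] + cost)
                else:                               # r + w >= b + 1: a cover is completed
                    g[k] = min(g[k], f[r] + cost)
            # g[k] < 1 signals a violated CI among items 1..k; the cover itself can be
            # recovered by standard backtracking over the DP.
        return f, g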

Suppose now that the items in N have been sorted in non-decreasing order of $a_j$. Under this assumption, we have a violated ECI if and only if $g(k) - \sum_{j=k+1}^{n} x^*_j < 1$ for some $k \in N$. Thus, to convert the exact CI separation algorithm into an exact ECI separation algorithm, it suffices to make the following two minor changes:

i) Sort the items in non-decreasing order of $a_j$ before running the algorithm.
ii) Change the last if statement to: If $g(k) - \sum_{j=k+1}^{n} x^*_j < 1$, output the violated ECI.

Note that the initial sorting can be performed in O(n + a_max) time, using bucket sort with a_max buckets. This leads to the following result:

Theorem 1 The separation problem for ECIs can be solved exactly in O(nb) time.

Proof. The time taken to sort the items, O(n + a_max), is dominated by the time taken to solve the dynamic program, O(nb).

We remark that this algorithm may in fact return several violated ECIs in a single call. Moreover, it can be made faster in practice by excluding variables with $x^*_j = 0$ from the DP, although, each time a violated ECI is found, one should check whether any such variables can be inserted into the extension E(C).

3.3 A fast heuristic for ECIs

The ECI separation algorithm presented in Subsection 3.1 can be easily modified to form a fast heuristic for ECI separation: one simply solves the subproblems in step 2 heuristically instead of exactly. Following Crowder et al. [11], we simply insert items into C in non-decreasing order of $(1 - x^*_j)/a_j$. This leads to the following algorithm:

1. Sort the items in N in non-decreasing order of $(1 - x^*_j)/a_j$, and store them in a list L. Initialise the cover C as the empty set and initialise the weight threshold $\bar a := b$.
2. Remove an item from the head of the sorted list L. If its weight is larger than $\bar a$, ignore it; otherwise insert it into C. If C is now a cover, go to step 4.
3. If L is empty, stop. Otherwise, return to step 2.
4. If the ECI corresponding to C is violated by $x^*$, output it.
5. Find $\bar k$, the heaviest item in C, decrease $\bar a$ accordingly, and delete $\bar k$ from C. Return to step 2.

This heuristic can easily be implemented so that it runs in O(n²) time. Moreover, O(n²) seems to be necessary, since outputting a single violated ECI itself takes O(n) time. However, one can consider a variant of the heuristic in which, in step 4, one stops as soon as one violated ECI has been output. With some care, this variant can be implemented to run in only O(n log n) time. We skip the details, partly for the sake of brevity, but also because the O(n²) version already works very well in practice (see Subsection 6.3).
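The following Python sketch is our rendering of this heuristic. Step 5 says to decrease the threshold "accordingly"; we interpret this as setting it to the weight of the removed item, which is an assumption on our part, as is the tolerance eps.

    def eci_heuristic(a, b, x_star, eps=1e-6):
        """Sketch of the ECI separation heuristic of Subsection 3.3.  Items are scanned
        in non-decreasing order of (1 - x*_j)/a_j; whenever the current set C becomes a
        cover, the corresponding ECI is tested at x_star, the heaviest item is removed
        and the weight threshold is lowered (assumed: to the weight of that item)."""
        n = len(a)
        order = sorted(range(n), key=lambda j: (1.0 - x_star[j]) / a[j])
        C, weight, a_bar = [], 0, b
        violated = []                                   # list of (cover, violation) pairs
        for j in order:
            if a[j] > a_bar:
                continue                                # step 2: item too heavy, ignore it
            C.append(j)
            weight += a[j]
            if weight > b:                              # C is a cover: steps 4 and 5
                a_star = max(a[i] for i in C)
                E = set(C) | {i for i in range(n) if i not in C and a[i] >= a_star}
                viol = sum(x_star[i] for i in E) - (len(C) - 1)
                if viol > eps:
                    violated.append((list(C), viol))
                k_bar = max(C, key=lambda i: a[i])      # heaviest item in C
                C.remove(k_bar)
                weight -= a[k_bar]
                a_bar = a[k_bar]                        # assumed reading of "decrease accordingly"
        return violated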

3.4 Heuristics for LCIs

Recall that ECIs are a special case of, and dominated by, the LCIs. Any of the algorithms for ECI separation presented in the previous three subsections can be easily converted into a heuristic for LCI separation. Namely, each time we generate an ECI, we convert it into an LCI that is violated by at least as much. To do this, we use the same lifting order as Gu et al. [16]. The resulting algorithm is as follows:

0. Input: a knapsack constraint $\sum_{j \in N} a_j x_j \le b$, a vector $x^* \in [0,1]^n$ and a cover C found by any of the ECI separation algorithms.
1. Let $N_0 = \{ j \in N : x^*_j = 0 \}$, $N_1 = \{ j \in N : x^*_j = 1 \}$ and $F = N \setminus (N_0 \cup N_1)$.
2. Set $D := C \cap N_1$.
3. If there exists a $k \in N \setminus (C \cup N_0)$ such that $a_k > b - \sum_{j \in D} a_j$, then remove one or more items from D.
4. Up-lift variables in $F \setminus C$. If the resulting inequality is not violated, stop.
5. Down-lift variables in D and up-lift variables in $N_0$ to produce the final LCI.

Note that the variables in $F \setminus C$ have to be up-lifted before one checks the LCI for violation. This, and the need to perform down-lifting, makes LCI separation more time-consuming than ECI separation. Thus, the bounds on running times given for ECIs no longer hold.
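As an illustration of up-lifting (used in step 4 above and discussed in Subsection 2.2), here is a Python sketch (ours) that computes sequential up-lifting coefficients exactly by solving one small 0-1 KP per coefficient; it does not use Zemel's faster scheme, down-lifting is omitted, and it assumes $a_k \le b$ for every lifted item k.

    def kp_max(profits, weights, capacity):
        """Max-profit 0-1 knapsack by DP over capacities (helper for lifting)."""
        if capacity < 0:
            raise ValueError("item does not fit: lifting coefficient undefined")
        dp = [0.0] * (capacity + 1)
        for p, w in zip(profits, weights):
            for c in range(capacity, w - 1, -1):
                dp[c] = max(dp[c], dp[c - w] + p)
        return dp[capacity]

    def sequential_uplift(a, b, cover, lift_order):
        """Sequentially up-lift the CI  sum_{j in C} x_j <= |C| - 1.  For each k in
        lift_order (k not in C, a_k <= b), the largest valid coefficient is
           alpha_k = beta - max{ sum_j alpha_j x_j : sum_j a_j x_j <= b - a_k },
        the maximum being taken over the variables lifted so far."""
        S = list(cover)
        alpha = {j: 1.0 for j in cover}
        beta = len(cover) - 1
        for k in lift_order:
            zeta = kp_max([alpha[j] for j in S], [a[j] for j in S], b - a[k])
            alpha[k] = beta - zeta
            S.append(k)
        return alpha, beta          # the lifted inequality is sum_j alpha_j x_j <= beta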

4 Exact Separation Algorithms for WIs

As we mentioned in Subsection 2.3, the complexity of the separation problem for WIs is unknown. We conjecture that it is weakly NP-hard. In this section, we present an enhanced version of Weismantel's pseudo-polynomial exact algorithm, and then present a faster exact algorithm.

4.1 Enhancements to Weismantel's algorithm

We begin this subsection by analysing the running time of Weismantel's algorithm. Recall that it involves the solution of one equality-constrained 0-1 KP for each possible value of the residual capacity $r = b - \sum_{j \in W} a_j$. We will need the following lemma:

Lemma 2 Suppose that $x^* \in [0,1]^n$ and $a^T x^* \le b$. A necessary condition for a WI to be violated is that $0 < r < a_{\max}$.

Proof. When r = 0, the WI reduces to the original knapsack constraint $a^T x \le b$. When $r \ge a_{\max}$, the WI is implied by the upper bounds $x_j \le 1$ for all $j \in W$.

The assumption that $x^* \in [0,1]^n$ and $a^T x^* \le b$ is of course reasonable. So, only a_max - 1 equality-constrained 0-1 KPs need to be solved. If we solve an equality-constrained 0-1 KP with a standard dynamic programming algorithm, it takes O(nb) time. Thus:

Proposition 2 Weismantel's algorithm can be implemented to run in O(nb a_max) time.

This running time is excessive, in both theory and practice. We have found three simple ways to speed up the algorithm, which make a dramatic difference in practice:

1. It can be shown that, if $x^*_k = 0$, then one can assume that $k \notin W$ without losing any violated WIs. Similarly, if $x^*_k = 1$, then one can assume that $k \in W$. This reduces the number of variables in the equality-constrained 0-1 KPs.

2. For a given value of r, it may be that no $W \subseteq N$ exists satisfying $\sum_{j \in W} a_j = b - r$. As a result, the equality-constrained 0-1 KP will be infeasible. To avoid wasting time trying to solve such infeasible problems, it helps to compute the possible values that $\sum_{j \in W} a_j$ can take before moving on to solve the equality-constrained 0-1 KPs. This can easily be done in O(nb) time and O(b) space, by dynamic programming.

3. If one solves the remaining equality-constrained 0-1 KPs via branch-and-bound, one can abort the branch-and-bound run as soon as the upper bound is less than or equal to the right-hand side of (6).

For the sake of completeness, we include formal proofs of the two results mentioned above.

Proposition 3 Assume (as before) that $x^* \in [0,1]^n$ and $a^T x^* \le b$. If $x^*_k = 0$, then one can assume that $k \notin W$ without losing any violated WIs.

Proof. Suppose that the WI (5) is violated by $x^*$ and that there exists a $k \in W$ such that $x^*_k = 0$. If we delete k from W, we obtain the following modified WI:

$$\sum_{j \in W \setminus \{k\}} a_j x_j + \sum_{j \in N \setminus W} \max\{0, a_j - r - a_k\}\, x_j \le b - r - a_k. \qquad (8)$$

Now, if we sum together:

- the original knapsack constraint $a^T x \le b$ multiplied by $a_k/(r + a_k)$,
- the modified WI (8) multiplied by $r/(r + a_k)$, and
- a suitable non-negative linear combination of non-negativity inequalities,

we obtain the valid inequality:

$$\sum_{j \in W \setminus \{k\}} a_j x_j + \sum_{j \in N \setminus W} \max\{0, a_j - r\}\, x_j + \frac{a_k^2}{r + a_k}\, x_k \le b - r.$$

This inequality is identical to the original WI, apart from the fact that the left-hand-side coefficient of $x_k$ is smaller. Thus, it must be violated by $x^*$ by the same amount as the original WI. Since $x^*$ satisfies the original knapsack constraint and the non-negativity inequalities, the modified WI (8) must be violated by at least as much as the original.

Proposition 4 Assume (as before) that $x^* \in [0,1]^n$ and $a^T x^* \le b$. If $x^*_k = 1$, then one can assume that $k \in W$ without losing any violated WIs.

Proof. Suppose that the WI (5) is violated by $x^*$ and that there exists a $k \notin W$ such that $x^*_k = 1$. If $a_k \le r$, then inserting k into W yields a WI that is violated by at least as much as the original. So suppose that $a_k > r$. If we take the inequality $a^T x \le b$ and subtract from it r copies of the equation $x_k = 1$, we obtain:

$$\sum_{j \in N \setminus \{k\}} a_j x_j + (a_k - r) x_k \le b - r.$$

So the original WI cannot be violated, a contradiction.

4.2 An O((n + a_max)b) exact algorithm

Improving on the running time bound of O(nb a_max) turns out to be a non-trivial exercise. We found it helpful to treat the two terms on the left-hand side of (6), and the right-hand side of (6), separately. Thus, we make the following three definitions.

Definition 1 For given integers r, s, with $0 < r < a_{\max}$ and $0 \le s \le b - r$, let f(r, s) denote

$$\max\Big\{ \sum_{j \in N : a_j \le r} a_j x^*_j y_j \;:\; \sum_{j \in N : a_j \le r} a_j y_j = s,\ y_j \in \{0,1\}\ (j \in N : a_j \le r) \Big\}.$$

Definition 2 For given integers r, s, with $0 < r < a_{\max}$ and $0 \le s \le b - r$, let g(r, s) denote

$$\max\Big\{ \sum_{j \in N : a_j > r} x^*_j y_j \;:\; \sum_{j \in N : a_j > r} a_j y_j = s,\ y_j \in \{0,1\}\ (j \in N : a_j > r) \Big\}.$$

Definition 3 For a given integer r, with $0 < r < a_{\max}$, let h(r) denote the right-hand side of (6) computed with respect to $x^*$. That is:

$$h(r) = b - r - \sum_{j \in N : a_j > r} (a_j - r) x^*_j.$$

Armed with these definitions, we propose the following exact separation algorithm for WIs:

0. Input: a knapsack constraint $\sum_{j \in N} a_j x_j \le b$ and a vector $x^* \in [0,1]^n$ to be separated.
1. Sort the items in N in non-decreasing order of weight.
2. Compute f(r, s) for r = 1, ..., a_max - 1 and for s = 0, ..., b - r.
3. Compute g(r, s) for r = 1, ..., a_max - 1 and for s = 0, ..., b - r.
4. Compute h(r) for r = 1, ..., a_max - 1.
5. For r = 1, ..., a_max - 1, compute the maximum possible violation of a WI with $\sum_{j \in W} a_j = b - r$. That is, compute:
   $$\max_{0 \le s \le b - r} \ f(r, s) + r\, g(r, b - r - s) - h(r).$$
   If this quantity is positive, output the violated WI.

We have the following theorem.

Theorem 2 The separation problem for WIs can be solved exactly in O((n + a_max)b) time.

Proof. As noted in Subsection 3.2, Step 1 can be performed in O(n + a_max) time. Step 2 can be performed in O(nb) time: just use the standard dynamic programming algorithm for the 0-1 KP, but introduce new items in non-decreasing order of weight. Similarly, Step 3 can be performed in O(nb) time: use the standard dynamic programming algorithm for the 0-1 KP, but introduce new items in non-increasing order of weight. Step 4 can be performed in O(n a_max) time. Finally, Step 5 takes O(b a_max) time. The terms O(n + a_max) and O(n a_max) are dominated by O(nb).
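To make Steps 2-5 concrete, here is a Python sketch (ours) of the f/g/h computation. For clarity it rebuilds the two tables from scratch for every r, so it runs in O(nb a_max) rather than the bound of Theorem 2, it assumes every item fits (a_max <= b), and it reports only the maximum violation for each r; the set W itself could be recovered by standard DP backtracking.

    import math

    def wi_exact_separation(a, b, x_star):
        """Exact WI separation following Definitions 1-3: for each residual capacity r,
        evaluate  max_s f(r, s) + r * g(r, b - r - s) - h(r).  A positive value
        certifies a violated WI whose set W has total weight b - r."""
        n, a_max = len(a), max(a)
        best = {}                                          # r -> maximum violation found
        for r in range(1, a_max):
            cap = b - r
            if cap <= 0:                                   # only possible if some a_j > b
                break
            light = [j for j in range(n) if a[j] <= r]
            heavy = [j for j in range(n) if a[j] > r]
            # f[s]: max sum of a_j x*_j over light items of total weight exactly s;
            # g[s]: max sum of x*_j over heavy items of total weight exactly s.
            f = [-math.inf] * (cap + 1); f[0] = 0.0
            g = [-math.inf] * (cap + 1); g[0] = 0.0
            for j in light:
                for s in range(cap, a[j] - 1, -1):
                    if f[s - a[j]] > -math.inf:
                        f[s] = max(f[s], f[s - a[j]] + a[j] * x_star[j])
            for j in heavy:
                for s in range(cap, a[j] - 1, -1):
                    if g[s - a[j]] > -math.inf:
                        g[s] = max(g[s], g[s - a[j]] + x_star[j])
            h = cap - sum((a[j] - r) * x_star[j] for j in heavy)
            best[r] = max((f[s] + r * g[cap - s] - h
                           for s in range(cap + 1)
                           if f[s] > -math.inf and g[cap - s] > -math.inf),
                          default=-math.inf)
        return best                    # any r with best[r] > 0 gives a violated WI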

5 An Enhanced Version of Boccia's Algorithm

We now turn our attention to the separation problem for the 0-1 knapsack polytope itself. We have found several effective ways to enhance Boccia's algorithm. They are explained in the following subsections.

5.1 Eliminating rows from consideration

If $x^*$ can be quickly proven to lie in Q, there is no need to search for violated inequalities that induce facets of Q. The following propositions, which are easy to prove, give easily checkable sufficient conditions for this to be the case.

Proposition 5 Assume (as before) that $x^* \in [0,1]^n$ and $a^T x^* \le b$. If $\sum_{j \in N : x^*_j > 0} a_j \le b$, then $x^* \in Q$.

Proposition 6 Assume (as before) that $x^* \in [0,1]^n$ and $a^T x^* \le b$. If $a \in \{0,1\}^n$, then $x^* \in Q$.

5.2 Looking for violated LCIs first

Our best algorithm for LCI separation (see Subsection 6.5) is very fast and effective. Moreover, the LCIs are typically less dense and better behaved numerically than general facet-inducing inequalities. Thus, we recommend calling the LCI separation algorithm first, and resorting to exact knapsack separation only when one does not find a violated LCI.

5.3 Solving knapsack subproblems heuristically

Our goal when solving the knapsack subproblem is to find a violated constraint of the form (7). Note, however, that we need not necessarily find the most-violated constraint. Thus, one can solve the knapsack subproblem heuristically, and only call an exact routine when the heuristic fails. We run a simple greedy heuristic, based on profit-to-weight ratio, and then improve the solution obtained using a simple form of local search. A move in our local search algorithm consists in deleting one item from the knapsack and inserting one or more other items. We abort the local search if we find an inequality (7) violated by more than 0.25; otherwise we continue until a local optimum is reached.

5.4 Re-using old constraints

Initialising each separation LP with the trivial bounds ($0 \le \alpha_j \le 1$ for all $j \in N$) is not a good idea. One can achieve more rapid convergence, and improve running times substantially, by re-cycling constraints (7) that have already been generated.

We found the following idea effective: each time the separation LP has been solved to optimality, store in memory all of the constraints (7) that were generated. In the next call to the knapsack separator, use these constraints to initialise the separation LP. To prevent the separation LPs from growing too large, solve the initialised separation LP and throw away the constraints that are non-binding (have zero dual price). Then commence solving knapsack subproblems as usual until the separation LP is solved to optimality.

Although this seems like a simple idea, there is a significant complication: the separation LP is solved in the space of the fractional variables, but the set F changes from one iteration to the next. To deal with this, we proceed as follows. Any item that was previously in F, but is now integer, is deleted from all constraints (7) in which it appears. Next, we check each remaining constraint (7) and delete further items (taken from F), if necessary, to obtain valid sets S. Finally, we check if the resulting constraints (7) can be strengthened by inserting into S some items that were previously integral but are now in F. In practice, the set F does not change radically from one separation call to the next.

5.5 A fast scaling heuristic

Recall that, once a violated inequality is found, it is then necessary to scale it to integers to avoid problems with rounding errors. Boccia formulates this problem itself as a (small) integer linear program. We have found that the following simple heuristic works in over 90% of cases: let $\alpha_{\min}$ be the smallest fractional α value. Divide the entire inequality by $\alpha_{\min}$, and check if the resulting inequality is integral (within some desired tolerance). Note that this heuristic works if and only if there is at least one variable in the desired facet-inducing inequality that has a left-hand-side coefficient equal to 1, which is of course the case for all LCIs.
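A minimal Python sketch of this scaling heuristic follows; the tolerance and our reading of "smallest fractional α value" as the smallest positive coefficient are assumptions on our part.

    def scale_to_integers(alpha, rhs=1.0, tol=1e-4):
        """Fast scaling heuristic of Subsection 5.5 (sketch): divide the inequality
        alpha^T x <= rhs by its smallest positive coefficient and accept the result
        if every scaled coefficient is integral within the tolerance tol."""
        positive = [v for v in alpha if v > tol]
        if not positive:
            return None
        alpha_min = min(positive)
        scaled = [v / alpha_min for v in alpha] + [rhs / alpha_min]
        if all(abs(v - round(v)) <= tol for v in scaled):
            return [int(round(v)) for v in scaled[:-1]], int(round(scaled[-1]))
        return None                     # fall back to the exact ILP-based scaling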

5.6 One other minor consideration

We mentioned in Subsection 2.2 that, when deriving LCIs, one or more items may have to be removed from the down-lifting set D if there exists an item $k \in N \setminus (C \cup N_0)$ whose weight $a_k$ exceeds the residual knapsack capacity $b - \sum_{j \in D} a_j$. An analogous problem arises in Boccia's method if there exists an item $k \in F$ whose weight $a_k$ exceeds the residual knapsack capacity $b - \sum_{j \in N_1} a_j$. When this problem arises, we iteratively move items from $N_1$ to F until no such item exists.

6 Computational Experiments

In this section, we present the results of some computational experiments, to assess the usefulness of our separation algorithms when applied to 0-1 LPs. We use the following very simple cutting plane algorithm:

1. Solve the initial LP relaxation of the problem.
2. If the solution is integer, stop.
3. For each row of the problem, call one or more separation algorithms. If any violated inequalities are found, add all of them to the LP, re-optimise and go to step 2.
4. Output the final upper bound and stop.

We use the primal simplex algorithm to solve the initial LP in step 1 and dual simplex to re-optimise after cutting planes have been added. The algorithm was implemented in Microsoft Visual Studio .NET 2003 and called on functions from version 10.0 of the ILOG CPLEX Callable Library. All experiments were performed using a PC with an Intel Pentium IV, 2.8 GHz processor and 512 MB of RAM.

6.1 Twelve instances from MIPLIB

We will present comprehensive results for some instances taken from the MIPLIB. If one looks at all of the versions of the MIPLIB, one finds a total of 16 pure 0-1 instances. We have decided to omit four of them, for reasons explained below. In all of the remaining 12 instances apart from one, the objective is to minimise cost. Table 1 displays some relevant information for these instances, including: instance name, number of variables n, number of constraints m, density (percentage of entries in the constraint matrix A that are non-zero), number of genuine knapsack constraints (i.e., constraints whose left-hand-side coefficients are not all 0, 1 or -1), cost of the integer optimum, lower bound obtained by solving the initial LP relaxation (before cuts are added), and integrality gap, i.e., the gap between the initial lower bound and the optimum, expressed as a percentage of the optimum.

We remark that the seven P instances were used by both Crowder et al. [11] and Boyd [8, 9] as benchmarks. All twelve instances were used as benchmarks by Gu et al. [16]. The four instances that we excluded were: 1152lav, lp41 and mod010, because they each have only one genuine knapsack-type constraint; and P6000, partly because it is very large, but also because its integrality gap is only 0.01%.

In a few cases, we observed knapsack constraints for which one or more left-hand-side coefficients exceeded the right-hand side. This phenomenon indicates the presence of one or more variables that can be fixed permanently at zero or one without losing the integer optimum. We performed this fixing right at the start, to avoid problems with our separation routines.

    name     n     m    %Dens.  KPs   Opt.       Init. LB   IG (%)
    P0033    33    16   17.4    11    3089.0     2520.6     18.40
    P0040    40    24   11.3    13    62027.0    61796.5    0.37
    P0201    201   134  5.0     107   7615.0     6875.0     9.72
    P0282    282   242  2.0     44    258411.0   176867.5   31.56
    P0291    291   253  1.9     14    5223.75    1705.1     67.36
    P0548    548   177  1.8     92    8691.0     315.3      96.37
    P2756    2756  756  0.4     382   3124.0     2688.7     13.93
    bm23     27    20   88.5    20    34.0       20.6       39.41
    lseu     89    28   12.4    11    1120       834.7      25.47
    mod008   319   6    64.9    6     307        290.9      5.24
    pipex    48    25   16.0    9     788.263    773.8      1.83
    sentoy   60    30   100.0   30    -7772      -7839.3    0.87

Table 1: Description of the MIPLIB instances.

6.2 Application of CIs to MIPLIB instances

Table 2 reports the results obtained when applying CIs to the MIPLIB instances. The columns headed "%gap closed" report the percentage of the integrality gap closed by the inequalities, and the columns headed "time (s)" report the total running time of the cutting plane algorithm, in seconds. This includes the time taken to solve the initial LP, the time spent calling the separation algorithms and the time taken to re-optimise after adding the cutting planes. The columns headed CJP were obtained using the greedy heuristic of Crowder et al. [11]. The column headed Exact was obtained by solving the separation problem exactly. The column headed Ex1 reports the time taken when using the exact separation algorithm in its original form. The column headed Ex2 reports the time taken by a hybrid approach, in which the exact algorithm is called only when the Crowder et al. heuristic fails.

Although the CIs are theoretically very weak cutting planes, we see that they close half of the integrality gap on average. Moreover, the Crowder et al. heuristic does a remarkably good job. All of the running times are negligible (fractions of a second).

              %gap closed          time (s)
    name      CJP      Exact       CJP      Ex1      Ex2
    P0033     63.55    63.55       0.016    0.015    0.016
    P0040     76.82    76.82       0.000    0.000    0.000
    P0201     33.78    33.78       0.016    0.015    0.016
    P0282     94.19    94.27       0.047    0.468    0.235
    P0291     95.19    95.23       0.031    0.094    0.063
    P0548     67.55    67.67       0.063    1.047    0.282
    P2756     86.26    86.26       0.187    0.546    0.203
    bm23      5.56     5.56        0.000    0.110    0.047
    lseu      39.87    39.87       0.000    0.047    0.032
    mod008    3.36     3.75        0.000    0.266    0.047
    pipex     16.68    16.68       0.000    0.015    0.015
    sentoy    16.33    16.58       0.000    0.407    0.297
    Aver.     49.93    50.00       0.030    0.253    0.104

Table 2: Results obtained when applying CIs to MIPLIB instances.

6.3 Application of ECIs to MIPLIB instances

Table 3 reports the results obtained when applying ECIs to the MIPLIB instances. The columns headed CJP were again obtained using the Crowder et al. heuristic. The columns headed GNS were obtained using the alternative greedy heuristic of Gu et al. [16]. The columns headed N.H. were obtained using the new heuristic presented in Subsection 3.3. The column headed Exact was obtained by solving the separation problem exactly. The column headed Ex1 corresponds to the O(n²b) exact algorithm described in Subsection 3.1. The column headed Ex2 corresponds to the O(nb) exact algorithm described in Subsection 3.2. The missing entries are due to the fact that the algorithm ran into time and memory difficulties for the instance mod008. Finally, the column headed Ex3 corresponds to a hybrid approach, in which the O(n²b) exact algorithm is called only when the new heuristic fails.

We found these results rather surprising, for several reasons. First, the ECIs do not usually close substantially more of the gap than the CIs, although they do for a few instances. Second, the Gu et al. heuristic performs no better than the Crowder et al. heuristic. (In fact, it performs slightly worse, but we doubt that the difference is statistically significant.) Third, the O(nb) exact algorithm turned out to be much slower in practice than the O(n²b) algorithm. Finally, we were also pleasantly surprised to find that our new separation heuristic closes more of the gap, on average, than the other two heuristics. All things considered, the new ECI heuristic and the new hybrid exact ECI algorithm look very promising.

                   %gap closed                          time (s)
    name      CJP     GNS     N.H.    Exact     CJP    GNS    N.H.   Ex1    Ex2    Ex3
    P0033     66.50   66.50   71.51   71.93     0.00   0.02   0.02   0.06   0.09   0.03
    P0040     76.82   76.82   100.00  100.00    0.00   0.02   0.00   0.00   0.19   0.00
    P0201     33.78   33.78   33.78   33.78     0.02   0.00   0.02   0.02   0.19   0.00
    P0282     94.19   93.48   92.72   94.35     0.03   0.03   0.02   0.44   0.39   0.22
    P0291     95.19   95.15   94.96   95.23     0.03   0.02   0.02   0.09   5.71   0.08
    P0548     67.55   66.48   67.37   67.68     0.06   0.08   0.06   1.05   16.9   0.30
    P2756     86.26   78.16   86.26   86.26     0.17   0.22   0.16   0.52   70.6   0.20
    bm23      15.36   15.36   15.82   15.82     0.00   0.00   0.00   0.08   0.02   0.05
    lseu      39.87   39.85   61.01   61.36     0.00   0.00   0.02   0.13   3.16   0.06
    mod008    4.58    4.42    6.31    17.59     0.00   0.02   0.20   0.44          0.34
    pipex     16.68   16.68   23.82   27.96     0.00   0.00   0.00   0.06   0.03   0.09
    sentoy    16.53   15.99   17.27   21.57     0.02   0.00   0.02   0.94   8.06   0.70
    Aver.     51.11   50.22   55.90   57.80     0.03   0.03   0.04   0.32          0.17

Table 3: Results obtained when applying ECIs to MIPLIB instances.

6.4 Application of simple LCIs to MIPLIB instances

Next, we converted the ECI algorithms into simple LCI algorithms, as explained in Subsection 3.4. Remarkably, the simple LCIs closed no more of the gap than the ECIs, for any of the twelve instances, despite the fact that they are stronger in theory. Further investigation revealed that up-lifting coefficients larger than 1 occurred very rarely when simple LCIs were generated. As expected, the running times were slightly higher for simple LCIs than for ECIs. (For brevity, we do not present the times in tables.) All things considered, the simple LCIs exhibit disappointing performance for these instances.

6.5 Application of general LCIs to MIPLIB instances

Table 4 reports the results obtained when applying general LCIs to the MIPLIB instances. Four different separation heuristics are compared. The columns headed CJP and GNS were again obtained using the Crowder et al. and Gu et al. heuristics to generate the covers. The columns headed N.H.1 were obtained using our best exact ECI separation algorithm (labelled Ex3 in Subsection 6.3) to generate the covers. The columns headed N.H.2 were obtained using a hybrid method, in which our best ECI algorithm is called only when the Gu et al. heuristic fails to yield a violated LCI. In all four cases, the lifting sequence of Gu et al. [16] was used to derive the general LCIs (see Subsections 2.2 and 3.4).

The amount of gap closed by the general LCIs is significantly better than that closed by the ECIs on some instances, and especially so on pipex. This confirms the value of down-lifting. Moreover, in our view, the running times are extremely promising. All things considered, we recommend using the strategy represented by N.H.2.

                    %gap closed                            time (s)
    name      CJP      GNS      N.H.1    N.H.2     CJP      GNS      N.H.1    N.H.2
    P0033     70.36    80.62    80.62    80.62     0.000    0.015    0.031    0.031
    P0040     100.00   100.00   100.00   100.00    0.000    0.000    0.000    0.000
    P0201     33.78    33.78    33.78    33.78     0.015    0.016    0.016    0.016
    P0282     95.97    94.84    96.15    96.21     0.046    0.031    0.625    0.437
    P0291     95.19    96.39    95.41    96.40     0.015    0.031    0.110    0.047
    P0548     67.56    66.81    67.71    67.71     0.062    0.078    0.437    0.297
    P2756     86.26    80.52    86.26    86.26     0.187    0.219    0.265    0.250
    bm23      16.11    16.00    16.11    16.11     0.000    0.000    0.047    0.047
    lseu      56.99    59.28    61.56    66.20     0.000    0.016    0.062    0.062
    mod008    17.81    17.55    18.24    18.24     0.047    0.031    0.359    0.094
    pipex     72.78    72.62    73.34    73.34     0.000    0.000    0.062    0.032
    sentoy    16.53    16.50    22.69    22.72     0.000    0.016    0.641    0.813
    Aver.     60.78    61.24    62.66    63.13     0.031    0.038    0.221    0.177

Table 4: Results obtained when applying general LCIs to MIPLIB instances.

6.6 Application of WIs to MIPLIB instances

Table 5 reports results obtained for the WIs, using three algorithms: the heuristic of Helmberg & Weismantel [20], the enhanced version of Weismantel's exact algorithm presented in Subsection 4.1, and a hybrid algorithm, in which the exact algorithm is called only when the heuristic fails. We do not report results for the O((n + a_max)b) exact algorithm presented in Subsection 4.2, because it ran into serious time and memory problems for most of the instances.

We see that, for these instances, the WIs are less effective than the LCIs in terms of percentage gap closed. Moreover, our exact separation algorithm for WIs is excessively time-consuming on some of the instances, even if we call the Helmberg-Weismantel heuristic first. On the other hand, the Helmberg-Weismantel heuristic itself does remarkably well: it is much faster than the exact algorithm, yet closes the same percentage of the integrality gap as the exact algorithm on several instances.

We also experimented with several variants of our cutting plane algorithm in which LCIs and WIs were used in combination. Using our best scheme, the LCIs and WIs were able to close 64.54% of the gap on average. Considering that our best scheme for LCIs (see Table 4) already closes 63.13% of the gap on average, this suggests that the WIs are of little use for these instances. (We do not report detailed results, for the sake of brevity.)

              %gap closed          time (s)
    name      HW       Exact       HW       Ex1        Ex2
    P0033     6.44     6.44        0.000    0.203      0.000
    P0040     29.24    29.24       0.016    0.031      0.016
    P0201     12.50    12.50       0.016    0.078      0.016
    P0282     76.46    89.15       0.032    19.859     7.422
    P0291     88.72    88.79       0.032    0.453      1.078
    P0548     46.03    73.53       0.016    100.704    97.953
    P2756     24.71    63.63       0.156    2.750      1.859
    bm23      1.49     1.81        0.000    0.078      0.000
    lseu      15.18    15.25       0.031    0.937      0.671
    mod008    2.30     2.62        0.015    0.344      0.093
    pipex     4.56     4.56        0.015    0.125      0.015
    sentoy    3.02     7.32        0.015    342.64     106.844
    Aver.     25.89    32.90       0.029    39.017     17.997

Table 5: Results obtained when applying WIs to MIPLIB instances.

6.7 Application of general knapsack facets to MIPLIB instances

Table 6 reports results obtained using general facets of the knapsack polytope. The percentage gap closed is shown, along with the running times for three variants of Boccia's separation scheme. The columns headed T1 and T2 correspond to Boccia's original scheme and our enhanced scheme, respectively. The column headed T3 was obtained with a scheme that incorporates all of our enhancements apart from the one mentioned in Subsection 5.2, i.e., a scheme that does not use LCIs.

Several things are apparent from the table. First, we see that the general knapsack facets close significantly more of the integrality gap than the LCIs and WIs. Indeed, they close most of the integrality gap on all instances, apart from P0201, bm23 and sentoy. Second, our enhanced scheme is around four times faster, on average, than the original one. Third, it definitely pays off to call LCI separation first, provided of course that one has a fast and effective separation heuristic for LCIs. Fourth, our best exact separation algorithm for general knapsack facets is (bizarrely) less time-consuming than our best exact separation algorithm for WIs.

A couple of further comments are in order. First, we were using a general-purpose branch-and-bound solver to solve the knapsack subproblems. One could probably obtain a substantial speed-up using a specialised algorithm for the 0-1 KP (Boccia himself uses a modified version of the MINKNAP algorithm of Pisinger [26]). Second, we wish to point out that, for the P instances, the percentage gaps closed by the general knapsack facets are exactly the same as those reported by Boyd [8, 9], with one exception: for P2756, Boyd obtained a slightly smaller value of 86.16%. Indeed, this instance contains a few very dense constraints that, according to Boyd, caused his algorithm to run into difficulties.

    name      %gap     T1        T2        T3
    P0033     87.42    0.594     0.109     0.265
    P0040     100.0    0.015     0.000     0.016
    P0201     33.78    1.031     0.031     0.718
    P0282     98.59    29.188    5.172     7.734
    P0291     99.43    9.454     1.875     3.579
    P0548     84.34    37.250    2.250     16.391
    P2756     86.36    38.469    1.000     24.547
    bm23      20.08    2.734     0.578     0.703
    lseu      76.09    3.297     0.687     1.625
    mod008    89.23    68.906    41.438    57.687
    pipex     86.55    4.640     0.797     1.188
    sentoy    30.97    38.844    3.829     11.141
    Aver.     74.40    19.535    4.814     10.466

Table 6: Results obtained when applying general knapsack facets to MIPLIB instances.

6.8 Multi-dimensional knapsack instances

To gain further insight into the relative performance of the different inequalities, and also to explore the limits of our approaches, we also performed experiments on the 0-1 multi-dimensional knapsack problem (MKP). We used the library of MKP instances created by Chu & Beasley [10]. In these instances, the constraint matrix A consists entirely of positive integers and is therefore 100% dense. Although the library contains 270 instances, the largest instances have not yet been solved to proven optimality (see, e.g., Vimont et al. [28]). Thus, we restrict our attention to 180 of the instances, for which the optima are known.

The 180 instances are arranged in blocks of ten. Table 7 reports the following information for each block. The first three columns show the number of variables n, the number of constraints m and the so-called tightness ratio α. The next column shows the percentage integrality gap of the initial LP relaxation (before the addition of cutting planes). The remaining five columns show the percentage of the integrality gap closed by each of five classes of inequalities. Each figure in the last six columns is averaged over the given ten instances.

Before interpreting the results, there are three points that we need to