Separation Algorithms for 0-1 Knapsack Polytopes


Konstantinos Kaparis (University of Southampton), Adam N. Letchford (Lancaster University)

February 2008, Corrected May 2008

Abstract

Valid inequalities for 0-1 knapsack polytopes often prove useful when tackling hard 0-1 Linear Programming problems. To use such inequalities effectively, one needs separation algorithms for them, i.e., routines for detecting when they are violated. We show that the separation problems for the so-called extended cover and weight inequalities can be solved exactly in $O(nb)$ time and $O((n + a_{\max})b)$ time, respectively, where $n$ is the number of items, $b$ is the knapsack capacity and $a_{\max}$ is the largest item weight. We also present fast and effective separation heuristics for the extended cover and lifted cover inequalities. Finally, we present a new exact separation algorithm for the 0-1 knapsack polytope itself, which is faster than existing methods. Extensive computational results are also given.

Keywords: integer programming, branch-and-cut, knapsack problems

1 Introduction

A 0-1 knapsack problem (KP) is a problem of the form
$$\max\{c^T x : a^T x \le b,\ x \in \{0,1\}^n\},$$
where $c \in \mathbb{Z}^n_+$, $a \in \mathbb{Z}^n_+$ and $b \in \mathbb{Z}_+$. It is well-known that the KP is NP-hard, but can be solved in $O(nb)$ time by dynamic programming. A 0-1 knapsack polytope is the convex hull of feasible solutions to a KP, i.e., a polytope (bounded polyhedron) of the form:
$$\operatorname{conv}\{x \in \{0,1\}^n : a^T x \le b\}.$$
Such polytopes have been widely studied and many classes of valid inequalities are known, such as the lifted cover inequalities (LCIs) of Balas [3] and Wolsey [31], and the weight inequalities (WIs) of Weismantel [29].
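As a concrete illustration of the $O(nb)$ dynamic programming approach mentioned above, the following sketch (function and variable names are our own, not from the paper) solves a small 0-1 KP:

```python
def solve_knapsack(c, a, b):
    """Solve max{c^T x : a^T x <= b, x binary} by dynamic programming.

    Runs in O(n*b) time: best[r] is the maximum profit achievable by a
    subset of the items processed so far with total weight at most r.
    """
    n = len(c)
    best = [0] * (b + 1)
    # choice[j][r] records whether item j was taken at capacity r
    choice = [[False] * (b + 1) for _ in range(n)]
    for j in range(n):
        for r in range(b, a[j] - 1, -1):   # descending: each item used once
            if best[r - a[j]] + c[j] > best[r]:
                best[r] = best[r - a[j]] + c[j]
                choice[j][r] = True
    # recover an optimal 0-1 solution by backtracking
    x, r = [0] * n, b
    for j in reversed(range(n)):
        if choice[j][r]:
            x[j] = 1
            r -= a[j]
    return best[b], x
```

For instance, with profits $(60, 100, 120)$, weights $(10, 20, 30)$ and capacity $50$, the routine selects the second and third items for a profit of 220.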

One motive for studying these polytopes, as pointed out by Crowder et al. [11], is that valid inequalities for them can be used as cutting planes for general 0-1 Linear Programs (0-1 LPs). The idea is to consider each individual constraint of a 0-1 LP as a 0-1 knapsack constraint (complementing variables if necessary to obtain non-negative coefficients) and derive cutting planes from the associated polytope.

The separation problem for 0-1 knapsack polytopes is this: given $n$, $a$, $b$ and a vector $x^* \in [0,1]^n$, either find an inequality that is valid for the knapsack polytope and violated by $x^*$, or prove that none exists. One can define the separation problem for a given class of inequalities (such as LCIs or WIs) analogously. Separation algorithms, whether exact or heuristic, are needed if one wishes to use valid inequalities as cutting planes (see, e.g., Grötschel et al. [15] or Nemhauser & Wolsey [24]).

Crowder et al. [11] and Gu et al. [16] proposed some simple but effective heuristics for LCI separation, and used them to obtain good results for sparse, unstructured 0-1 LPs. LCIs are now incorporated into several software packages for integer programming. WI separation, on the other hand, has received less attention, although there exists a pseudo-polynomial time exact algorithm (Weismantel [29]) and a simple heuristic (Helmberg & Weismantel [20]).

In some cases, one can get better results by using general valid inequalities for knapsack polytopes as cutting planes. The polynomial equivalence of separation and optimisation (Grötschel et al. [15]) implies that the separation problem for knapsack polytopes can be solved in pseudo-polynomial time. Algorithms for this problem have been proposed by Boyd [7, 8, 9] and Boccia [6]. (For extensions to the mixed-integer case, see Yan & Boyd [33] and Fukasawa & Goycoolea [13].) In this paper, we present and test several new exact and heuristic separation algorithms.
The structure of the paper is as follows. In Section 2, we briefly review the literature on 0-1 knapsack polytopes, separation algorithms, and their application to general 0-1 LPs. Section 3 is concerned with LCI separation. We consider first the so-called extended cover inequalities (ECIs), a special case of LCIs introduced by Balas [3]. We present a fairly simple exact algorithm that runs in $O(n^2 b)$ time, and then a more complex exact $O(nb)$ algorithm. We also present a new and effective heuristic. Finally, we show how all three of these ECI algorithms can be easily modified to yield heuristics for general LCIs. Section 4 is concerned with WI separation. We first present an enhanced version of Weismantel's exact algorithm, which runs in $O(nb\,a_{\max})$ time, where $a_{\max}$ is the largest item weight. We then present a more complex exact algorithm that runs in $O((n + a_{\max})b)$ time.

In Section 5, we present an enhanced version of Boccia's exact separation algorithm. In Section 6, we give extensive computational results, conducted on two sets of benchmark instances: 12 sparse unstructured 0-1 LPs taken from MIPLIB [1], and 180 instances of the 0-1 Multi-Dimensional Knapsack Problem, created by Chu & Beasley [10]. Finally, concluding remarks are made in Section 7.

Throughout the paper, we use the notation $N = \{1, \ldots, n\}$ and $a_{\max} = \max_{j \in N} a_j$.

2 Literature Review

2.1 Lifted cover inequalities

Consider a 0-1 knapsack polytope of the form $Q := \operatorname{conv}\{x \in \{0,1\}^n : a^T x \le b\}$. A set $C \subseteq N$ is called a cover if it satisfies $\sum_{j \in C} a_j > b$. Given any cover $C$, the cover inequality (CI)
$$\sum_{j \in C} x_j \le |C| - 1$$
is clearly valid for $Q$. The strongest CIs are obtained when the cover $C$ is minimal, in the sense that no proper subset of $C$ is also a cover. Given a minimal cover $C$, there exists at least one inequality of the form
$$\sum_{j \in C} x_j + \sum_{j \in N \setminus C} \alpha_j x_j \le |C| - 1 \qquad (1)$$
that induces a facet of $Q$. Computing the coefficients $\alpha_j$ is called lifting and the inequalities are called lifted cover inequalities (LCIs). The LCIs were discovered independently by Balas [3] and Wolsey [31].

Lifting is usually done sequentially, i.e., one variable at a time (though not always; see, e.g., Gu et al. [18]). At first sight, it seems that one must solve a 0-1 KP just to compute a single lifting coefficient. However, Zemel [34] showed that, given a fixed cover $C$ and a fixed lifting sequence, one can compute all lifting coefficients in $O(n|C|)$ time. Zemel's algorithm can be made even faster using results of Balas & Zemel [4] and Gu et al. [18], which enable one to determine most of the lifting coefficients in $O(n + |C| \log |C|)$ time.

As pointed out by Balas [3], a fast but naive way of lifting a CI is as follows. We compute $a^* := \max_{j \in C} a_j$ and define the extension of $C$ as
$$E(C) := C \cup \{j \in N \setminus C : a_j \ge a^*\}.$$
Then the extended cover inequality (ECI) $\sum_{j \in E(C)} x_j \le |C| - 1$ is valid for $Q$, though it is not guaranteed to be facet-inducing.
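To make the construction concrete, the following sketch (function names are our own) builds $E(C)$ from a given cover and evaluates the ECI at a fractional point:

```python
def extended_cover_inequality(a, cover, x_star):
    """Given weights a and a cover C, build the extension E(C) and report
    the violation of the ECI  sum_{j in E(C)} x_j <= |C| - 1  at x_star.
    A positive violation means the ECI cuts off x_star."""
    a_max_in_cover = max(a[j] for j in cover)   # a* in the paper's notation
    extension = set(cover) | {j for j in range(len(a))
                              if j not in cover and a[j] >= a_max_in_cover}
    violation = sum(x_star[j] for j in extension) - (len(cover) - 1)
    return extension, violation
```

For example, with weights $(3, 3, 4, 5)$ and cover $\{1, 2\}$ (in 1-based indexing), every item has weight at least $a^* = 3$, so the extension contains all four items.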

Nemhauser & Wolsey [24] noted that one can derive more general LCIs of the form:
$$\sum_{j \in C \setminus D} x_j + \sum_{j \in N \setminus C} \alpha_j x_j + \sum_{j \in D} \beta_j x_j \le |C \setminus D| - 1 + \sum_{j \in D} \beta_j,$$
where $C$ is a cover and $D \subset C$. (This follows from a more general result of Wolsey [32].) We will follow Gu et al. [16] in referring to the computation of the $\alpha$ and $\beta$ coefficients as up-lifting and down-lifting, respectively, and in referring to the inequalities (1) as simple LCIs. The general LCIs include, not only the simple LCIs, but also the so-called (1, k)-configuration inequalities of Padberg [25] as special cases.

Gu et al. [18] remark that Zemel's algorithm can be adapted to compute all down-lifting coefficients in $O(|C|\,n^3)$ time. Although this is polynomial, it is time-consuming. To save time, one can instead compute approximate down-lifting coefficients by solving a sequence of $|D|$ continuous 0-1 KPs. This idea, of solving LPs to compute approximate lifting coefficients, is also due to Wolsey [32]. We have used it ourselves in the past (Kaparis & Letchford [22]), and it works well in practice.

2.2 Separation algorithms for LCIs

Several exact and heuristic separation techniques have been suggested for the LCIs and their special cases. Crowder et al. [11] noted that the CIs can be written in the form $\sum_{j \in C} (1 - x_j) \ge 1$. This enabled them to formulate the separation problem for CIs itself as a 0-1 KP. Let $x^* \in [0,1]^n$ be the fractional solution to be separated. Define for each $j \in N$ the 0-1 variable $y_j$, taking the value 1 if and only if $j$ is in the cover, and solve the following 0-1 KP:
$$\min \sum_{j \in N} (1 - x^*_j)\, y_j \qquad (2)$$
$$\text{s.t.} \quad \sum_{j \in N} a_j y_j \ge b + 1 \qquad (3)$$
$$y_j \in \{0,1\} \quad (j \in N). \qquad (4)$$
To save time, Crowder et al. solved this 0-1 KP heuristically, simply inserting items into $C$ in non-decreasing order of $(1 - x^*_j)/a_j$. They lifted any violated CI found, to yield a violated simple LCI. Later, it was proven that the separation problems for CIs, LCIs and simple LCIs are all NP-hard (Ferreira et al. [12], Gu et al. [17], Klabjan et al. [23]).
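The greedy rule of Crowder et al. can be sketched as follows (a minimal version, with our own function names; the exact method would instead solve (2)-(4) to optimality):

```python
def greedy_cover(a, b, x_star):
    """Heuristic for the CI-separation knapsack (2)-(4): insert items into
    the cover C in non-decreasing order of (1 - x*_j)/a_j until the total
    weight reaches b + 1.  Returns (cover, violation) or None."""
    order = sorted(range(len(a)), key=lambda j: (1 - x_star[j]) / a[j])
    cover, weight = [], 0
    for j in order:
        cover.append(j)
        weight += a[j]
        if weight >= b + 1:
            break
    if weight < b + 1:
        return None          # no cover exists at all
    # violation of the CI  sum_{j in C} x_j <= |C| - 1  at x_star
    violation = sum(x_star[j] for j in cover) - (len(cover) - 1)
    return cover, violation
```

A violation greater than zero means the CI (and hence any LCI obtained by lifting it) cuts off $x^*$.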
The same seems likely for ECIs.

Gu et al. [16] suggested an alternative greedy heuristic for building the cover: simply insert items into $C$ in non-increasing order of $x^*_j$. This heuristic tends to put items of smaller weight into $C$. As a result, more items in $N \setminus C$ tend to be lifted, and the resulting LCI tends to be violated by more. Gu et al. also recommend using general LCIs rather than simple LCIs. They

obtained their best computational results by setting $D = \{j \in C : x^*_j = 1\}$, and using the following lifting sequence: first up-lift the variables with $x^*_j > 0$, then down-lift the variables in $D$, then up-lift the variables with $x^*_j = 0$.

In Kaparis & Letchford [22], we proposed some simple refinements to the lifting scheme of Gu et al. [16]. The most important of these is the following. If there exists a $k \in N \setminus C$ such that $x^*_k > 0$ and $a_k > b - \sum_{j \in D} a_j$, then the up-lifting coefficient of $x_k$ will be undefined. To resolve this problem, we remove one or more items from $D$ if necessary.

Finally, we mention the paper by Gabrel & Minoux [14], which gives an exact separation algorithm for ECIs, based on the solution of at least $n$ 0-1 KPs. Since our own algorithms (Subsections 3.1 and 3.2) are both simpler and faster, we do not go into details.

2.3 Weight inequalities

The weight inequalities (WIs for short) were introduced by Weismantel [29]. Given any set of items $W \subset N$ satisfying $\sum_{j \in W} a_j < b$, we define the residual capacity $r = b - \sum_{j \in W} a_j$. The WI then takes the following form:
$$\sum_{j \in W} a_j x_j + \sum_{j \in N \setminus W} \max\{0, a_j - r\}\, x_j \le \sum_{j \in W} a_j = b - r. \qquad (5)$$
In general, the WIs do not induce facets of $Q$, although they do under some restrictive conditions [29]. Atamtürk [2] presented a procedure, based on superadditive functions, for strengthening the WIs under certain conditions. Weismantel [29] also defined a related class of inequalities called extended weight inequalities. For the sake of brevity, we do not explain these developments in detail.

To our knowledge, only two papers have considered the separation problem for WIs. Weismantel himself noted that the problem can be solved exactly in pseudo-polynomial time. The idea is to write the WI (5) in the following form:
$$\sum_{j \in W : a_j \le r} a_j x_j + \sum_{j \in W : a_j > r} r\, x_j \le b - r - \sum_{j \in N : a_j > r} (a_j - r)\, x_j. \qquad (6)$$
Now the right-hand side of (6) depends on $r$, but not on $W$.
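Before turning to separation, the construction of a single WI (5) can be sketched as follows (a minimal version with our own function names; it builds the inequality for a given $W$ and measures its violation):

```python
def weight_inequality(a, b, W, x_star):
    """Build the weight inequality (5) for a set W with sum_{j in W} a_j < b
    and return its violation at x_star (positive means violated).

    LHS coefficients: a_j for j in W, max{0, a_j - r} otherwise;
    RHS: b - r, where r is the residual capacity."""
    r = b - sum(a[j] for j in W)
    assert r > 0, "W must satisfy sum_{j in W} a_j < b"
    lhs = sum(a[j] * x_star[j] for j in W)
    lhs += sum(max(0, a[j] - r) * x_star[j]
               for j in range(len(a)) if j not in W)
    return lhs - (b - r)
```

For instance, with weights $(4, 4, 3)$, capacity $10$ and $W = \{1, 2\}$ (1-based), the residual capacity is $r = 2$ and the WI reads $4x_1 + 4x_2 + x_3 \le 8$.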
Thus, for fixed $r$, the separation problem amounts to finding the set $W$ that maximises the left-hand side of (6). This can be formulated as an equality-constrained 0-1 KP, which can be solved in pseudo-polynomial time. Solving one equality-constrained 0-1 KP for each possible value of $r$ yields the desired pseudo-polynomial separation algorithm for WIs.

The other paper dealing with WI separation is Helmberg & Weismantel [20]. They present a greedy separation heuristic, which simply inserts items

into $W$ in non-increasing order of $x^*$ value, until no more items can be inserted. To our knowledge, it has not been shown that WI separation is NP-hard.

2.4 Separation algorithms for 0-1 knapsack polytopes

We now turn our attention to the separation problem for $Q$ itself. Four main approaches have been proposed:

Boyd [7] formulated the problem as an LP with $O(nb)$ variables and constraints. The formulation is based on the classical dynamic programming representation of 0-1 KP solutions as paths in a network with $O(nb)$ nodes and arcs.

Boyd [9] proposed a subgradient method for finding so-called Fenchel cuts, which are valid inequalities that maximise the distance (according to some desired norm) from the point to be separated. Fenchel cuts in general are not facet-inducing.

Boyd [8] also proposed a primal simplex approach to generate Fenchel cuts, which seemed to perform better than the subgradient method.

Using the idea of polarity (e.g., Nemhauser & Wolsey [24]), one can formulate the separation problem as an LP with an exponential number of constraints. Boccia [6] recently proposed solving this LP directly via a cutting plane method.

We give some details of Boccia's method, since we will propose ways to improve it in Section 5. We seek a valid inequality $\alpha^T x \le \alpha_0$ for $Q$ which is violated by a given vector $x^* \in [0,1]^n$. For simplicity, we normalise the inequality so that $\alpha_0 = 1$. The separation LP is then as follows:
$$\max \sum_{j \in N} x^*_j \alpha_j$$
$$\text{s.t.} \quad \sum_{j \in S} \alpha_j \le 1 \quad \Bigl(\forall S \subseteq N : \sum_{j \in S} a_j \le b\Bigr) \qquad (7)$$
$$\alpha_j \ge 0 \quad (j \in N).$$
The constraints (7) ensure that the inequality $\alpha^T x \le 1$ is satisfied by every feasible solution to the given 0-1 KP instance. If the solution $\alpha^*$ has a profit larger than 1, then the inequality $(\alpha^*)^T x \le 1$ is violated by $x^*$. To check if a constraint (7) is violated by a given $\alpha^* \in [0,1]^n$, it suffices to solve the following 0-1 KP:
$$\max \sum_{j \in N} \alpha^*_j y_j$$
$$\text{s.t.} \quad \sum_{j \in N} a_j y_j \le b$$
$$y_j \in \{0,1\} \quad (j \in N).$$

If the solution $y^*$ to this 0-1 KP has a profit larger than 1, and we set $S = \{j \in N : y^*_j = 1\}$, then the inequality $\sum_{j \in S} \alpha_j \le 1$ is violated by $\alpha^*$. This observation enables one to solve the separation LP itself via a cutting plane method, in which a 0-1 KP is solved in each iteration.

As Boccia points out, this naive approach suffers from very long computing times, and significant problems with rounding errors. To address these problems, Boccia proposes a three-stage procedure. In the first stage, variables that satisfy $x^*_j \in \{0,1\}$ are temporarily eliminated from the problem, so that the separation LP can be solved in the subspace of the fractional variables. If a violated inequality is found, one proceeds to the second stage, in which the inequality is scaled and rounded to integers. Boccia does this by solving a small Integer Linear Program. In the third and final stage, the violated inequality is lifted to make it valid and facet-inducing for $Q$. To do this, Boccia solves a sequence of 0-1 KPs. As in Gu et al. [16], Boccia recommends down-lifting the variables with $x^*_j = 1$ before up-lifting the variables with $x^*_j = 0$.

2.5 Applications to general 0-1 LPs

As mentioned above, inequalities for 0-1 knapsack polytopes can be used as cutting planes for general 0-1 LPs. Given a 0-1 LP in the form $\max\{c^T x : Ax \le b,\ x \in \{0,1\}^n\}$, where $A \in \mathbb{Z}^{m \times n}$ and $b \in \mathbb{Z}^m$, we define the following two polytopes:

$P := \{x \in [0,1]^n : Ax \le b\}$ (the feasible region of the LP relaxation),

$P_I := \operatorname{conv}\{x \in \{0,1\}^n : Ax \le b\}$ (the integral hull).

Furthermore, let us denote by $Q_i$ the 0-1 knapsack polytope associated with the $i$th constraint; i.e.,
$$Q_i := \operatorname{conv}\Bigl\{x \in \{0,1\}^n : \sum_{j \in N} a_{ij} x_j \le b_i\Bigr\}.$$
(Negative $a_{ij}$ coefficients are easily handled by complementing variables.) Then, we have by definition:
$$P_I \subseteq \bigcap_{i=1}^m Q_i \subseteq P.$$
Optimising over $\bigcap_{i=1}^m Q_i$ with a method such as Lagrangian decomposition (Guignard & Kim [19]) is extremely time-consuming. Instead, Crowder et al.
[11] proposed using valid inequalities for the $Q_i$ as cutting planes, to strengthen an initial formulation (an approach now known as cut-and-branch). This works best when the constraint matrix $A$ is sparse and has

no special structure. Using their separation heuristic for simple LCIs, they managed to solve ten 0-1 LPs that were previously considered intractable. This approach was developed further by Hoffman & Padberg [21] and Gu et al. [16]. Boyd [8, 9] performed some experiments on the same 10 instances as Crowder et al. [11], using his routines for generating Fenchel cuts. The resulting bounds were very strong, but running times were rather large.

Cover inequalities and their variants have also been applied, with more or less success, to various 0-1 LPs with special structure. See, e.g., Bektas & Oguz [5], Gabrel & Minoux [14] and our own paper [22] for applications to multi-dimensional knapsack problems, and Ferreira et al. [12] for an application to multiple knapsack problems. We are not aware of any application of WIs to general 0-1 LPs, but an application to the quadratic knapsack problem can be found in Helmberg & Weismantel [20].

Finally, we remark that one need not restrict oneself to deriving knapsack facets from the rows of the given system $Ax \le b$. Indeed, any surrogate constraint obtained as a non-negative linear combination of the original constraints can be used as a source row for generating knapsack facets. This is the approach taken by Boccia himself [6] when tackling certain facility location problems, and by Uchoa et al. [27] when tackling the capacitated minimum spanning tree problem.

3 New Separation Algorithms for ECIs and LCIs

In this section, we present two exact algorithms and a heuristic for ECI separation, and then show how all three algorithms can be modified to yield heuristics for the separation of the (stronger and more general) LCIs.

3.1 A simple $O(n^2 b)$ exact algorithm for ECIs

We begin by presenting a fairly simple exact algorithm for ECI separation, which involves the solution of a sequence of 0-1 KPs. Recall from Subsection 2.2 that Crowder et al. [11] formulated the separation problem for CIs as the 0-1 KP (2)-(4).
We will need the following lemma:

Lemma 1 Let $C^* \subseteq N$ be the cover obtained by solving the 0-1 KP (2)-(4), and let $a^* := \max_{j \in C^*} a_j$. If the ECI with $C = C^*$ is not violated, then a violated ECI must satisfy $C \subseteq \{j \in N : a_j \le a^*\}$.

Proof. The violation of an ECI with cover $C$ is equal to the sum of two terms:

1. $\sum_{j \in C} x^*_j - |C| + 1$ (the violation of the CI with cover $C$);

2. $\sum_{j \in N \setminus C : a_j \ge a^*} x^*_j$.

By definition, $C^*$ is the cover that maximises the first term. So, if we replace $C^*$ with a different cover $C$, the first term cannot increase. Moreover, if $\max_{j \in C} a_j > a^*$, then the second term cannot increase either. $\square$

If one could make the inequality in Lemma 1 strict, an iterative algorithm for ECI separation would immediately follow: solve the 0-1 KP (2)-(4), eliminate the items with $a_j \ge a^*$, and repeat until the total weight of the remaining items no longer exceeds the knapsack capacity. However, we were unable to determine whether the lemma is valid with strict inequality. Fortunately, we have the following refinement of Lemma 1:

Proposition 1 Let $C^*$ and $a^*$ be defined as in Lemma 1, let $S = \{j \in C^* : a_j = a^*\}$ and let $k^*$ be an item in $S$ of minimum $x^*$ value. If the ECI with $C = C^*$ is not violated, then a violated ECI must satisfy $C \subseteq \{j \in N \setminus \{k^*\} : a_j \le a^*\}$.

Proof. Suppose that the ECI corresponding to $C^*$ is not violated, but there exists a cover $C$ that yields a violated ECI. By Lemma 1, we can assume $C \subseteq \{j \in N : a_j \le a^*\}$. Now recall again the two components of the violation mentioned in the proof of Lemma 1. If $C$ contained at least as many items of weight $a^*$ as $C^*$ did, then the second term would be no larger for $C$ than it was for $C^*$, and the ECI would not be violated. Thus, $C$ must contain fewer items of weight $a^*$ than $C^*$ did. Since $k^*$ is the item with smallest $x^*$ value in $S$, we can assume that $k^*$ does not belong to $C$. $\square$

The following separation algorithm follows immediately from Proposition 1:

0. Input: a knapsack constraint $\sum_{j \in N} a_j x_j \le b$ and a vector $x^* \in [0,1]^n$.

1. Set $B := \{j \in N : x^*_j = 0\}$.

2. Solve the 0-1 KP:
$$\min \sum_{j \in N \setminus B} (1 - x^*_j)\, y_j$$
$$\text{s.t.} \quad \sum_{j \in N \setminus B} a_j y_j \ge b + 1$$
$$y_j \in \{0,1\} \quad (j \in N \setminus B).$$

3. Let $C^*$ be the resulting cover. If the corresponding ECI is violated, output it.

4. Compute $a^*$ and $k^*$ as above. Insert into $B$ the item $k^*$ and all items in $N \setminus B$ with weight larger than $a^*$.

5. If $\sum_{j \in N \setminus B} a_j \le b$, stop. Otherwise return to 2.
The running time of this algorithm is clearly dominated by the solution of at most $n$ 0-1 KPs. We will see in the next subsection that, even though these 0-1 KPs are of greater-than type, they can be solved in $O(nb)$ time

just like the standard 0-1 KP. So, the algorithm has the rather large running time of $O(n^2 b)$. However, in practice only a small number of 0-1 KPs, of rapidly decreasing size, typically need to be solved. They can often be solved quickly by branch-and-bound. Moreover, the algorithm often returns several violated inequalities rather than just one.

3.2 An $O(nb)$ exact algorithm for ECIs

In this section, we present a faster exact algorithm for ECI separation. First, we show explicitly how to solve the separation problem for the weaker CIs (i.e., the 0-1 KP (2)-(4)) in $O(nb)$ time by dynamic programming. Then, we will show how to modify the dynamic program in order to solve the separation problem for the ECIs themselves in $O(nb)$ time.

For $k = 1, \ldots, n$ and $r = 0, \ldots, b$, define:
$$f(k, r) := \min\Bigl\{\sum_{j=1}^k (1 - x^*_j)\, y_j : \sum_{j=1}^k a_j y_j = r,\ y \in \{0,1\}^k\Bigr\}.$$
Also, for $k = 1, \ldots, n$, define:
$$g(k) := \min\Bigl\{\sum_{j=1}^k (1 - x^*_j)\, y_j : \sum_{j=1}^k a_j y_j \ge b + 1,\ y \in \{0,1\}^k\Bigr\}.$$
The following algorithm computes all of the $f(k, r)$ and $g(k)$ values, and thereby solves the CI separation problem:

Set $f(k, r) := \infty$ for $k = 1, \ldots, n$ and $r = 0, \ldots, b$.
Set $f(0, 0) := 0$.
Set $g(k) := \infty$ for $k = 1, \ldots, n$.
For $k = 1, \ldots, n$:
    For $r = 0, \ldots, b$: if $f(k-1, r) < f(k, r)$, set $f(k, r) := f(k-1, r)$.
    For $r = 0, \ldots, b - a_k$: if $f(k-1, r) + (1 - x^*_k) < f(k, r + a_k)$, set $f(k, r + a_k) := f(k-1, r) + (1 - x^*_k)$.
    For $r = b - a_k + 1, \ldots, b$: if $f(k-1, r) + (1 - x^*_k) < g(k)$, set $g(k) := f(k-1, r) + (1 - x^*_k)$.
    If $g(k) < 1$, output the violated CI.

It is easy to see that this algorithm runs in $O(nb)$ time.

Suppose now that the items in $N$ have been sorted in non-decreasing order of $a_j$. Under this assumption, we have a violated ECI if and only

if $g(k) - \sum_{j=k+1}^n x^*_j < 1$ for some $k \in N$. Thus, to convert the exact CI separation algorithm into an exact ECI separation algorithm, it suffices to make the following two minor changes:

(i) Sort the items in non-decreasing order of $a_j$ before running the algorithm.

(ii) Change the last if statement to: if $g(k) - \sum_{j=k+1}^n x^*_j < 1$, output the violated ECI.

Note that the initial sorting can be performed in $O(n + a_{\max})$ time, using bucket sort with $a_{\max}$ buckets. This leads to the following result:

Theorem 1 The separation problem for ECIs can be solved exactly in $O(nb)$ time.

Proof. The time taken to sort the items, $O(n + a_{\max})$, is dominated by the time taken to solve the dynamic program, $O(nb)$. $\square$

We remark that this algorithm may in fact return several violated ECIs in a single call. Moreover, it can be made faster in practice by excluding variables with $x^*_j = 0$ from the DP, although, each time a violated ECI is found, one should check whether any such variables can be inserted into the extension $E(C)$.

3.3 A fast heuristic for ECIs

The ECI separation algorithm presented in Subsection 3.1 can be easily modified to form a fast heuristic for ECI separation: one simply solves the subproblems in step 2 heuristically instead of exactly. Following Crowder et al. [11], we simply insert items into $C$ in non-decreasing order of $(1 - x^*_j)/a_j$. This leads to the following algorithm:

1. Sort the items in $N$ in non-decreasing order of $(1 - x^*_j)/a_j$, and store them in a list $L$. Initialise the cover $C$ as the empty set and initialise $\bar{a} = b$.

2. Remove an item from the head of the sorted list $L$. If its weight is larger than $\bar{a}$, ignore it, otherwise insert it into $C$. If $C$ is now a cover, go to step 4.

3. If $L$ is empty, stop. Otherwise, return to step 2.

4. If the ECI corresponding to $C$ is violated by $x^*$, output it.

5. Find $k^*$, the heaviest item in $C$, decrease $\bar{a}$ accordingly, and delete $k^*$ from $C$. Return to step 2.

This heuristic can easily be implemented so that it runs in $O(n^2)$ time.
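The $O(nb)$ dynamic program of Subsection 3.2 can be sketched as follows. This is a simplified variant (our own names and interface): it uses a single rolling array for $f$, keeps $g(k)$ only for the current $k$, and returns the smallest "slack" $g(k) - \sum_{j > k} x^*_j$ found, rather than the violated inequalities themselves.

```python
INF = float("inf")

def separate_eci_dp(a, b, x_star):
    """Exact ECI separation sketch: sort items by weight; f[r] is the
    minimum cost (in 1 - x*_j units) of a subset of the items seen so far
    with weight exactly r; g is the cheapest cover whose last (heaviest-
    indexed) item is the current one.  A violated ECI exists iff some
    slack g(k) - sum_{j>k} x*_j is below 1."""
    n = len(a)
    idx = sorted(range(n), key=lambda j: a[j])   # non-decreasing weights
    aw = [a[j] for j in idx]
    cost = [1.0 - x_star[j] for j in idx]
    suffix = [0.0] * (n + 1)                     # suffix[k] = sum of x* over positions k..n-1
    for k in reversed(range(n)):
        suffix[k] = suffix[k + 1] + x_star[idx[k]]
    f = [INF] * (b + 1)
    f[0] = 0.0
    best = None
    for k in range(n):
        g = INF
        for r in range(b, -1, -1):               # descending: item k used once
            if f[r] < INF:
                if r + aw[k] <= b:
                    f[r + aw[k]] = min(f[r + aw[k]], f[r] + cost[k])
                else:                            # weight exceeds b: a cover
                    g = min(g, f[r] + cost[k])
        if g < INF:
            slack = g - suffix[k + 1]
            if best is None or slack < best:
                best = slack
    return best if best is not None and best < 1 else None
```

A returned value below 1 certifies a violated ECI; the corresponding cover could be recovered by storing predecessor information in the DP.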
Moreover, $O(n^2)$ seems to be necessary, since outputting a single violated ECI itself takes $O(n)$ time. However, one can consider a variant of the heuristic in which, in step 4, one stops as soon as one violated ECI has

been output. With some care, this variant can be implemented to run in only $O(n \log n)$ time. We skip the details, partly for the sake of brevity, but also because the $O(n^2)$ version already works very well in practice (see Subsection 6.3).

3.4 Heuristics for LCIs

Recall that ECIs are a special case of, and dominated by, the LCIs. Any of the algorithms for ECI separation presented in the previous three subsections can be easily converted into a heuristic for LCI separation. Namely, each time we generate an ECI, we convert it into an LCI that is violated by at least as much. To do this, we use the same lifting order as Gu et al. [16]. The resulting algorithm is as follows:

0. Input: a knapsack constraint $\sum_{j \in N} a_j x_j \le b$, a vector $x^* \in [0,1]^n$ and a cover $C$ found by any of the ECI separation algorithms.

1. Let $N_0 = \{j \in N : x^*_j = 0\}$, $N_1 = \{j \in N : x^*_j = 1\}$ and $F = N \setminus (N_0 \cup N_1)$.

2. Set $D := C \cap N_1$.

3. If there exists a $k \in N \setminus (C \cup N_0)$ such that $a_k > b - \sum_{j \in D} a_j$, then remove one or more items from $D$.

4. Up-lift variables in $F \setminus C$. If the resulting inequality is not violated, stop.

5. Down-lift variables in $D$ and up-lift variables in $N_0$ to produce the final LCI.

Note that the variables in $F \setminus C$ have to be up-lifted before one checks the LCI for violation. This, and the need to perform down-lifting, makes LCI separation more time-consuming than ECI separation. Thus, the bounds on running times given for ECIs no longer hold.

4 Exact Separation Algorithms for WIs

As we mentioned in Subsection 2.3, the complexity of the separation problem for WIs is unknown. We conjecture that it is weakly NP-hard. In this section, we present an enhanced version of Weismantel's pseudo-polynomial exact algorithm, and then present a faster exact algorithm.

4.1 Enhancements to Weismantel's algorithm

We begin this subsection by analysing the running time of Weismantel's algorithm.
Recall that it involves the solution of one equality-constrained 0-1 KP for each possible value of the residual capacity $r = b - \sum_{j \in W} a_j$. We will need the following lemma:

Lemma 2 Suppose that $x^* \in [0,1]^n$ and $a^T x^* \le b$. A necessary condition for a WI to be violated is that $0 < r < a_{\max}$.

Proof. When $r = 0$, the WI reduces to the original knapsack constraint $a^T x \le b$. When $r \ge a_{\max}$, the WI is implied by the upper bounds $x_j \le 1$ for all $j \in W$. $\square$

The assumption that $x^* \in [0,1]^n$ and $a^T x^* \le b$ is of course reasonable. So, only $a_{\max} - 1$ equality-constrained 0-1 KPs need to be solved. If we solve an equality-constrained 0-1 KP with a standard dynamic programming algorithm, it takes $O(nb)$ time. Thus:

Proposition 2 Weismantel's algorithm can be implemented to run in $O(nb\,a_{\max})$ time.

This running time is excessive, in both theory and practice. We have found three simple ways to speed up the algorithm, which make a dramatic difference in practice:

1. It can be shown that, if $x^*_k = 0$, then one can assume that $k \notin W$ without losing any violated WIs. Similarly, if $x^*_k = 1$, then one can assume that $k \in W$. This reduces the number of variables in the equality-constrained 0-1 KPs.

2. For a given value of $r$, it may be that no $W \subseteq N$ exists satisfying $\sum_{j \in W} a_j = b - r$. As a result, the equality-constrained 0-1 KP will be infeasible. To avoid wasting time trying to solve such infeasible problems, it helps to compute the possible values that $\sum_{j \in W} a_j$ can take before moving on to solve the equality-constrained 0-1 KPs. This can easily be done in $O(nb)$ time and $O(b)$ space, by dynamic programming.

3. If one solves the remaining equality-constrained 0-1 KPs via branch-and-bound, one can abort the branch-and-bound run as soon as the upper bound is less than or equal to the right-hand side of (6).

For the sake of completeness, we include formal proofs of the two results mentioned above.

Proposition 3 Assume (as before) that $x^* \in [0,1]^n$ and $a^T x^* \le b$. If $x^*_k = 0$, then one can assume that $k \notin W$ without losing any violated WIs.

Proof. Suppose that the WI (5) is violated by $x^*$ and that there exists a $k \in W$ such that $x^*_k = 0$. If we delete $k$ from $W$, we obtain the following modified WI:
$$\sum_{j \in W \setminus \{k\}} a_j x_j + \sum_{j \in N \setminus W} \max\{0, a_j - r - a_k\}\, x_j \le b - r - a_k. \qquad (8)$$
Now, if we sum together:

the original knapsack constraint $a^T x \le b$, multiplied by $a_k/(r + a_k)$;

the modified WI (8), multiplied by $r/(r + a_k)$; and

a suitable non-negative linear combination of non-negativity inequalities,

we obtain the valid inequality:
$$\sum_{j \in W \setminus \{k\}} a_j x_j + \sum_{j \in N \setminus W} \max\{0, a_j - r\}\, x_j + \frac{a_k^2}{r + a_k}\, x_k \le b - r.$$
This inequality is identical to the original WI, apart from the fact that the left-hand-side coefficient of $x_k$ is smaller. Thus, since $x^*_k = 0$, it must be violated by $x^*$ by the same amount as the original WI. Since $x^*$ satisfies the original knapsack constraint and the non-negativity inequalities, the modified WI (8) must be violated by at least as much as the original. $\square$

Proposition 4 Assume (as before) that $x^* \in [0,1]^n$ and $a^T x^* \le b$. If $x^*_k = 1$, then one can assume that $k \in W$ without losing any violated WIs.

Proof. Suppose that the WI (5) is violated by $x^*$ and that there exists a $k \notin W$ such that $x^*_k = 1$. If $a_k \le r$, then inserting $k$ into $W$ yields a WI that is violated by at least as much as the original. So suppose that $a_k > r$. If we take the inequality $a^T x \le b$ and subtract from it $r$ copies of the equation $x_k = 1$, we obtain:
$$\sum_{j \in N \setminus \{k\}} a_j x_j + (a_k - r)\, x_k \le b - r.$$
So the original WI cannot be violated, a contradiction. $\square$

4.2 An $O((n + a_{\max})b)$ exact algorithm

Improving on the running time bound of $O(nb\,a_{\max})$ turns out to be a non-trivial exercise. We found it helpful to treat the two terms on the left-hand side of (6), and the right-hand side of (6), separately. Thus, we make the following three definitions.

Definition 1 For given integers $r, s$, with $0 < r < a_{\max}$ and $0 \le s \le b - r$, let $f(r, s)$ denote
$$\max\Bigl\{\sum_{j \in N : a_j \le r} a_j x^*_j y_j : \sum_{j \in N : a_j \le r} a_j y_j = s,\ y_j \in \{0,1\}\ (j \in N : a_j \le r)\Bigr\}.$$

Definition 2 For given integers $r, s$, with $0 < r < a_{\max}$ and $0 \le s \le b - r$, let $g(r, s)$ denote
$$\max\Bigl\{\sum_{j \in N : a_j > r} x^*_j y_j : \sum_{j \in N : a_j > r} a_j y_j = s,\ y_j \in \{0,1\}\ (j \in N : a_j > r)\Bigr\}.$$

Definition 3 For a given integer $r$, with $0 < r < a_{\max}$, let $h(r)$ denote the right-hand side of (6) computed with respect to $x^*$. That is:
$$h(r) = b - r - \sum_{j \in N : a_j > r} (a_j - r)\, x^*_j.$$

Armed with these definitions, we propose the following exact separation algorithm for WIs:

0. Input: a knapsack constraint $\sum_{j \in N} a_j x_j \le b$ and a vector $x^* \in [0,1]^n$ to be separated.

1. Sort the items in $N$ in non-decreasing order of weight.

2. Compute $f(r, s)$ for $r = 1, \ldots, a_{\max} - 1$ and for $s = 0, \ldots, b - r$.

3. Compute $g(r, s)$ for $r = 1, \ldots, a_{\max} - 1$ and for $s = 0, \ldots, b - r$.

4. Compute $h(r)$ for $r = 1, \ldots, a_{\max} - 1$.

5. For $r = 1, \ldots, a_{\max} - 1$, compute the maximum possible violation of a WI with $\sum_{j \in W} a_j = b - r$. That is, compute:
$$\max_{0 \le s \le b - r}\ f(r, s) + r\, g(r, b - r - s) - h(r).$$
If this quantity is positive, output the violated WI.

We have the following theorem.

Theorem 2 The separation problem for WIs can be solved exactly in $O((n + a_{\max})b)$ time.

Proof. As noted in Subsection 3.2, Step 1 can be performed in $O(n + a_{\max})$ time. Step 2 can be performed in $O(nb)$ time: just use the standard dynamic programming algorithm for the 0-1 KP, but introduce new items in non-decreasing order of weight. Similarly, Step 3 can be performed in $O(nb)$ time: use the standard dynamic programming algorithm for the 0-1 KP, but introduce new items in non-increasing order of weight. Step 4 can be performed in $O(n\,a_{\max})$ time. Finally, step 5 takes $O(b\,a_{\max})$ time. The terms $O(n + a_{\max})$ and $O(n\,a_{\max})$ are dominated by $O(nb)$. $\square$

5 An Enhanced Version of Boccia's Algorithm

We now turn our attention to the separation problem for the 0-1 knapsack polytope itself. We have found several effective ways to enhance Boccia's algorithm. They are explained in the following subsections.

5.1 Eliminating rows from consideration

If $x^*$ can be quickly proven to lie in $Q$, there is no need to search for violated inequalities that induce facets of $Q$. The following propositions, which are easy to prove, give easily checkable sufficient conditions for this to be the case.

Proposition 5 Assume (as before) that $x^* \in [0,1]^n$ and $a^T x^* \le b$. If $\sum_{j \in N : x^*_j > 0} a_j \le b$, then $x^* \in Q$.

Proposition 6 Assume (as before) that $x^* \in [0,1]^n$ and $a^T x^* \le b$. If $a \in \{0,1\}^n$, then $x^* \in Q$.

5.2 Looking for violated LCIs first

Our best algorithm for LCI separation (see Subsection 6.5) is very fast and effective. Moreover, the LCIs are typically less dense and better behaved numerically than general facet-inducing inequalities. Thus, we recommend calling the LCI separation algorithm first, and resorting to exact knapsack separation only when one does not find a violated LCI.

5.3 Solving knapsack subproblems heuristically

Our goal when solving the knapsack subproblem is to find a violated constraint of the form (7). Note, however, that we need not necessarily find the most-violated constraint. Thus, one can solve the knapsack subproblem heuristically, and only call an exact routine when the heuristic fails. We run a simple greedy heuristic, based on profit-to-weight ratio, and then improve the solution obtained using a simple form of local search. A move in our local search algorithm consists in deleting one item from the knapsack and inserting one or more other items. We abort the local search if we find an inequality (7) violated by more than 0.25; otherwise we continue until a local optimum is reached.

5.4 Re-using old constraints

Initialising each separation LP with the trivial bounds ($0 \le \alpha_j \le 1$ for all $j \in N$) is not a good idea. One can achieve more rapid convergence, and improve running times substantially, by re-cycling constraints (7) that have already been generated.
We found the following idea effective: each time the separation LP has been solved to optimality, store in memory all of the constraints (7) that were generated. In the next call to the knapsack separator, use these constraints to initialise the separation LP. To prevent the separation LPs from growing too large, solve the initialised separation LP and throw away the

constraints that are non-binding (i.e., have zero dual price). Then commence solving knapsack subproblems as usual until the separation LP is solved to optimality. Although this seems like a simple idea, there is a significant complication: the separation LP is solved in the space of the fractional variables, but the set $F$ changes from one iteration to the next. To deal with this, we proceed as follows. Any item that was previously in $F$, but is now integer, is deleted from all constraints (7) in which it appears. Next, we check each remaining constraint (7) and delete further items (taken from $F$), if necessary, to obtain valid sets $S$. Finally, we check whether the resulting constraints (7) can be strengthened by inserting into $S$ some items that were previously integral but are now in $F$. In practice, the set $F$ does not change radically from one separation call to the next.

5.5 A fast scaling heuristic

Recall that, once a violated inequality is found, it must be scaled to integers to avoid problems with rounding errors. Boccia formulates this scaling problem as a (small) integer linear program in its own right. We have found that the following simple heuristic works in over 90% of cases: let $\alpha_{\min}$ be the smallest fractional $\alpha$ value. Divide the entire inequality by $\alpha_{\min}$, and check whether the resulting inequality is integral (within some desired tolerance). Note that this heuristic works if and only if there is at least one variable in the desired facet-inducing inequality that has a left-hand-side coefficient equal to 1, which is of course the case for all LCIs.

5.6 One other minor consideration

We mentioned in Subsection 2.2 that, when deriving LCIs, one or more items may have to be removed from the down-lifting set $D$ if there exists an item $k \in N \setminus (C \cup N_0)$ whose weight $a_k$ exceeds the residual knapsack capacity $b - \sum_{j \in D} a_j$. An analogous problem arises in Boccia's method if there exists an item $k \in F$ whose weight $a_k$ exceeds the residual knapsack capacity $\bar{b} = b - \sum_{j \in N_1} a_j$.
When this problem arises, we iteratively move items from $N_1$ to $F$ until no such item exists.

6 Computational Experiments

In this section, we present the results of some computational experiments, to assess the usefulness of our separation algorithms when applied to

LPs. We use the following very simple cutting plane algorithm:

1. Solve the initial LP relaxation of the problem.
2. If the solution is integer, stop.
3. For each row of the problem, call one or more separation algorithms. If any violated inequalities are found, add all of them to the LP, re-optimise and go to step 2.
4. Output the final upper bound and stop.

We use the primal simplex algorithm to solve the initial LP in step 1 and the dual simplex algorithm to re-optimise after cutting planes have been added. The algorithm was implemented in Microsoft Visual Studio .NET 2003 and calls functions from version 10.0 of the ILOG CPLEX Callable Library. All experiments were performed on a PC with a 2.8 GHz Intel Pentium IV processor and 512 MB of RAM.

6.1 Twelve instances from MIPLIB

We will present comprehensive results for some instances taken from MIPLIB. Across all versions of MIPLIB, there are a total of 16 pure 0-1 instances. We have decided to omit four of them, for reasons explained below. In all but one of the remaining 12 instances, the objective is to minimise cost. Table 1 displays some relevant information for these instances, including: instance name, number of variables $n$, number of constraints $m$, density (percentage of entries in the constraint matrix $A$ that are non-zero), number of genuine knapsack constraints (i.e., constraints whose left-hand-side coefficients are not all 0, 1 or −1), cost of the integer optimum, lower bound obtained by solving the initial LP relaxation (before cuts are added), and integrality gap, i.e., the gap between the initial lower bound and the optimum, expressed as a percentage of the optimum. We remark that the seven P instances were used by both Crowder et al. [11] and Boyd [8, 9] as benchmarks. All twelve instances were used as benchmarks by Gu et al. [16].
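In outline, the cutting plane loop of steps 1–4 above might be implemented as follows. The LP solver (CPLEX in the paper) and the per-row separation routines are abstracted as callables; `solve_lp` is assumed to return the optimal solution of the relaxation extended with the cuts found so far.

```python
# A minimal sketch of the cutting plane loop in steps 1-4 above.
def cutting_plane(solve_lp, rows, separators, is_integer, max_rounds=100):
    cuts = []
    x = solve_lp(cuts)              # step 1: initial LP relaxation
    for _ in range(max_rounds):
        if is_integer(x):           # step 2
            break
        new_cuts = []
        for row in rows:            # step 3: separate each knapsack row
            for separate in separators:
                new_cuts.extend(separate(row, x))
        if not new_cuts:
            break                   # no violated inequality found
        cuts.extend(new_cuts)
        x = solve_lp(cuts)          # re-optimise (dual simplex in the paper)
    return x, cuts                  # step 4: the final bound comes from x
```

The round limit guards against tailing-off, which the simple scheme in the text does not otherwise address.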
The four instances that we excluded were: l152lav, lp4l and mod010, because they each have only one genuine knapsack-type constraint; and P6000, partly because it is very large, but also because its integrality gap is only 0.01%. In a few cases, we observed knapsack constraints in which one or more left-hand-side coefficients exceeded the right-hand side. This phenomenon indicates the presence of one or more variables that can be fixed permanently at zero or one without losing the integer optimum. We performed this fixing right at the start, to avoid problems with our separation routines.
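The fixing step just described can be sketched as follows. In a knapsack row $\sum_j a_j x_j \le b$ with $a_j \ge 0$, a variable whose coefficient exceeds $b$ can never take the value 1, so it is fixed to 0 up front; fixings to 1 arise the same way once variables with negative coefficients have been complemented (that reading of the "zero or one" remark is an assumption).

```python
# Sketch of the preprocessing mentioned above (hypothetical helper).
def fix_oversized(a, b):
    """Return (indices fixed to zero, reduced coefficient list)."""
    fixed_zero = [j for j, aj in enumerate(a) if aj > b]
    reduced = [aj for aj in a if aj <= b]
    return fixed_zero, reduced
```

For example, in the row $5x_1 + 12x_2 + 3x_3 \le 10$, the variable $x_2$ is fixed to zero and the row shrinks to $5x_1 + 3x_3 \le 10$.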

[Table 1: Description of the MIPLIB instances. Columns: name, $n$, $m$, %Dens., KPs, Opt., Init. LB, IG (%); one row for each of the seven P instances and for bm23, lseu, mod008, pipex and sentoy. The numeric entries were lost in transcription.]

6.2 Application of CIs to MIPLIB instances

Table 2 reports the results obtained when applying CIs to the MIPLIB instances. The columns headed % gap closed report the percentage of the integrality gap closed by the inequalities, and the columns headed time (s) report the total running time of the cutting plane algorithm, in seconds. This includes the time taken to solve the initial LP, the time spent calling the separation algorithms and the time taken to re-optimise after adding the cutting planes. The columns headed CJP were obtained using the greedy heuristic of Crowder et al. [11]. The column headed Exact was obtained by solving the separation problem exactly. The column headed Ex1 reports the time taken when using the exact separation algorithm in its original form. The column headed Ex2 reports the time taken by a hybrid approach, in which the exact algorithm is called only when the Crowder et al. heuristic fails.

Although the CIs are theoretically very weak cutting planes, we see that they close half of the integrality gap on average. Moreover, the Crowder et al. heuristic does a remarkably good job. All of the running times are negligible (fractions of a second).

6.3 Application of ECIs to MIPLIB instances

Table 3 reports the results obtained when applying ECIs to the MIPLIB instances. The columns headed CJP were again obtained using the Crowder et al. heuristic. The columns headed GNS were obtained using the alternative greedy heuristic of Gu et al. [16]. The columns headed N.H. were obtained using the new heuristic presented in Subsection 3.3. The column headed Exact was obtained by solving the separation problem exactly. The

[Table 2: Results obtained when applying CIs to MIPLIB instances. Columns: %gap closed (CJP, Exact) and time (s) (CJP, Ex1, Ex2), with one row per instance and an average row. The numeric entries were lost in transcription.]

column headed Ex1 corresponds to the $O(n^2 b)$ exact algorithm described in Subsection 3.1. The column headed Ex2 corresponds to the $O(nb)$ exact algorithm described in Subsection 3.2. The missing entries are due to the fact that the algorithm ran into time and memory difficulties for the instance mod008. Finally, the column headed Ex3 corresponds to a hybrid approach, in which the $O(n^2 b)$ exact algorithm is called only when the new heuristic fails.

We found these results rather surprising, for several reasons. First, the ECIs do not usually close substantially more of the gap than the CIs, although they do for a few instances. Second, the Gu et al. heuristic performs no better than the Crowder et al. heuristic. (In fact, it performs slightly worse, but we doubt that the difference is statistically significant.) Third, the $O(nb)$ exact algorithm turned out to be much slower in practice than the $O(n^2 b)$ algorithm. Finally, we were also pleasantly surprised to find that our new separation heuristic closes more of the gap, on average, than the other two heuristics. All things considered, the new ECI heuristic and the new hybrid exact ECI algorithm look very promising.

[Table 3: Results obtained when applying ECIs to MIPLIB instances. Columns: %gap closed (CJP, GNS, N.H., Exact) and time (s) (CJP, GNS, N.H., Ex1, Ex2, Ex3), with one row per instance and an average row. The numeric entries were lost in transcription.]

6.4 Application of simple LCIs to MIPLIB instances

Next, we converted the ECI algorithms into simple LCI algorithms, as explained in Subsection 3.4. Remarkably, the simple LCIs closed no more of the gap than the ECIs, for any of the twelve instances, despite the fact that they are stronger in theory. Further investigation revealed that up-lifting coefficients larger than 1 occurred very rarely when simple LCIs were generated. As expected, the running times were slightly higher for simple LCIs than for ECIs. (For brevity, we do not present the times in tables.) All things considered, the simple LCIs exhibit disappointing performance for these instances.

6.5 Application of general LCIs to MIPLIB instances

Table 4 reports the results obtained when applying general LCIs to the MIPLIB instances. Four different separation heuristics are compared. The columns headed CJP and GNS were again obtained using the Crowder et al. and Gu et al. heuristics to generate the covers. The columns headed N.H.1 were obtained using our best exact ECI separation algorithm (labelled Ex3 in Subsection 6.3) to generate the covers. The columns headed N.H.2 were obtained using a hybrid method, in which our best ECI algorithm is called only when the Gu et al. heuristic fails to yield a violated LCI. In all four cases, the lifting sequence of Gu et al. [16] was used to derive the general LCIs (see Subsections 2.2 and 3.4).

The amount of gap closed by the general LCIs is significantly better than that closed by the ECIs on some instances, and especially so on pipex. This confirms the value of down-lifting. Moreover, in our view, the running times are extremely promising. All things considered, we recommend using the strategy represented by N.H.2.

[Table 4: Results obtained when applying general LCIs to MIPLIB instances. Columns: %gap closed (CJP, GNS, N.H.1, N.H.2) and time (s) (CJP, GNS, N.H.1, N.H.2), with one row per instance and an average row. The numeric entries were lost in transcription.]

6.6 Application of WIs to MIPLIB instances

Table 5 reports results obtained for the WIs, using three algorithms: the heuristic of Helmberg & Weismantel [20], the enhanced version of Weismantel's exact algorithm presented in Subsection 4.1, and a hybrid algorithm, in which the exact algorithm is called only when the heuristic fails. We do not report results for the $O((n + a_{\max})b)$ exact algorithm presented in Subsection 4.2, because it ran into serious time and memory problems for most of the instances.

We see that, for these instances, the WIs are less effective than the LCIs in terms of percentage gap closed. Moreover, our exact separation algorithm for WIs is excessively time-consuming on some of the instances, even if we call the Helmberg-Weismantel heuristic first. On the other hand, the Helmberg-Weismantel heuristic itself does remarkably well: it is much faster than the exact algorithm, yet closes the same percentage of the integrality gap as the exact algorithm on several instances.

We also experimented with several variants of our cutting plane algorithm in which LCIs and WIs were used in combination. Using our best scheme, the LCIs and WIs were able to close 64.54% of the gap on average. Considering that our best scheme for LCIs (see Table 4) already closes 63.13% of the gap on average, this suggests that the WIs are of little use for these instances. (We do not report detailed results, for the sake of brevity.)

[Table 5: Results obtained when applying WIs to MIPLIB instances. Columns: %gap closed (HW, Exact) and time (s) (HW, Ex1, Ex2), with one row per instance and an average row. The numeric entries were lost in transcription.]

6.7 Application of general knapsack facets to MIPLIB instances

Table 6 reports results obtained using general facets of the knapsack polytope. The percentage gap closed is shown, along with the running times for three variants of Boccia's separation scheme. The columns headed T1 and T2 correspond to Boccia's original scheme and our enhanced scheme, respectively. The column headed T3 was obtained with a scheme that incorporates all of our enhancements apart from the one mentioned in Subsection 5.2, i.e., a scheme that does not use LCIs.

Several things are apparent from the table. First, we see that the general knapsack facets close significantly more of the integrality gap than the LCIs and WIs. Indeed, they close most of the integrality gap on all instances apart from P0201, bm23 and sentoy. Second, our enhanced scheme is around four times faster, on average, than the original one. Third, it definitely pays off to call LCI separation first, provided of course that one has a fast and effective separation heuristic for LCIs. Fourth, our best exact separation algorithm for general knapsack facets is (bizarrely) less time-consuming than our best exact separation algorithm for WIs.

A couple of further comments are in order. First, we were using a general-purpose branch-and-bound solver to solve the knapsack subproblems. One could probably obtain a substantial speed-up using a specialised algorithm for the 0-1 KP (Boccia himself uses a modified version of the MINKNAP algorithm of Pisinger [26]). Second, we wish to point out that, for the P instances, the percentage gaps closed by the general knapsack facets are exactly the

[Table 6: Results obtained when applying general knapsack facets to MIPLIB instances. Columns: name, %gap, T1, T2, T3, with one row per instance and an average row. The numeric entries were lost in transcription.]

same as those reported by Boyd [8, 9], with one exception: for P2756, Boyd obtained a slightly smaller value of 86.16%. Indeed, this instance contains a few very dense constraints that, according to Boyd, caused his algorithm to run into difficulties.

6.8 Multi-dimensional knapsack instances

To gain further insight into the relative performance of the different inequalities, and also to explore the limits of our approaches, we also performed experiments on the 0-1 multi-dimensional knapsack problem (MKP). We used the library of MKP instances created by Chu & Beasley [10]. In these instances, the constraint matrix $A$ consists entirely of positive integers and is therefore 100% dense. Although the library contains 270 instances, the largest instances have not yet been solved to proven optimality (see, e.g., Vimont et al. [28]). Thus, we restrict our attention to the 180 instances for which the optima are known.

The 180 instances are arranged in blocks of ten. Table 7 reports the following information for each block. The first three columns show the number of variables $n$, the number of constraints $m$ and the so-called tightness ratio $\alpha$. The next column shows the percentage integrality gap of the initial LP relaxation (before the addition of cutting planes). The remaining five columns show the percentage of the integrality gap closed by each of five classes of inequalities. Each figure in the last six columns is averaged over the given ten instances.

Before interpreting the results, there are three points that we need to


More information

MA4254: Discrete Optimization. Defeng Sun. Department of Mathematics National University of Singapore Office: S Telephone:

MA4254: Discrete Optimization. Defeng Sun. Department of Mathematics National University of Singapore Office: S Telephone: MA4254: Discrete Optimization Defeng Sun Department of Mathematics National University of Singapore Office: S14-04-25 Telephone: 6516 3343 Aims/Objectives: Discrete optimization deals with problems of

More information

Optimality certificates for convex minimization and Helly numbers

Optimality certificates for convex minimization and Helly numbers Optimality certificates for convex minimization and Helly numbers Amitabh Basu Michele Conforti Gérard Cornuéjols Robert Weismantel Stefan Weltge May 10, 2017 Abstract We consider the problem of minimizing

More information

The Simplex Algorithm

The Simplex Algorithm The Simplex Algorithm Uri Feige November 2011 1 The simplex algorithm The simplex algorithm was designed by Danzig in 1947. This write-up presents the main ideas involved. It is a slight update (mostly

More information

The Size Robust Multiple Knapsack Problem

The Size Robust Multiple Knapsack Problem MASTER THESIS ICA-3251535 The Size Robust Multiple Knapsack Problem Branch and Price for the Separate and Combined Recovery Decomposition Model Author: D.D. Tönissen, Supervisors: dr. ir. J.M. van den

More information

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture 16 Cutting Plane Algorithm We shall continue the discussion on integer programming,

More information

CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 14: Combinatorial Problems as Linear Programs I. Instructor: Shaddin Dughmi

CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 14: Combinatorial Problems as Linear Programs I. Instructor: Shaddin Dughmi CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 14: Combinatorial Problems as Linear Programs I Instructor: Shaddin Dughmi Announcements Posted solutions to HW1 Today: Combinatorial problems

More information

Advanced Operations Research Techniques IE316. Quiz 2 Review. Dr. Ted Ralphs

Advanced Operations Research Techniques IE316. Quiz 2 Review. Dr. Ted Ralphs Advanced Operations Research Techniques IE316 Quiz 2 Review Dr. Ted Ralphs IE316 Quiz 2 Review 1 Reading for The Quiz Material covered in detail in lecture Bertsimas 4.1-4.5, 4.8, 5.1-5.5, 6.1-6.3 Material

More information

Optimality certificates for convex minimization and Helly numbers

Optimality certificates for convex minimization and Helly numbers Optimality certificates for convex minimization and Helly numbers Amitabh Basu Michele Conforti Gérard Cornuéjols Robert Weismantel Stefan Weltge October 20, 2016 Abstract We consider the problem of minimizing

More information

Integer Programming ISE 418. Lecture 7. Dr. Ted Ralphs

Integer Programming ISE 418. Lecture 7. Dr. Ted Ralphs Integer Programming ISE 418 Lecture 7 Dr. Ted Ralphs ISE 418 Lecture 7 1 Reading for This Lecture Nemhauser and Wolsey Sections II.3.1, II.3.6, II.4.1, II.4.2, II.5.4 Wolsey Chapter 7 CCZ Chapter 1 Constraint

More information

Outline. Column Generation: Cutting Stock A very applied method. Introduction to Column Generation. Given an LP problem

Outline. Column Generation: Cutting Stock A very applied method. Introduction to Column Generation. Given an LP problem Column Generation: Cutting Stock A very applied method thst@man.dtu.dk Outline History The Simplex algorithm (re-visited) Column Generation as an extension of the Simplex algorithm A simple example! DTU-Management

More information

Column Generation: Cutting Stock

Column Generation: Cutting Stock Column Generation: Cutting Stock A very applied method thst@man.dtu.dk DTU-Management Technical University of Denmark 1 Outline History The Simplex algorithm (re-visited) Column Generation as an extension

More information

Column Generation Method for an Agent Scheduling Problem

Column Generation Method for an Agent Scheduling Problem Column Generation Method for an Agent Scheduling Problem Balázs Dezső Alpár Jüttner Péter Kovács Dept. of Algorithms and Their Applications, and Dept. of Operations Research Eötvös Loránd University, Budapest,

More information

Selected Topics in Column Generation

Selected Topics in Column Generation Selected Topics in Column Generation February 1, 2007 Choosing a solver for the Master Solve in the dual space(kelly s method) by applying a cutting plane algorithm In the bundle method(lemarechal), a

More information

COLUMN GENERATION IN LINEAR PROGRAMMING

COLUMN GENERATION IN LINEAR PROGRAMMING COLUMN GENERATION IN LINEAR PROGRAMMING EXAMPLE: THE CUTTING STOCK PROBLEM A certain material (e.g. lumber) is stocked in lengths of 9, 4, and 6 feet, with respective costs of $5, $9, and $. An order for

More information

CS 580: Algorithm Design and Analysis. Jeremiah Blocki Purdue University Spring 2018

CS 580: Algorithm Design and Analysis. Jeremiah Blocki Purdue University Spring 2018 CS 580: Algorithm Design and Analysis Jeremiah Blocki Purdue University Spring 2018 Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved.

More information

Last topic: Summary; Heuristics and Approximation Algorithms Topics we studied so far:

Last topic: Summary; Heuristics and Approximation Algorithms Topics we studied so far: Last topic: Summary; Heuristics and Approximation Algorithms Topics we studied so far: I Strength of formulations; improving formulations by adding valid inequalities I Relaxations and dual problems; obtaining

More information

Framework for Design of Dynamic Programming Algorithms

Framework for Design of Dynamic Programming Algorithms CSE 441T/541T Advanced Algorithms September 22, 2010 Framework for Design of Dynamic Programming Algorithms Dynamic programming algorithms for combinatorial optimization generalize the strategy we studied

More information

11.1 Facility Location

11.1 Facility Location CS787: Advanced Algorithms Scribe: Amanda Burton, Leah Kluegel Lecturer: Shuchi Chawla Topic: Facility Location ctd., Linear Programming Date: October 8, 2007 Today we conclude the discussion of local

More information

CMPSCI611: The Simplex Algorithm Lecture 24

CMPSCI611: The Simplex Algorithm Lecture 24 CMPSCI611: The Simplex Algorithm Lecture 24 Let s first review the general situation for linear programming problems. Our problem in standard form is to choose a vector x R n, such that x 0 and Ax = b,

More information

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs Advanced Operations Research Techniques IE316 Quiz 1 Review Dr. Ted Ralphs IE316 Quiz 1 Review 1 Reading for The Quiz Material covered in detail in lecture. 1.1, 1.4, 2.1-2.6, 3.1-3.3, 3.5 Background material

More information

Reload Cost Trees and Network Design

Reload Cost Trees and Network Design Reload Cost Trees and Network Design Ioannis Gamvros, ILOG, Inc., 1080 Linda Vista Avenue, Mountain View, CA 94043, USA Luis Gouveia, Faculdade de Ciencias da Universidade de Lisboa, Portugal S. Raghavan,

More information

Computational Integer Programming. Lecture 12: Branch and Cut. Dr. Ted Ralphs

Computational Integer Programming. Lecture 12: Branch and Cut. Dr. Ted Ralphs Computational Integer Programming Lecture 12: Branch and Cut Dr. Ted Ralphs Computational MILP Lecture 12 1 Reading for This Lecture Wolsey Section 9.6 Nemhauser and Wolsey Section II.6 Martin Computational

More information

From the Separation to the Intersection Sub-problem in Benders Decomposition Models with Prohibitively-Many Constraints

From the Separation to the Intersection Sub-problem in Benders Decomposition Models with Prohibitively-Many Constraints From the Separation to the Intersection Sub-problem in Benders Decomposition Models with Prohibitively-Many Constraints Daniel Porumbel CEDRIC CS Lab, CNAM, 292 rue Saint-Martin, F-75141 Paris, France

More information

Probabilistic Graphical Models

Probabilistic Graphical Models School of Computer Science Probabilistic Graphical Models Theory of Variational Inference: Inner and Outer Approximation Eric Xing Lecture 14, February 29, 2016 Reading: W & J Book Chapters Eric Xing @

More information

On the Max Coloring Problem

On the Max Coloring Problem On the Max Coloring Problem Leah Epstein Asaf Levin May 22, 2010 Abstract We consider max coloring on hereditary graph classes. The problem is defined as follows. Given a graph G = (V, E) and positive

More information

Algorithms for Decision Support. Integer linear programming models

Algorithms for Decision Support. Integer linear programming models Algorithms for Decision Support Integer linear programming models 1 People with reduced mobility (PRM) require assistance when travelling through the airport http://www.schiphol.nl/travellers/atschiphol/informationforpassengerswithreducedmobility.htm

More information

From final point cuts to!-polyhedral cuts

From final point cuts to!-polyhedral cuts AUSSOIS 2017 From final point cuts to!-polyhedral cuts Egon Balas, Aleksandr M. Kazachkov, François Margot Tepper School of Business, Carnegie Mellon University Overview Background Generalized intersection

More information

A Computational Study of Bi-directional Dynamic Programming for the Traveling Salesman Problem with Time Windows

A Computational Study of Bi-directional Dynamic Programming for the Traveling Salesman Problem with Time Windows A Computational Study of Bi-directional Dynamic Programming for the Traveling Salesman Problem with Time Windows Jing-Quan Li California PATH, University of California, Berkeley, Richmond, CA 94804, jingquan@path.berkeley.edu

More information

Linear Programming Duality and Algorithms

Linear Programming Duality and Algorithms COMPSCI 330: Design and Analysis of Algorithms 4/5/2016 and 4/7/2016 Linear Programming Duality and Algorithms Lecturer: Debmalya Panigrahi Scribe: Tianqi Song 1 Overview In this lecture, we will cover

More information

Decomposition and Dynamic Cut Generation in Integer Linear Programming

Decomposition and Dynamic Cut Generation in Integer Linear Programming Decomposition and Dynamic Cut Generation in Integer Linear Programming M.V. Galati Advanced Analytics - Operations R & D, SAS Institute, Chesterbrook, PA 908 Ted K. Ralphs Department of Industrial and

More information

Algorithms and Data Structures

Algorithms and Data Structures Algorithms and Data Structures Spring 2019 Alexis Maciel Department of Computer Science Clarkson University Copyright c 2019 Alexis Maciel ii Contents 1 Analysis of Algorithms 1 1.1 Introduction.................................

More information

Investigating Mixed-Integer Hulls using a MIP-Solver

Investigating Mixed-Integer Hulls using a MIP-Solver Investigating Mixed-Integer Hulls using a MIP-Solver Matthias Walter Otto-von-Guericke Universität Magdeburg Joint work with Volker Kaibel (OvGU) Aussois Combinatorial Optimization Workshop 2015 Outline

More information

6 Randomized rounding of semidefinite programs

6 Randomized rounding of semidefinite programs 6 Randomized rounding of semidefinite programs We now turn to a new tool which gives substantially improved performance guarantees for some problems We now show how nonlinear programming relaxations can

More information

From the Separation to the Intersection Sub-problem in Benders Decomposition Models with Prohibitively-Many Constraints

From the Separation to the Intersection Sub-problem in Benders Decomposition Models with Prohibitively-Many Constraints From the Separation to the Intersection Sub-problem in Benders Decomposition Models with Prohibitively-Many Constraints Daniel Porumbel CEDRIC CS Lab, CNAM, 292 rue Saint-Martin, F-75141 Paris, France

More information

Construction of Minimum-Weight Spanners Mikkel Sigurd Martin Zachariasen

Construction of Minimum-Weight Spanners Mikkel Sigurd Martin Zachariasen Construction of Minimum-Weight Spanners Mikkel Sigurd Martin Zachariasen University of Copenhagen Outline Motivation and Background Minimum-Weight Spanner Problem Greedy Spanner Algorithm Exact Algorithm:

More information

General properties of staircase and convex dual feasible functions

General properties of staircase and convex dual feasible functions General properties of staircase and convex dual feasible functions JÜRGEN RIETZ, CLÁUDIO ALVES, J. M. VALÉRIO de CARVALHO Centro de Investigação Algoritmi da Universidade do Minho, Escola de Engenharia

More information

Winning Positions in Simplicial Nim

Winning Positions in Simplicial Nim Winning Positions in Simplicial Nim David Horrocks Department of Mathematics and Statistics University of Prince Edward Island Charlottetown, Prince Edward Island, Canada, C1A 4P3 dhorrocks@upei.ca Submitted:

More information

A Branch-and-Bound Algorithm for the Knapsack Problem with Conflict Graph

A Branch-and-Bound Algorithm for the Knapsack Problem with Conflict Graph A Branch-and-Bound Algorithm for the Knapsack Problem with Conflict Graph Andrea Bettinelli, Valentina Cacchiani, Enrico Malaguti DEI, Università di Bologna, Viale Risorgimento 2, 40136 Bologna, Italy

More information

Exploiting Degeneracy in MIP

Exploiting Degeneracy in MIP Exploiting Degeneracy in MIP Tobias Achterberg 9 January 2018 Aussois Performance Impact in Gurobi 7.5+ 35% 32.0% 30% 25% 20% 15% 14.6% 10% 5.7% 7.9% 6.6% 5% 0% 2.9% 1.2% 0.1% 2.6% 2.6% Time limit: 10000

More information

3 Fractional Ramsey Numbers

3 Fractional Ramsey Numbers 27 3 Fractional Ramsey Numbers Since the definition of Ramsey numbers makes use of the clique number of graphs, we may define fractional Ramsey numbers simply by substituting fractional clique number into

More information

5. Lecture notes on matroid intersection

5. Lecture notes on matroid intersection Massachusetts Institute of Technology Handout 14 18.433: Combinatorial Optimization April 1st, 2009 Michel X. Goemans 5. Lecture notes on matroid intersection One nice feature about matroids is that a

More information

Cutting Planes for Some Nonconvex Combinatorial Optimization Problems

Cutting Planes for Some Nonconvex Combinatorial Optimization Problems Cutting Planes for Some Nonconvex Combinatorial Optimization Problems Ismael Regis de Farias Jr. Department of Industrial Engineering Texas Tech Summary Problem definition Solution strategy Multiple-choice

More information

4 Integer Linear Programming (ILP)

4 Integer Linear Programming (ILP) TDA6/DIT37 DISCRETE OPTIMIZATION 17 PERIOD 3 WEEK III 4 Integer Linear Programg (ILP) 14 An integer linear program, ILP for short, has the same form as a linear program (LP). The only difference is that

More information

11. APPROXIMATION ALGORITHMS

11. APPROXIMATION ALGORITHMS 11. APPROXIMATION ALGORITHMS load balancing center selection pricing method: vertex cover LP rounding: vertex cover generalized load balancing knapsack problem Lecture slides by Kevin Wayne Copyright 2005

More information

LP-Modelling. dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven. January 30, 2008

LP-Modelling. dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven. January 30, 2008 LP-Modelling dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven January 30, 2008 1 Linear and Integer Programming After a brief check with the backgrounds of the participants it seems that the following

More information

Design and Analysis of Algorithms (V)

Design and Analysis of Algorithms (V) Design and Analysis of Algorithms (V) An Introduction to Linear Programming Guoqiang Li School of Software, Shanghai Jiao Tong University Homework Assignment 2 is announced! (deadline Apr. 10) Linear Programming

More information

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if POLYHEDRAL GEOMETRY Mathematical Programming Niels Lauritzen 7.9.2007 Convex functions and sets Recall that a subset C R n is convex if {λx + (1 λ)y 0 λ 1} C for every x, y C and 0 λ 1. A function f :

More information