
Online Supplement to A Generalized Wedelin Heuristic for Integer Programming

Oliver Bastert, Fair Isaac, Leam House, 64 Trinity Street, Leamington Spa, Warwicks CV32 5YN, UK, OliverBastert@fairisaac.com
Benjamin Hummel, Institut für Informatik, Technische Universität München, Garching bei München, Germany, hummelb@in.tum.de
Sven de Vries, FB IV - Mathematik, Universität Trier, Trier, Germany, devries@uni-trier.de

A. Other improvements

This section complements Section 4 in the main paper by listing other approaches explored for improving the results of the heuristic. As those listed here are helpful only for special instances and thus not used in our evaluation on an inhomogeneous set of instances, they were moved to this online supplement.

A.1 Adaptive adjustment of κ

Wedelin (1995) mentions that the value of κ should be increased from iteration to iteration and that this increment should be small near the point of convergence (which was found during another run). Unfortunately, only little detail is given on how to control κ; therefore Davey (1995) and Mason (2001) started anew and chose a simple linear function to control it. The function we use for the value of κ is
$$\kappa = \kappa_{\min} + \max\{0,\, i - w\} \cdot \kappa_{\text{step}},$$
where i is the number of iterations performed and w the number of warmup iterations during which the value of κ is not changed. Usually κ_min is chosen to be 0, and an additional parameter κ_max is selected which denotes the maximal value of κ allowed before reporting infeasibility.

A general problem with this approach is that we have to use the same step size for both the initial iterations and the final iterations near convergence. Choosing κ_step small will result in increased running time due to the greater number of iterations needed to reach the point of convergence. On the other hand, we might miss the best convergence spot for a bigger κ_step and thus lose in the quality of the solution found. One solution to this problem would be to extend the scheme presented above to a piecewise linear function whose step size is determined by the results of a previous run, similar to the way suggested by Wedelin (1995). This, however, introduces new problems: how to choose the parameters for the first run, how to derive parameters for the next run, and what to do if the fast initial run does not find a feasible solution.

To circumvent this problem we explored a different idea. It can be expected, and is confirmed by our practical experience with the algorithm, that the relative number of violated constraints usually decreases significantly near the point of convergence. Therefore we calculate the current step size based on this number; with the new adaptiveness parameter α we use the following function to update κ after every iteration:
$$\kappa_{\text{new}} = \begin{cases} \kappa_{\min} & \text{if } i \le w,\\ \kappa_{\text{old}} + \kappa_{\text{step}} \cdot \left(\tfrac{|R|}{m}\right)^{\alpha} & \text{otherwise,} \end{cases}$$
where R is the set of violated constraints and m is the total number of constraints. Setting α = 0 gives the same linear scheme as above, while increasing α couples the step size more strongly to the current number of violated constraints. The computational results collected in Table 1 confirm the assumption that a decreased step size usually increases both solution quality and running time. Furthermore, they show that a value of α = 1 usually improves the quality of the solution, often with only a moderate slowdown. A drawback of this method is its problem dependency.
While for some problems the fraction of violated constraints typically is around 0.3, for other problems it is far smaller. Thus α = 1 might be a good value for problems of the first class, while for a problem of the second class the resulting step size will be too small to obtain results within reasonable time.
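To make the adaptive schedule concrete, the following is a minimal Python sketch of one κ update (not the authors' implementation; the parameter names mirror the symbols above, and the default values are only illustrative):

    def update_kappa(kappa_old, iteration, num_violated, num_constraints,
                     kappa_min=0.0, kappa_step=1e-3, warmup=20, alpha=1.0):
        # One step of the adaptive kappa schedule described above.
        # alpha = 0 reproduces the plain linear scheme (each post-warmup
        # iteration adds kappa_step); larger alpha couples the step size to
        # the current fraction |R|/m of violated constraints.
        if iteration <= warmup:
            return kappa_min
        violated_fraction = num_violated / num_constraints  # |R| / m
        return kappa_old + kappa_step * violated_fraction ** alpha

The caller would stop and report infeasibility once the returned value exceeds κ_max, as described above.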

Table 1: Results for different step sizes and adaptiveness parameters (running times for the instances air04, air05, eild76, and nw04 under various combinations of κ_step and α). Best times and smallest values for each minimization problem are emphasized.

Hence in Section 5.4 of the main paper we use α = 0, but for the discrete tomography instances of little diversity in Section 5.6 setting α = 1 helps in reliably finding solutions. A remedy for the general case might be to analyze the first few iterations to determine the values of κ_step and α, but this was not pursued further.

A.2 Changing constraint order

Motivated by some ideas presented by Davey (1995) we added the ability to sort the constraints in every iteration of our implementation of Wedelin's heuristic. This specifies the order in which we step through the elements of R in the basic algorithm to apply the update step. Besides the option to keep the constraints ordered as specified in the problem (no sorting), we implemented the possibilities to reverse their order in every iteration or to perturb them randomly. Furthermore, we added sorting by infeasibility, which is directly based on a selection criterion from Davey (1995). For a constraint $l_k \le \sum_i a_{ki} x_i \le u_k$ and a given solution $\hat{x}$ we define the infeasibility $\iota_k$ as
$$\iota_k = \frac{1}{\max_i a_{ki}} \cdot \begin{cases} 0 & \text{if } l_k \le \sum_i a_{ki}\hat{x}_i \le u_k,\\ l_k - \sum_i a_{ki}\hat{x}_i & \text{if } \sum_i a_{ki}\hat{x}_i < l_k,\\ \sum_i a_{ki}\hat{x}_i - u_k & \text{if } u_k < \sum_i a_{ki}\hat{x}_i. \end{cases}$$
The first factor scales the infeasibility measure by the largest coefficient, so it is independent of multiplying the row by a constant. As $\hat{x}$ and $a_{ki}$ are always integral in our context, $\iota_k$ is non-negative and, as long as we only deal with ±1 coefficients, integral. We tested sorting by $\iota_k$ both in increasing and decreasing order, while ties (which occur quite often) were resolved randomly. Computational results for different ordering strategies can be found in Table 2. Unfortunately, no method appears to dominate the others. For more structurally homogeneous instances, this idea might be worth pursuing.
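As an illustration of this sorting criterion, a small Python sketch follows (the row objects with lower, upper, and coeffs attributes are hypothetical; only the logic of ι_k is taken from the definition above, using absolute values of the coefficients so the measure is independent of the sign of a row multiplier):

    import random

    def infeasibility(lower, upper, coeffs, x_hat):
        # iota_k for a constraint  lower <= sum_i a_ki * x_i <= upper,
        # scaled by the largest absolute coefficient of the row.
        activity = sum(a * x_hat[i] for i, a in coeffs.items())
        scale = max(abs(a) for a in coeffs.values())
        if activity < lower:
            return (lower - activity) / scale
        if activity > upper:
            return (activity - upper) / scale
        return 0.0

    def sort_rows_by_infeasibility(rows, x_hat, decreasing=True):
        # Order rows by iota_k, resolving ties randomly as in the text.
        return sorted(rows,
                      key=lambda r: (infeasibility(r.lower, r.upper, r.coeffs, x_hat),
                                     random.random()),
                      reverse=decreasing)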

Table 2: Results for different constraint sorting (times omitted as they were similar for all sorting methods and instances, except for nw04). Instances: air05, eild76, manna81, nw04, sp97ar; sorting methods: none, reversing, random sorting, infeasibility (decr.), infeasibility (incr.).

A.3 Utilizing LP information

When used in the context of an IP solver, a primal/dual solution of the relaxed (not integrality restricted) linear problem is usually calculated as a first step and is thus available for free, so using additional LP information might be worthwhile. Here we pursue four different ways to utilize LP information, starting with one way to use a dual solution and then continuing with three ways to use the primal solution to restrict the solution space.

A.3.1 Initializing from an LP solution

The method proposed by Davey (1995) is to initialize the Lagrangian multipliers $\hat{\pi}$ with an optimal dual solution $\pi^{LP}$ of the LP relaxation. This way the algorithm starts with a solution that is (hopefully) closer to the integer optimum and thus yields faster convergence and better quality. According to our computations (Table 3) this might be true for some cases, while for other problems the use of a dual solution seems to mislead the heuristic. The solution times are mainly increased due to the additional time needed to load the (pre-calculated) simplex solution. Although this idea sometimes improves the solution compared to the base algorithm, we will see in the next subsections other, often better uses for LP information.

A.3.2 Restricting solution space by bounding the objective

For some problems it is too easy to find a feasible solution. Especially when dealing with inequality constraints, the set of feasible solutions is so large that the Wedelin algorithm often finds a solution after just a few iterations. While this is good for the running time of the algorithm, the solution quality in this case will usually be far from optimal. In this subsection (and the next two) we present some ideas that help to restrict the solution space to a smaller region that (hopefully still contains the optimum and) is more likely to yield a near-optimal solution. These methods all have in common that they add one or more constraints and need an LP solution or some other problem-specific knowledge to find the region which is most promising for producing good solutions.

The first idea is very simple. Assume that an upper bound u for the optimal value is known a priori. Then we could simply add the constraint $c^\top x \le u$ (where c is the objective vector) to our problem. There are two issues with this approach: how to find such a value u, and how to include this constraint in the update step, as c might have fractional entries. To find a suitable value u we could either use problem-specific knowledge or use the LP objective as an estimate and derive u by multiplying it by some value 1 + ε. It should be noted that choosing u too small results in an infeasible problem. The handling of the fractional coefficients in the constraint $c^\top x \le u$ is easy in this case, because we only have an upper bound: after treating variables with negative coefficients as shown in Section 3.3 of the main paper, we can use a scanning approach similar to Section 3.2. Without a lower bound we do not have to split any of these variables (or even apply rounding to the coefficients).
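A minimal sketch of this restriction, assuming the LP objective value z_lp is already available and the constraint store is a simple list of dictionaries (both of which are assumptions of this sketch, not the authors' data structures):

    def add_objective_bound(constraints, c, z_lp, eps=0.05):
        # Append the cut  c^T x <= u  with  u = (1 + eps) * z_lp.
        # Because only an upper bound is added, no variable splitting or
        # coefficient rounding is needed (cf. Sections 3.2/3.3 of the main paper).
        # Note: choosing u too small makes the problem infeasible, and for a
        # negative z_lp the factor (1 + eps) tightens rather than relaxes.
        u = (1.0 + eps) * z_lp
        constraints.append({"coeffs": dict(enumerate(c)),
                            "lower": float("-inf"),
                            "upper": u})
        return u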
A.3.3 Restricting solution space by local branching constraints

Our second method for restricting the solution space is based on the local branching constraints of Fischetti and Lodi (2003), which they use to improve a known integer solution by adding constraints that restrict the search of the MIP solver to a neighborhood of that solution.

Table 3: Results for different approaches for using an existing LP solution (solution times in seconds; LP-solution time excluded).

    setting                                  air05   core2536  eild76  fast0507  manna81  nw04    seymour
    normal                                     –       1.29s    0.42s    3.62s    0.30s   13.33s   0.27s
    dual LP solution                         1.08s     2.86s    0.60s    9.16s    1.18s   16.54s   0.87s
    bounding objective (1.2 times LP obj.)   1.06s    11.66s    1.39s   80.87s    2.37s   14.81s   1.32s
    local branching cons. (k = 15)           1.14s     3.13s    0.65s   12.04s    1.21s   17.31s   0.87s
    bounds tightened (d = 1)                 1.11s     5.47s    0.58s   10.97s    1.80s   17.18s   5.48s
    previous two together                    1.12s     6.33s    0.66s   13.25s    1.80s   17.28s   1.03s

Although we usually have no integer solution for our problems when running the heuristic, we might have an LP solution where (due to the properties of a simplex basis) a large fraction of the variables often have integer values. Now let $x^{LP}$ be a simplex solution of the LP relaxation of the problem (with $x \in [0,1]^n$) and let $D := \{i \mid x^{LP}_i = 0\}$ and $E := \{i \mid x^{LP}_i = 1\}$ be the index sets of those variables whose LP value is 0 or 1. The adapted local branching constraint is then
$$\sum_{i \in D} x_i + \sum_{i \in E} (1 - x_i) \le k,$$
where k should be chosen in the range [10, 20], independent of problem size, according to Fischetti and Lodi. This constraint allows at most k of those fixed variables to differ from their LP values. Although Fischetti and Lodi (2003) describe how to generalize these constraints, we decided instead to handle problems with general integer variables by simply excluding their indices from D and E; this alone already improved solution quality substantially.

A.3.4 Restricting solution space by tightening bounds

The last idea can be considered dual to the previous one, as it involves modifying the bounds of the constraints. Often the feasible region is huge due to the absence of upper bounds for the constraints, so it seems natural to tighten the constraint bounds to some extent. Again let $x^{LP}$ be the LP solution of the problem mentioned above. For every constraint $l \le a^\top x \le u$ of the problem we consider the constraint $a^\top x^{LP} - d \le a^\top x \le a^\top x^{LP} + d$, where d is the distance from the LP solution we allow, and replace the original constraint by
$$\max(a^\top x^{LP} - d,\ l) \le a^\top x \le \min(a^\top x^{LP} + d,\ u).$$

A.3.5 Comparison of different ways to utilize the LP relaxation

To give an impression of the abilities of these methods (which can of course also be used in conjunction with each other) we have collected some results in Table 3. It sometimes seems to be a good idea to restrict the solution space: while for some problems there is no gain or even a slight deterioration, other problems seem to benefit. The drawbacks are the somewhat increased run-times, the risk of getting no solution due to an infeasible problem, and the need for an LP solution (which might come at no cost in some applications). When evaluated over our entire test set (see Section 5.4 of the main paper), however, using no LP solution at all was the best option on average. But as Table 3 suggests, the best combination of these methods depends on the problem at hand, so in the case of a large set of structurally similar instances it might be a good idea to try several of these approaches on a smaller test set beforehand to decide on the final settings.
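To illustrate the two restrictions of Sections A.3.3 and A.3.4, a small sketch follows (the constraint representation is hypothetical; k = 15 and d = 1 are the values used in Table 3):

    def local_branching_row(x_lp, general_integer_vars, k=15, tol=1e-6):
        # Build  sum_{i in D} x_i + sum_{i in E} (1 - x_i) <= k,
        # rewritten as  sum_D x_i - sum_E x_i <= k - |E|.
        # General-integer variables are simply left out of D and E (cf. A.3.3).
        coeffs, rhs = {}, float(k)
        for i, v in enumerate(x_lp):
            if i in general_integer_vars:
                continue
            if v <= tol:              # i in D
                coeffs[i] = 1.0
            elif v >= 1.0 - tol:      # i in E
                coeffs[i] = -1.0
                rhs -= 1.0            # constant of (1 - x_i) moves to the right-hand side
        return coeffs, rhs

    def tighten_row_bounds(lower, upper, coeffs, x_lp, d=1.0):
        # Replace  lower <= a^T x <= upper  by
        # max(a^T x_lp - d, lower) <= a^T x <= min(a^T x_lp + d, upper)  (cf. A.3.4).
        activity = sum(a * x_lp[i] for i, a in coeffs.items())
        return max(activity - d, lower), min(activity + d, upper)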

A.4 Cycle avoidance

When tracing the number of infeasible rows for the current value of $\hat{x}$ over the iterations, one usually gets a plot as shown in Figure 1(a), where the number of infeasible constraints starts high and decreases over time. Although the numbers tend to go up and down between iterations, the general trend is clearly downwards, until finally a solution is found. The only difference between most instances is the number of iterations needed to reach this point. However, for some instances, such as v0415 shown in Figure 1(b), there are plateaus on which the number of infeasible constraints seems to stabilize. A closer look reveals that the same sets of infeasible rows are encountered in alternation. We call this behavior a cycle. The cycle is usually left after some time, when the value of κ is large enough. In the plot shown, however, the algorithm reports infeasibility, as we have set κ_max to 0.6 and κ_step to 10^{-3}, which leads to a maximum of 620 iterations (including 20 warmup iterations with κ = 0). The relevant part of Figure 1(b) is magnified in Figure 1(c).

The obvious solution to this problem is to detect these cycles and take some kind of countermeasure. For the first step we keep track of the rows that have been infeasible during (some of) the last 24 iterations (for speed, only hash values are used in the implementation) and check for repetitive patterns (most cycles we found were of length 3 or 5, although we also found longer ones). To break out of the cycle, we apply the same trick as with the pushing operation and treat all rows as infeasible in the next iteration, which usually perturbs $\hat{x}$ enough to escape the cycle. The result of this is seen in Figure 1(d), where the cycle is detected in iteration 598. The forced update of all rows suffices in this case to find a solution in iteration 609. We only show the last part of the plot, as no cycle was found earlier (the very long plateau actually contains some irregularities), so the plots look the same for the initial iterations.

Overall, cycling only seems to affect a small fraction of our instances and also depends on the settings used. For those instances which are affected by cycling, our approach often helps in finding a solution earlier, which makes some of the instances solvable for the heuristic. For those we could already solve without dealing with cycles, we found no improvement in solution quality, but we encountered too few cycles in our test set for a final conclusion. As our approach of cycle detection and escaping has no negative impact on runs without cycles, we decided to use it for all benchmarks in Section 5.
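A minimal sketch of the cycle detection described above (the history length of 24 comes from the text; the maximal cycle length checked and the way a row set is hashed are illustrative choices):

    class CycleDetector:
        # Remember hashes of the violated-row sets of recent iterations and
        # report when the most recent pattern repeats, i.e. a cycle is suspected.

        def __init__(self, history_length=24, max_cycle_length=8):
            self.history_length = history_length
            self.max_cycle_length = max_cycle_length
            self.hashes = []

        def record(self, violated_rows):
            self.hashes.append(hash(frozenset(violated_rows)))
            self.hashes = self.hashes[-self.history_length:]

        def cycle_found(self):
            h = self.hashes
            for period in range(2, self.max_cycle_length + 1):
                if len(h) >= 2 * period and h[-period:] == h[-2 * period:-period]:
                    return True
            return False

If cycle_found() reports a repetition, the escape step described above simply treats all rows as infeasible in the next iteration.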
A.5 Other ideas

Alefragis, Sanders, Takkula, and Wedelin (2000) examine several ideas for the parallelization and speedup of the Wedelin heuristic. One of these is the active set strategy. An observation that leads to the idea of an active set quite naturally is the fact that over the iterations only a small fraction of the variables will change their values. Ideally one would determine all variables that might change beforehand and then compute only on this smaller set. Unfortunately this does not seem possible, so the active set has to be generated during the progress of the algorithm. We implemented the active set strategy, including some adjustments to support our extensions of Wedelin's heuristic. However, our implementation could not achieve (even after further investigation) the up to 100-fold speed-up or the improved solution quality reported by Alefragis et al. (2000). This could indicate that our implementation of active sets is suboptimal (the mentioned paper provides only few details), or that our base implementation is better, or that the observed speed-up is due to other aspects, such as their aggressive optimization for the caching architecture, which we did not implement.

B. Detailed results

This section contains the detailed results which are too long to be included in the paper.

B.1 Results for instances solved by all heuristics

It is conceivable that the 600 second time limit we used for the results in Section 5.4 penalizes heuristics which keep trying to find a solution as long as possible, since if they do not converge they might run the full duration, while our heuristic might stop earlier due to its iteration limit. To rule this out, we present the performance profiles for those 50 instances that were solved by all 5 heuristics within the 600 second limit.

Figure 1: Plots of the relative number of modified (= infeasible) rows in each iteration: (a) instance disctom, (b) instance v0415, (c) instance v0415 (magnified), (d) instance v0415 with active cycle avoidance (magnified).

Table 4: The different settings used for the Xpress optimizer for the three reference runs XP heuristics, XP first solution, and feasibility pump (parameters set include miplog = 3, maxnode = -1, maxmipsol = 1, cutstrategy = 0, feasibilitypump = 1, and heurselect).

Figure 2: Performance profiles for the set of 50 instances which could be solved by all solvers within 600 sec: (a) profile of solution times, (b) profile of values (curves: Wedelin fast, Wedelin good, feasibility pump, XP first solution, XP heuristic).

Clearly, on these instances the 600 second limit is never binding. The performance profiles for this set are given in Figure 2 and support our findings in Section 5.4: our heuristic is still considerably faster. Although the value profiles move closer together (a consequence of many of the hard instances being removed, as not all heuristics can solve them within the time limit), their overall ordering stays essentially the same. Taken together, this additional analysis supports the claim that the conclusions in Section 5.4 are independent of the specific time limit.

B.2 The instances used

Table 5: Set of test problems used. For each instance the table reports the rows, columns, and entries of the original and of the presolved instance, the LP solution, the root LP bound after cuts, the best IP solution found, and the best IP bound within 24 hours. Those shown in bold are part of the tuning set, too. The upper index at the problem names indicates the source of the problem, where 1 is de Vries (2004), 2 is MIPLIB (2003), 3 is Linderoth (2007), 4 is Mittelmann (2006a), 5 is Bastert (2003), 6 is Linderoth (2001), 7 is Mittelmann (2006b), 8 is Danna et al. (2005), 9 is Borndörfer (1998). The last four columns were calculated using Xpress.

B.3 Computational results

Table 6: Results for the set of instances. For each instance the table reports the solutions found and the solution times of the five heuristics (Wedelin fast, Wedelin good, feasibility pump, XP first solution, XP heuristics), together with the %simple column, the LP solution and root LP solution, and the best IP solution and best IP bound. The best solution is shown in bold face. The %simple column gives the relative number of simple constraints in the presolved instance. The last two columns provide the same values as the last four columns of Table 5 for easier comparison.

C. More results on discrete tomography

Figure 3 and Table 7 provide results for discrete reconstruction problems with 4 directions, which were omitted from Section 5.6 of the main paper.

Figure 3: Solution times (in seconds) over the grid size for Wedelin's algorithm (settings are the same as in the main paper, but with pushing disabled and κ_min = 0.3, κ_step = 10^{-4}, θ = 0.5, and α = 1) for matrix reconstruction with 4 directions, (a) in linear scale with outliers emphasized and (b) in log-log scale. The algorithm turned out to always find an exact solution!

Table 7: Running times for discrete reconstruction problems with 4 directions, by matrix size, for the exact LP solvers (Xpress LP solver, Xpress barrier solver, Volume algorithm) and the IP solvers (Xpress MIP solver, Wedelin algorithm). For more details see Table 5 in the main paper. Again, Wedelin's algorithm turned out to solve all instances exactly!

References

Alefragis, Panayiotis, Peter Sanders, Tuomo Takkula, Dag Wedelin. 2000. Parallel integer optimization for crew scheduling. Ann. Oper. Res.
Bastert, Oliver. 2003. Proprietary instances.
Borndörfer, Ralf. 1998. Aspects of Set Packing, Partitioning, and Covering. Shaker Verlag, Aachen. Ph.D. thesis, Technische Universität Berlin; problem instances at URL.
Danna, Emilie, Edward Rothberg, Claude Le Pape. 2005. Exploring relaxation induced neighborhoods to improve MIP solutions. Math. Program. Ser. A.

Davey, Bruce. 1995. Cost modification heuristics for set partitioning problems and air crew scheduling. Bachelor's (hon.) thesis, Department of Mathematics, University of Melbourne, Australia.
de Vries, Sven. 2004. Problem instances from spectrum auctions. URL. See Günlük, Ladányi, and de Vries (2005).
Fischetti, Matteo, Andrea Lodi. 2003. Local branching. Math. Program. Ser. B.
Günlük, Oktay, László Ladányi, Sven de Vries. 2005. Branch-and-price and new test problems for spectrum auctions. Manag. Science.
Linderoth, Jeff T. 2001. Problem instances from inventory and vehicle routing. URL. See Linderoth, Lee, and Savelsbergh (2001).
Linderoth, Jeff T. 2007. Mixed integer programming instances. URL.
Linderoth, Jeff T., Eva K. Lee, Martin W. P. Savelsbergh. 2001. A parallel, linear programming based heuristic for large scale set partitioning problems. INFORMS J. Comput.
Mason, Andrew J. 2001. Elastic constraint branching, the Wedelin/Carmen Lagrangian heuristic and integer programming for personnel scheduling. Ann. Oper. Res.
MIPLIB. 2003. Mixed Integer Problem LIBrary. URL.
Mittelmann, Hans. 2006a. Instances from the University of Bologna. URL.
Mittelmann, Hans. 2006b. Various mixed-integer LP problems. URL.
Wedelin, Dag. 1995. An algorithm for large scale 0-1 integer programming with application to airline crew scheduling. Ann. Oper. Res.


CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 36 CS 473: Algorithms Ruta Mehta University of Illinois, Urbana-Champaign Spring 2018 Ruta (UIUC) CS473 1 Spring 2018 1 / 36 CS 473: Algorithms, Spring 2018 LP Duality Lecture 20 April 3, 2018 Some of the

More information

TMA946/MAN280 APPLIED OPTIMIZATION. Exam instructions

TMA946/MAN280 APPLIED OPTIMIZATION. Exam instructions Chalmers/GU Mathematics EXAM TMA946/MAN280 APPLIED OPTIMIZATION Date: 03 05 28 Time: House V, morning Aids: Text memory-less calculator Number of questions: 7; passed on one question requires 2 points

More information

PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING. 1. Introduction

PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING. 1. Introduction PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING KELLER VANDEBOGERT AND CHARLES LANNING 1. Introduction Interior point methods are, put simply, a technique of optimization where, given a problem

More information

RENS. The optimal rounding. Timo Berthold

RENS. The optimal rounding. Timo Berthold Math. Prog. Comp. (2014) 6:33 54 DOI 10.1007/s12532-013-0060-9 FULL LENGTH PAPER RENS The optimal rounding Timo Berthold Received: 25 April 2012 / Accepted: 2 October 2013 / Published online: 1 November

More information

Modelling of LP-problems (2WO09)

Modelling of LP-problems (2WO09) Modelling of LP-problems (2WO09) assignor: Judith Keijsper room: HG 9.31 email: J.C.M.Keijsper@tue.nl course info : http://www.win.tue.nl/ jkeijspe Technische Universiteit Eindhoven meeting 1 J.Keijsper

More information

Part 4. Decomposition Algorithms Dantzig-Wolf Decomposition Algorithm

Part 4. Decomposition Algorithms Dantzig-Wolf Decomposition Algorithm In the name of God Part 4. 4.1. Dantzig-Wolf Decomposition Algorithm Spring 2010 Instructor: Dr. Masoud Yaghini Introduction Introduction Real world linear programs having thousands of rows and columns.

More information

Chapter II. Linear Programming

Chapter II. Linear Programming 1 Chapter II Linear Programming 1. Introduction 2. Simplex Method 3. Duality Theory 4. Optimality Conditions 5. Applications (QP & SLP) 6. Sensitivity Analysis 7. Interior Point Methods 1 INTRODUCTION

More information

On the Global Solution of Linear Programs with Linear Complementarity Constraints

On the Global Solution of Linear Programs with Linear Complementarity Constraints On the Global Solution of Linear Programs with Linear Complementarity Constraints J. E. Mitchell 1 J. Hu 1 J.-S. Pang 2 K. P. Bennett 1 G. Kunapuli 1 1 Department of Mathematical Sciences RPI, Troy, NY

More information

On the selection of Benders cuts

On the selection of Benders cuts Mathematical Programming manuscript No. (will be inserted by the editor) On the selection of Benders cuts Matteo Fischetti Domenico Salvagnin Arrigo Zanette Received: date / Revised 23 February 2010 /Accepted:

More information

Addressing degeneracy in the dual simplex algorithm using a decompositon approach

Addressing degeneracy in the dual simplex algorithm using a decompositon approach Addressing degeneracy in the dual simplex algorithm using a decompositon approach Ambros Gleixner, Stephen J Maher, Matthias Miltenberger Zuse Institute Berlin Berlin, Germany 16th July 2015 @sj_maher

More information