CP-based Local Branching


Zeynep Kiziltan¹, Andrea Lodi², Michela Milano², and Fabio Parisini²

¹ Department of Computer Science, University of Bologna, Italy. zeynep@cs.unibo.it
² D.E.I.S., University of Bologna, Italy. {alodi,mmilano}@deis.unibo.it, parisini@quaqua.net

Abstract. We propose the integration and extension of the local branching search strategy in Constraint Programming (CP). Local branching is a general-purpose heuristic method which searches locally around the best known solution by employing tree search. It has been used successfully in MIP, where local branching constraints model the neighbourhood of an incumbent solution and improve the bound. The integration of local branching in CP is not simply a matter of implementation, but requires a number of significant extensions (concerning the computation of the bound, cost-based filtering of the branching constraints, diversification, variable neighbourhood width and search heuristics) and can greatly benefit from the CP environment. In this paper, we discuss how such extensions are possible and provide experimental results to demonstrate the practical value of local branching in CP.

1 Introduction

Local branching is a successful search strategy in Mixed-Integer Programming (MIP) [7]. It is a general-purpose heuristic method which searches locally around the best known solution by employing tree search. Local branching has similarities with limited discrepancy search [5] and with local search. The neighbourhoods are obtained by linear inequalities in the MIP model, so that MIP searches for the optimal solution within a given Hamming distance of the incumbent solution. The linear constraints representing the neighbourhood of incumbent solutions are called local branching constraints and are involved in the computation of the problem bound. Local branching is thus a general framework for effectively exploring suitable solution subspaces, making use of state-of-the-art MIP solvers.
Even though the framework aims to improve the heuristic behaviour of MIP solvers, it is a complete method which can be integrated into any tree-based search. In this paper, we propose the integration and extension of the local branching framework in Constraint Programming (CP). The main motivation is to combine the power of constraint propagation and of the problem bound with a local search method which is potentially complete and applicable to any optimisation problem. We expect the following advantages over a pure CP-based or local search for solving difficult optimisation problems: (1) to obtain better solutions within a time limit; (2) to improve solution quality more quickly; (3) to prove optimality; (4) to prove optimality more quickly.

Integrating local branching in CP is not simply a matter of implementation. It requires a number of substantial modifications to the original search strategy, as we discuss in the paper. First, using a linear programming solver for computing the bound of each neighbourhood is not computationally affordable in CP. We have therefore studied a lighter way to compute the bound of the neighbourhood which is efficient, effective and incremental, using the additive bounding technique. Second, we have developed a cost-based filtering algorithm for the local branching constraint by extracting valid reduced-costs out of the additive bounding procedure. Third, we have studied several techniques to diversify the search and analysed different ways to enlarge the neighbourhood when the neighbourhood exploration is no longer effective. Finally, we have designed general search heuristics applicable in the context of local branching. This last point will not be discussed in detail in the paper for reasons of space. Our experimental results demonstrate the practical value of integrating local branching in CP.

2 Formal Background

A Constraint Satisfaction Problem (CSP) has a set of variables X = [X_1, ..., X_n], each with a finite domain D(X_i) of values, and a set of constraints specifying allowed combinations of values for subsets of variables. A solution X̄ = [X̄_1, ..., X̄_n] is an assignment of X satisfying the constraints. Constraint Programming (CP) typically solves a CSP by exploring the space of partial assignments using a backtracking tree search, enforcing a local consistency property using either specialised or general-purpose propagation algorithms. A CSP becomes a constraint optimisation problem when we are interested in the optimal solution. Such a problem has a cost variable C defining the cost of the solution. In the rest of the paper, we focus on minimisation problems with C = Σ_{i ∈ {1,...,n}} C(i, X_i), where C(i, v) is the cost of assigning v to X_i (the cost matrix).
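As a small illustration of this cost structure, the following sketch evaluates C for a full assignment against a cost matrix (the matrix and values are invented purely for illustration):

```python
# Toy cost-matrix view of a minimisation COP: C(i, v) is the cost of
# assigning value v to X_i; a solution's cost is the sum of its entries.
C = [
    [4, 2, 8],   # costs of X_1 = 1, 2, 3
    [4, 3, 7],   # costs of X_2 = 1, 2, 3
    [3, 1, 6],   # costs of X_3 = 1, 2, 3
]

def solution_cost(C, x):
    """C = sum_i C(i, X_i) for a full assignment x (values are 1-based)."""
    return sum(C[i][v - 1] for i, v in enumerate(x))

print(solution_cost(C, [2, 1, 2]))  # X = [2, 1, 2] -> 2 + 4 + 1 = 7
```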
In optimisation problems, even if a value in a domain can participate in a feasible solution, it can be pruned if it cannot be part of a better solution (cost-based filtering [9]). CP makes use of branch-and-bound search for finding the optimal solution; hence computing tight lower bounds for C is essential to speed up search. We can exploit the additive bounding procedure [8] for computing tight lower bounds. This technique works with reduced-costs, which are computed as a by-product of the solution of a linear program. A reduced-cost is associated with a variable-value assignment and represents the additional cost to pay, with respect to the optimal solution of the relaxation, when that assignment becomes part of the solution.

3 Local Branching Search Strategy

3.1 Local Branching in MIP

Local branching is a successful search method proposed for MIP [7]. The idea is that, given a reference solution to an optimisation problem, its neighbourhood is searched with the hope of improving the solution quality, using tree search. After

the optimal solution in the neighbourhood is found, the incumbent solution is updated to this new solution and the search continues from this better solution. The basic machinery of the framework is depicted in Figure 1.

[Fig. 1. The basic local branching framework: a binary tree whose left branches post Δ(X, X̄_i) ≤ k and whose right branches post Δ(X, X̄_i) ≥ k + 1; the subtrees T yield improved solutions X̄_2, X̄_3, ..., until a subtree contains no improved solution.]

Assume we have a tree search method for solving an optimisation problem P whose constrained variables are X. For a given positive integer parameter k, the k-opt neighbourhood N(X̄, k) is the set of feasible solutions of P satisfying the additional local branching constraint Δ(X, X̄) ≤ k. This constraint defines the neighbourhood of a reference solution X̄ as the space of assignments which differ from X̄ in at most k values. The neighbourhood is thus measured by the Hamming distance w.r.t. X̄ and is used as a branching criterion within the solving method of P. Given X̄, the solution space associated with the current branching node is partitioned by means of the disjunction Δ(X, X̄) ≤ k ∨ Δ(X, X̄) ≥ k + 1. In this way, the whole search space is divided into two, and exploring each part exhaustively guarantees completeness. The neighbourhood structure is general-purpose in the sense that it is independent of the problem being solved. In Figure 1, each node of the resulting search tree is labelled with a number. The triangles marked by the letter T correspond to the branching subtrees to be explored. Each subtree is generated using the conjunction of the branching constraints shown on the branches of the tree. Assume we have a starting incumbent solution X̄_1 at the root node 1. The left branch node 2 is created via the branching constraint Δ(X, X̄_1) ≤ k. It corresponds to the optimisation within the k-opt neighbourhood N(X̄_1, k), which yields the optimal solution X̄_2.
This solution now becomes the new incumbent solution. The scheme is then re-applied to the right branch node 3. Note that, since we have completely explored N(X̄_1, k), we create node 3 after reversing the initial branching constraint to Δ(X, X̄_1) ≥ k + 1. Now we create the left branch node 4 via Δ(X, X̄_2) ≤ k, which corresponds to the optimisation within N(X̄_2, k) \ N(X̄_1, k). The new optimisation gives the new incumbent solution X̄_3. Node 6 is then addressed, which corresponds to the initial problem P amended by the three constraints Δ(X, X̄_1) ≥ k + 1, Δ(X, X̄_2) ≥ k + 1 and Δ(X, X̄_3) ≤ k. In the example, node 6 produces a subproblem which contains no improving solution. In this case, we can stop so as to behave in a hill-climbing fashion. Alternatively, we can add Δ(X, X̄_3) ≥ k + 1 and generate the right branch node 7, which is then explored using the same search method as the subtrees. This makes local branching a complete search strategy. In [7], branch-and-bound MIP search is used as the tree search method, and thus to search the subtrees. The neighbourhoods are obtained through linear inequalities in the MIP model. The use of these linear constraints in the MIP model clearly provides a tighter bound w.r.t. the one of the original problem.

3.2 CP-based Local Branching

The local branching framework is not specific to MIP; it can be integrated into any tree search. We therefore suggest the use of general-purpose neighbourhood structures within a tree search with constraint propagation. As in MIP, neighbourhoods can be constructed via additional constraints in the original model, and such constraints can be used both to tighten the problem bound and for filtering purposes. Since the result is again a constraint problem, CP can directly be used to find the initial feasible solution and to search the neighbourhood, exploiting the power of constraint propagation. Thus, complex constraints can be handled and infeasible solutions in the neighbourhoods can be detected effectively by constraint propagation, which are difficult tasks for a pure local search algorithm.
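To fix ideas, the hill-climbing variant of the loop in Figure 1 can be sketched in a few lines of self-contained Python. This is our own sketch, not the authors' implementation: brute-force enumeration of a tiny explicit search space stands in for the CP (or MIP) tree search over each subtree, and all names are ours.

```python
from itertools import product

def hamming(x, y):
    """Hamming distance: number of positions where x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def local_branching(initial, cost, domains, k, rounds=20):
    """Hill-climbing local branching over an explicit finite search space.

    Left branches explore Delta(x, incumbent) <= k; each exhausted
    neighbourhood is "reversed" by forbidding Delta(x, z) <= k for its
    reference solution z, exactly as the right-branch constraints do.
    """
    incumbent = initial
    forbidden = []  # reference solutions whose k-opt balls are exhausted
    for _ in range(rounds):
        best = None
        for x in product(*domains):
            if hamming(x, incumbent) > k:                    # left branch
                continue
            if any(hamming(x, z) <= k for z in forbidden):   # reversed branches
                continue
            if cost(x) < cost(incumbent) and (best is None or cost(x) < cost(best)):
                best = x
        if best is None:      # node 6: no improving solution, stop (hill climbing)
            break
        forbidden.append(incumbent)   # reverse Delta(X, incumbent) >= k+1
        incumbent = best
    return incumbent

# Toy problem: minimise the sum of 4 variables, each over {0, 1, 2}.
sol = local_branching((2, 2, 2, 2), sum, [range(3)] * 4, k=2)
print(sol)  # -> (0, 0, 0, 0)
```

Note that the hill-climbing variant may stop at a local optimum in general; completeness requires exploring the final right branch as well, as discussed above.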
Moreover, solution quality improves quickly thanks to local search around the best solution found, which is missing in the classical CP framework. Furthermore, the use of tree search for performing local search guarantees completeness, unlike classical local search algorithms. Consequently, we expect the following advantages over a pure CP-based or local search: (1) to obtain better solutions within a time limit; (2) to improve solution quality more quickly; (3) to prove optimality; (4) to prove optimality more quickly. Finally, the technique is general and applicable to any optimisation problem modelled with constraints, in contrast to many local search techniques designed only for specific problems. We stress that we do not propose a mere implementation of local branching in CP. We significantly change and extend the original framework so as to accommodate it in the CP machinery smoothly and successfully. The following summarises our contributions.

Bound of the neighbourhood. Using a linear programming solver for computing the bound of each neighbourhood is not computationally affordable in CP. We have therefore studied a lighter way to compute the bound of the neighbourhood which is efficient, effective and incremental.

Cost-based filtering. Cost-based filtering can be applied when reduced-costs are available. Its use in CP is more aggressive than in MIP, where it is mainly applied during preprocessing (variable fixing). We show how we can extract valid reduced-costs from the additive bounding procedure.

Restarting. In [7], diversification techniques are applied when, for instance, the subtree rooted at the left node is proved to contain no improving solutions (node 6 in Figure 1), instead of simply reversing the last local branching constraint and exploring the rest of the tree (node 7) using the same search method as the subtrees. We propose a new diversification scheme which finds a new feasible solution at node 7 by keeping all the reversed branching constraints, and restarts local branching from the new incumbent solution using the regular local branching framework.

Discrepancy. The parameter k is essential to the success of the method, as it defines the size of the neighbourhood. We propose two uses of k: (1) the traditional one, with k being fixed for each neighbourhood; (2) a variable one, with k ranging between k_1 and k_2. In this way, the neighbourhood defined by k_1 is searched first. While no improving solution is found, the neighbourhood is enlarged until the k_2-discrepancy subtree is completely explored. In addition, the neighbourhoods are visited in slices: instead of posting the constraint Δ(X, X̄) ≤ k, we impose k constraints of the form Δ(X, X̄) = j with j = 1, ..., k. We show why this is important for the computation of the bound.

Heuristics. As search proceeds in local branching, we gather a set of incumbent solutions X̄_1, ..., X̄_m which are feasible and which get closer to the optimal solution of the problem. We can exploit these incumbent solutions as a guide to search the subtrees. Being independent of the specific problem solved, such heuristics provide a general method suitable for this search strategy.
We have developed a number of useful heuristics; however, we will not discuss them here for reasons of space.

4 Additive Bounding for Local Branching

To integrate local branching in CP, we need a local branching constraint Δ(X, X̄) ≤ k, where X̄ is an assignment of X representing the current reference solution and k is an integer value. The constraint holds iff the sought assignment of X differs from X̄ in at most k values, i.e., it enforces that the Hamming distance between X and X̄ is at most k. The CP model we consider is:

  min Σ_{i ∈ N} C_{i,X_i}  (1)
  subject to any_side_cst(X)  (2)
  Δ(X, X̄) ≤ k  (3)
  X_i ∈ D(X_i), ∀ i ∈ N  (4)

where C_{i,X_i} is the cost of assigning variable X_i and any_side_cst(X) is any set of constraints on the domain variables X. The purpose of this section is to exploit the delta constraint (3), which implicitly represents the structure of the explored

tree, to tighten the problem bound and to perform cost-based filtering [9]. To this purpose, we have developed a local branching constraint lb_cst(X, X̄, k, C) which combines the Δ constraint and the cost function C = Σ_{i ∈ N} C_{i,X_i}. Using this constraint alone provides very poor problem bounds, and thus poor filtering. If instead we recognise in the problem a combinatorial relaxation Rel which can be solved in polynomial time and provides a bound LB_Rel and a set of reduced-costs c̄, we can feed the local branching constraint with c̄ and obtain an improved bound in an additive bounding fashion. Additive bounding is an effective procedure for computing bounds for optimisation problems [8]. It consists in solving a sequence of relaxations of P, each producing an improved bound. Assume we have a set of bounding procedures B_1, ..., B_m. We write B_i(c) for the i-th bounding procedure when applied to an instance of P with cost matrix c. Each B_i returns a lower bound LB_i and a reduced-cost matrix c̄. This cost matrix is used by the next bounding procedure B_{i+1}(c̄). The sum of the bounds Σ_{i ∈ {1,...,m}} LB_i is a valid lower bound for P. An example of the relaxation Rel is the Assignment Problem, if the side constraints contain an alldifferent [9]. Another example is a Network Flow Problem, if the side constraints contain a global cardinality constraint. Clearly, a tighter bound can always be found by feeding an LP solver with the linear relaxation of the whole problem, including the local branching constraints, as done in [7]. We have experimentally noticed that this relaxation is too computationally expensive to be used in a CP setting. To explain how additive bounding can improve the bound obtained by the local branching constraints, we use the usual mapping between CP and 0-1 Integer Programming (IP) models. Note that this model is useful to understand the structure of the problems we are considering, but it is not solved by an IP solver.
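The additive bounding scheme just described can be sketched in a few lines. In this sketch (ours, not the paper's algorithm), two toy bounding procedures, row reduction and column reduction as used in Hungarian-style Assignment Problem reductions, stand in for Rel and the local branching relaxation; the cost matrix is invented:

```python
def additive_bound(cost_matrix, procedures):
    """Additive bounding: each procedure B_i takes a cost matrix and
    returns (LB_i, residual reduced-cost matrix); the next procedure is
    run on the residual costs, and the bounds add up to a valid LB."""
    total, c = 0, cost_matrix
    for B in procedures:
        lb, c = B(c)
        total += lb
    return total

def row_reduce(c):
    """Subtract row minima; their sum is a valid LB, residuals are >= 0."""
    mins = [min(row) for row in c]
    return sum(mins), [[v - m for v in row] for row, m in zip(c, mins)]

def col_reduce(c):
    """Subtract column minima from the residual matrix."""
    mins = [min(col) for col in zip(*c)]
    return sum(mins), [[v - m for v, m in zip(row, mins)] for row in c]

c = [[4, 2, 8], [4, 3, 7], [3, 1, 6]]   # invented cost matrix
print(additive_bound(c, [row_reduce, col_reduce]))  # -> 11
```

Here the optimal assignment of `c` costs 12, so the additive bound 11 is indeed a valid lower bound; each procedure works on the reduced-costs left behind by the previous one, exactly as in the chain B_1, ..., B_m above.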
We devise a special-purpose linear time algorithm, as we explain next. We define a directed graph G = (N, A) where arc (i, j) ∈ A iff j ∈ D(X_i), with D(X_i) ⊆ N and N = {1, ..., n}. The corresponding IP model contains binary variables x_ij (with x_ij = 1 iff X_i = j), and x̄ is the IP counterpart of the reference solution X̄. The additive bounding procedure uses two relaxations, corresponding to Rel and to Loc_Branch, which considers the lb_cst(X, X̄, k, C). The solution of Rel produces the optimal solution X*, with value LB_Rel and a reduced-cost matrix c̄. We can feed the second relaxation Loc_Branch with the reduced-cost matrix c̄ from Rel and we obtain:

  LB_Loc_Branch = min Σ_{i ∈ N} Σ_{j ∈ N} c̄_ij x_ij  (5)
  s.t. Σ_{(i,j) ∈ S} (1 − x_ij) ≤ k, where S = {(i, j) | x̄_ij = 1}  (6)
  x_ij ∈ {0, 1}, (i, j) ∈ A  (7)

To obtain a tighter bound, we divide this problem into k subproblems, one for each discrepancy d from 1 to k, transforming constraint (6) into Σ_{(i,j) ∈ S} (1 − x_ij) = d with d = 1, ..., k. The optimal solution of each of these subproblems can be computed as follows. We start from X̄ = [X̄_1, ..., X̄_n] and then extract and sort in non-decreasing order the corresponding reduced-costs

c_sorted = sort(c̄_{1,X̄_1}, ..., c̄_{n,X̄_n}). The new bound LB_Loc_Branch is the sum of the n − d smallest reduced-costs from c_sorted. Overall, a valid bound for the problem is LB = LB_Loc_Branch + LB_Rel. The use of additive bounding here is different from its use in [11], since we are starting from a reference solution X̄ which is not the optimal solution X* computed by the first relaxation. As an example, consider [X_1, ..., X_5], each with the domain {1, 2, 3, 4, 5}. Assume the AP solution is [1, 2, 3, 4, 5], with optimal value LB_1 and with a reduced-cost matrix c̄ in which c̄_ii = 0 (corresponding to the optimal solution) and c̄_ij = j for all j ≠ i, for all i ∈ {1, 2, 3, 4, 5}. Consider a reference assignment X̄ = [2, 5, 4, 1, 3] and d = 3, which means the new solution must differ from X̄ in exactly 3 assignments. Changing 3 variables means keeping 2 variable assignments the same as in X̄. We first extract the reduced-costs of the values in this assignment, which are [2, 5, 4, 1, 3]. These values tell us how much extra cost we would pay if we kept X_i assigned to X̄_i. Then we sort this array: c_sorted = [1, 2, 3, 4, 5]. Since we need to keep 2 variable assignments of X̄ the same, we choose those that give the minimum increase in cost. So we take the first 2 reduced-costs in c_sorted and add their sum (3) to LB_1 to obtain a valid lower bound. Note that the remaining 3 variables can drop to 0 reduced-cost; hence the changed variables do not contribute to the computation of the bound.

4.1 Cost-based Filtering

After computing the lower bound LB in an additive way as LB = LB_1 + LB_2, we need an associated reduced-cost matrix so as to apply cost-based filtering [9].
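The d-discrepancy bound of Section 4 (sort the reduced-costs of the reference assignment, sum the n − d smallest) can be reproduced against the worked example above in a few lines (a sketch with our own names, 0-based indices):

```python
def local_branching_bound(cbar, xbar, d):
    """Bound for the slice Delta(X, Xbar) = d.

    cbar: reduced-cost matrix from the first relaxation (Rel);
    xbar: reference solution (value assigned to each variable, 0-based);
    d:    number of variables forced to change w.r.t. xbar.
    Keeping n-d assignments of xbar costs at least the sum of the n-d
    smallest reduced-costs cbar[i][xbar[i]]; the d changed variables may
    drop to 0 reduced-cost, so they contribute nothing.
    """
    rc = sorted(cbar[i][v] for i, v in enumerate(xbar))
    n = len(xbar)
    return sum(rc[:n - d])

# The worked example, 0-based: cbar[i][j] = j+1 for j != i, 0 otherwise;
# reference solution Xbar = [2, 5, 4, 1, 3] becomes [1, 4, 3, 0, 2].
cbar = [[0 if i == j else j + 1 for j in range(5)] for i in range(5)]
xbar = [1, 4, 3, 0, 2]
print(local_branching_bound(cbar, xbar, d=3))  # -> 3, as in the text
```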
To do this, we have to compute the reduced-cost matrix of the problem:

  LB_2 = min Σ_{(i,j) ∈ S} c̄_ij x_ij  (8)
  s.t. Σ_{(i,j) ∈ S} x_ij = n − d  (9)
  x_ij ∈ [0, 1], (i, j) ∈ S  (10)

This problem is a simpler version of problem (5)-(7)³, obtained by removing the variables x_ij, (i, j) ∉ S, and relaxing the integrality requirement on the remaining variables. It is easy to see that the two problems are indeed equivalent. First, the integrality conditions are redundant, as shown by the algorithm presented in Section 4 to solve the problem. Second, since the variables x_ij, (i, j) ∉ S, do not contribute to satisfying constraint (9), they can be dropped to 0 because their reduced-costs are non-negative. The dual of this linear program is as follows:

  LB_2 = max (n − d)π_0 + Σ_{(i,j) ∈ S} π_ij  (11)
  s.t. π_0 + π_ij ≤ c̄_ij, (i, j) ∈ S  (12)
  π_ij ≤ 0, (i, j) ∈ S  (13)

³ More precisely, it is a simpler version of its d-th subproblem, in which constraint (6) is written in equality form.

Let us partition the set S into S_min, containing the n − d pairs (i, j) with the smallest reduced-costs, and S_max, containing the remaining d pairs. We also define c̄_max = max_{(i,j) ∈ S_min} c̄_ij. Then, it is not difficult to prove the following result.

Theorem 1. The dual solution
  π_0 = c̄_max,
  π_ij = 0, (i, j) ∈ S_max,
  π_ij = c̄_ij − π_0, (i, j) ∈ S_min
is optimal for system (11)-(13).

Proof. The above solution is trivially feasible: for (i, j) ∈ S_max, all reduced-costs are by definition greater than or equal to the largest one in S_min, and for (i, j) ∈ S_min we have an identity. Thus, both (12) and (13) are satisfied. Moreover, the solution is also optimal. Indeed, the cardinality of S_min is n − d and the only dual variables contributing to the objective function are the ones in S_min (plus, of course, π_0); thus the first term cancels out and the value becomes Σ_{(i,j) ∈ S_min} c̄_ij. But this value is also the one of system (8)-(10), because the variables in S_min are the ones set to 1.

We can now construct the reduced-cost matrix ĉ associated with the optimal solution of system (8)-(10):
  ĉ_ij = 0, (i, j) ∈ S_min,
  ĉ_ij = c̄_ij − c̄_max, (i, j) ∈ S_max.
Finally, the reduced-costs of the variables x_ij, (i, j) ∉ S, do not change, i.e., ĉ_ij = c̄_ij.

4.2 Incremental Computation of the Bound

While exploring the subtrees, decisions are taken on variable instantiation; as a consequence, constraints are propagated and some domains are pruned. We can therefore update the problem lower bound by taking into account the most recent state of the domains. The challenge is to do this incrementally, without incurring much extra overhead. The first bound LB_1 can be updated in the traditional way: each time a value belonging to the optimal AP solution is removed, the AP is recomputed with an O(n²) algorithm (which consists of a single augmenting path) [9]. Using the new AP solution, we can update the second bound LB_2 in a simple way by using some special data structures.
Consider the sets S_min and S_max defined previously. More precisely, we consider a list S_max, initially containing the d ordered pairs (i, j) corresponding to the variable-value assignments with the d greatest reduced-costs from c_sorted, and a list S_min, containing the n − d ordered pairs (i, j) corresponding to the n − d smallest reduced-costs from c_sorted. Whilst S_min contains the assignments (variable index, value) that should remain the same w.r.t. X̄, since they have the smallest reduced-costs, S_max contains the assignments that should change w.r.t. X̄ and that conceptually assume a value corresponding to a 0 reduced-cost. Note that initially there are n pairs in S_min ∪ S_max, whose first indices range from 1 to n.
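The initialisation of these data structures can be sketched as follows (the function and variable names are ours; the reduced-cost matrix and reference solution are those of the example in Section 4, 0-based):

```python
def init_lb2(cbar, xbar, d):
    """Build S_min / S_max and the initial LB_2 (a sketch).

    Pairs are kept as (reduced_cost, variable, value) tuples sorted by
    cost: S_min holds the n-d cheapest assignments of the reference
    solution xbar (to be kept), S_max the d most expensive (to change).
    """
    pairs = sorted((cbar[i][v], i, v) for i, v in enumerate(xbar))
    n = len(xbar)
    s_min, s_max = pairs[:n - d], pairs[n - d:]
    lb2 = sum(c for c, _, _ in s_min)   # only S_min contributes to LB_2
    return lb2, s_min, s_max

# cbar[i][j] = j+1 for j != i, 0 otherwise; reference solution 0-based.
cbar = [[0 if i == j else j + 1 for j in range(5)] for i in range(5)]
lb2, s_min, s_max = init_lb2(cbar, [1, 4, 3, 0, 2], d=3)
print(lb2, len(s_min), len(s_max))  # -> 3 2 3
```

Keeping both lists sorted by reduced-cost makes the incremental updates described next cheap: each case only moves an extremal element between the two lists.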

Assignment of j to X_i. We distinguish four cases:

- (i, j) ∈ S_max: a variable that was supposed to change w.r.t. X̄ is instead assigned its value in X̄. We should update S_min and S_max, keeping them ordered, as well as LB_2: 1) remove (i, j) from S_max; 2) remove (h, k) = argmax_{(m,n) ∈ S_min} c̄_mn from S_min and add it, ordered, to S_max; 3) LB_2 = LB_2 + c̄_ij − c̄_hk.
- (i, k) ∈ S_max with k ≠ j: a variable that was supposed to change w.r.t. X̄ indeed changes. In the bound, this variable was assumed to take a value with 0 reduced-cost, while now it takes the value j, whose reduced-cost c̄_ij may or may not be 0. We should update LB_2: LB_2 = LB_2 + c̄_ij.
- (i, j) ∈ S_min: no changes are necessary, because a variable that was supposed to remain the same w.r.t. X̄ remains the same.
- (i, k) ∈ S_min with k ≠ j: a variable that was supposed to remain the same w.r.t. X̄ instead changes. We should update S_min and S_max as well as LB_2: 1) remove (h, k') = argmin_{(m,n) ∈ S_max} c̄_mn from S_max and insert it in the last position of S_min; 2) remove (i, k) from S_min; 3) LB_2 = LB_2 − c̄_ik + c̄_ij + c̄_hk'.

Removal of j from D(X_i). The only important case is when (i, j) ∈ S_min, where a variable that was supposed to remain the same w.r.t. X̄, and that contributed to the computation of the bound, is now forced to change. We assume X_i changes to the value in its domain with the smallest reduced-cost (possibly 0), and then update S_min and S_max as well as LB_2: 1) remove (h, k) = argmin_{(m,n) ∈ S_max} c̄_mn from S_max and insert it in the last position of S_min; 2) remove (i, j) from S_min; 3) LB_2 = LB_2 + c̄_hk − c̄_ij. That is, the number of variables which must change is decreased from d to d − 1.

5 Diversification and Intensification

The heuristic behaviour of local branching can be enhanced by exploiting the well-known diversification techniques of metaheuristics. In [7], diversification is applied when, for instance, the current left node is proved to contain no improving solutions. This arises at node 6 in Figure 1.
Instead of simply reversing the last local branching constraint and switching to the exploration of node 7 using CP, we propose to restart local branching as follows. At node 7, we keep all the branching constraints Δ(X, X̄_1) ≥ k + 1, Δ(X, X̄_2) ≥ k + 1 and Δ(X, X̄_3) ≥ k + 1 that we would have in the regular framework. We then find a new feasible solution X̄'_1, which is enforced to be better than X̄_3, and restart local branching from the new incumbent solution X̄'_1 using the regular framework. As X̄'_1 has a better cost than X̄_1 and the search space is now reduced by the three reversed branching constraints, we expect local branching to be closer to finding the optimal solution. Restarting is a good idea if switching to CP at node 7 would not give us any of the benefits of local branching; that is, it might be too early to abandon local branching if the optimal solution is still difficult to find. But the reverse can also happen: restarting is only a waste of time if CP is in a position to prove optimality quickly. We therefore propose conditional restarting. We could, for instance, impose a search limit of 1000 fails on node 7 as a measure of the hardness of problem solving. If within this limit optimality is not proven, then we switch to the restarting scheme explained above. This balances the benefits and the effort of applying local branching. Alternatively, we could diversify the search in the spirit of variable neighbourhood search [14], by restarting local branching without any upper-bound constraint. The new feasible solution X̄'_1 could then have a worse cost than the incumbent solution X̄_3 (and even worse than X̄_1). The expected gain is that X̄'_1 could be closer to the optimal solution in terms of Hamming distance. Another approach to enhancing the heuristic behaviour is to intensify search by enlarging the neighbourhood. Again consider node 6 in Figure 1. After it is proved to contain no improving solutions, we could search the neighbourhood N(X̄_3, k + 1) by replacing the branching constraint Δ(X, X̄_3) ≤ k with Δ(X, X̄_3) = k + 1. If an improving solution X̄_4 is now found, we continue local branching as before by reversing the current branching constraint to Δ(X, X̄_3) ≥ k + 2. Otherwise, we can try enlarging the neighbourhood further, until some N(X̄_3, k + p) is completely explored. The diversification and intensification methods can be hybridised. For instance, we could interleave enlarging the neighbourhood with restarting at node 6 as follows. First, enlarge the neighbourhood by replacing Δ(X, X̄_3) ≤ k with Δ(X, X̄_3) = k + 1. If this new neighbourhood also contains no improving solutions, then reverse the branching constraint to Δ(X, X̄_3) ≥ k + 2, find a new feasible solution X̄'_1 (either with or without the upper-bound constraint) and restart local branching from this new incumbent solution. Note that these diversification and intensification techniques are not considered in [7].
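The hybrid scheme of this section can be summarised in a small control-flow sketch (ours; `search_slice` and `restart` are hypothetical callbacks into the underlying solver):

```python
def intensify_then_restart(search_slice, k, p, restart):
    """After N(xbar, k) is proved to contain no improving solution, try
    the enlarged slices Delta = k+1, ..., k+p in turn; if none of them
    improves either, fall back to restarting local branching from a
    fresh feasible solution (conditional restarting)."""
    for d in range(k + 1, k + p + 1):
        improving = search_slice(d)     # explore {x : Delta(x, xbar) = d}
        if improving is not None:
            return improving            # continue local branching from here
    return restart()                    # diversification: restart

# Stub callbacks: no improving solution at Delta = 4, one at Delta = 5.
found = intensify_then_restart(lambda d: "x4" if d == 5 else None,
                               k=3, p=2, restart=lambda: "restarted")
print(found)  # -> x4
```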
6 Experimental Results

In this section, we provide experimental results to support the expected benefits of local branching in CP, as stated in Section 3.2. Experiments are conducted using ILOG Solver 6.0 and ILOG Scheduler 6.0 on a 2 GHz Pentium IV processor with 512 MB RAM running Windows XP.

6.1 Problem Domain

Given a set of cities and a cost associated with travelling from one city to another, the TSP is to find a path such that each city is visited exactly once and the total cost of travelling is minimised. In the Asymmetric TSP (ATSP), the cost from a city i to another city j may differ from the cost from j to i. TSPTW is a time-constrained variant of the TSP in which each city has an associated visiting time and must be visited within a specific time window. TSPTW is a difficult problem with applications in routing and scheduling, and it has therefore been extensively studied in both Operations Research and CP (see for instance [4, 10]). In the rest of the paper, we consider the asymmetric TSPTW (ATSPTW).
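For concreteness, a feasibility check for an ATSPTW tour in the successor ("Next") representation used later can be sketched in plain Python (the instance data below are invented; this is an illustration, not the model solved in the experiments):

```python
def feasible_tour(next_, travel, release, due, start=0):
    """Follow Next from the start city for n-1 hops, checking that every
    city is visited exactly once and served within its time window
    [release, due]; waiting until a city's release time is allowed."""
    n = len(next_)
    t, city, seen = 0, start, {start}
    for _ in range(n - 1):
        nxt = next_[city]
        t = max(t + travel[city][nxt], release[nxt])  # travel, then wait
        if nxt in seen or t > due[nxt]:
            return False
        seen.add(nxt)
        city = nxt
    return len(seen) == n

# Invented 3-city asymmetric instance: travel[i][j] != travel[j][i].
travel = [[0, 2, 5], [3, 0, 3], [5, 4, 0]]
release = [0, 0, 4]
due = [0, 10, 10]
print(feasible_tour([1, 2, 0], travel, release, due))  # -> True
```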

The difficulty of ATSPTW stems from its optimisation and scheduling parts. ATSP is equivalent to finding a minimum-cost Hamiltonian path in a graph, which is NP-hard. Moreover, scheduling with release and due dates is a difficult satisfiability problem, since it involves complex constraints. Solving an ATSPTW instance thus requires strong optimisation and feasibility reasoning. Scheduling problems are one of the most established application areas of CP (see for instance [1]), and the use of local search techniques greatly enhances the ability of CP to tackle optimisation problems (see [12] for a survey). Since the motivation of our work is to combine the power of constraint propagation for solving feasibility problems with a local search method successful in tackling optimisation problems, ATSPTW is a good choice of problem domain to test the benefits of local branching in CP. Note that we do not aim at competing with the advanced techniques developed specifically for this problem. We would like to show that local branching in CP can be good at solving problems with difficult feasibility and optimisation aspects at the same time.

6.2 Local Branching on ATSPTW

To solve ATSPTW using local branching in CP, we have adopted the model of [10], in which each Next_i variable gives the city to be visited after city i. Each city i is associated with an activity whose variable Start_i indicates the time at which the service of i begins. Apart from the other constraints, the model contains the alldifferent(X) constraint (posted on the Next variables) and the cost function C, so it can be propagated using the cost-based filtering algorithm described in [9]. Since the local branching constraints (again posted on the Next variables) exist in addition to alldifferent(X) and C, we can use the Assignment Problem as Rel and apply the cost-based filtering described in Section 4.
We test different variations of local branching in CP, each tuned for our experiments as described later. We compare the results with a pure CP-based search using the same model, except that additive bounding is absent and thus alldifferent(X) is propagated only by the cost-based filtering algorithm of [9]. The CP model is solved with depth-first search enhanced with constraint propagation, using the primitives available in Solver and Scheduler. For a fair comparison, we improve the search efficiency of CP by using a cost-based scheduling heuristic: at each node of the search tree, the activity associated with the Next_i having the smallest reduced-cost value c̄_ij in its domain is chosen, and Next_i is assigned j. In case of ties, the activity with the earliest start-time and the latest end-time is considered first. In the rest of this section, we refer to this heuristic as H_1, to the methods using some form of local branching in CP as LB, and to the pure CP-based search as CP. The generic local branching framework is employed in the experiments as follows. The initial feasible solution is found using CP with H_1. Each subtree is searched in increasing value of k with a heuristic exploiting knowledge about the costs. While searching a subtree, we are essentially assigning

some variables their values in the last reference solution X̄_m, and some the values not in X̄_m. We refer to the search phase in which we change variables w.r.t. X̄_m as the changing phase, and to the phase in which we assign variables their corresponding values in X̄_m as the fixing phase. We expect the changing phase to be more difficult than the fixing phase, so we fix the less promising variables in the easier phase. Given the reference solution Next̄, we calculate for each Next_i the value δ_i, which is the minimum increase in the cost function incurred by changing it from Next̄_i to a value in D(Next_i) \ {Next̄_i}. We then choose the variables with the highest δ for the fixing phase and leave the others to the changing phase. Note that we have also developed general-purpose heuristics which use only the gathered incumbent solutions, but we do not consider them in this paper for reasons of space. As we want to create subtrees big enough to improve solution quality but small enough to prove optimality fast, we set k to 3 or 5 (the resulting methods are called LB_3 and LB_5, respectively). In the case of node 6, before switching to CP-based search at node 7, we apply conditional restarting, in which a new feasible solution is found using H_1. We also adopt a hybrid diversification method (called LB_H). We search each left subtree with the local branching constraint Δ(X, X̄) ≤ 3. Before conditional restarting, we first enlarge the neighbourhood and search the additional subtree obtained by replacing Δ(X, X̄_3) ≤ 3 with Δ(X, X̄_3) = 4. If an improving solution X̄_4 is found, we continue local branching by reversing the last branching constraint to Δ(X, X̄_3) ≥ 5. Otherwise, we again enlarge and search the new neighbourhood obtained by replacing Δ(X, X̄_3) = 4 with Δ(X, X̄_3) = 5. Now, if an improving solution X̄_4 is found, we continue; otherwise we apply conditional restarting, with the reversed branching constraint Δ(X, X̄_3) ≥ 6 at node 7.
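The δ_i ranking used to split variables between the fixing and changing phases can be sketched as follows (the cost matrix is invented and the names are ours; `domains` holds the current domain of each Next variable):

```python
def fixing_order(C, next_bar, domains):
    """delta_i = cheapest alternative to the reference value minus the
    reference value's own cost. Variables with the highest delta are the
    least promising to change, so they go to the fixing phase first."""
    deltas = []
    for i, v in enumerate(next_bar):
        alt = min(C[i][w] for w in domains[i] if w != v)
        deltas.append(alt - C[i][v])
    order = sorted(range(len(next_bar)), key=lambda i: -deltas[i])
    return order, deltas

# Invented 3-city cost matrix; reference solution Next_bar = [0, 1, 2].
C = [[0, 5, 2], [3, 0, 9], [1, 4, 0]]
order, deltas = fixing_order(C, [0, 1, 2], [range(3)] * 3)
print(deltas)  # -> [2, 3, 1]
print(order)   # -> [1, 0, 2]: fix variable 1 first, then 0, then 2
```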
In all LB methods, we switch to CP-based search only if we again encounter a node like 6 after conditional restarting. In this case, we choose the next variable to assign based on the smallest-domain-first principle. The framework therefore specialises for ATSPTW only in the phase of finding an initial feasible solution.

6.3 Experiments

We have used a heterogeneous set of 242 instances^4, with the number of nodes (i.e. cities) ranging from 17 to 233. Some instances differ only w.r.t. the strictness of the time windows. As a consequence, the experiments are conducted on a wide range of instance structures. The results appear in Tables 1 and 2, in which the number of participating instances is indicated in parentheses. Tables 1(a) and 2(a) report results when we impose a 5-minute cutoff on the overall run-time. Since there is no feasible solution in the neighbourhood defined by Δ(X, X^m) ≤ 2, we reduce the size of the neighbourhood in the large subtrees of LB5. Hence, whilst the left branching constraint in LB3 is kept as 1 ≤ Δ(X, X^m) ≤ 3, it is 3 ≤ Δ(X, X^m) ≤ 5 in LB5. In addition, we exploit edge finding [2] in LB5 for a more effective exploration of the large neighbourhood. In Table 1(a), we report statistics on the instances for which the methods provide solutions of different cost by the cutoff time. We show in the second

4 Available at

column how many times a method provides the best solution and in the third the average gap of the provided solution w.r.t. the best one obtained by the other methods. The first group of rows concerns the instances for which only one method gives the best solution, whilst in the second multiple methods provide the best solution. Here we are not interested in run-times: all methods are run until the cutoff time, so we are interested in the best solution possible. The aim is to show that we can obtain better solutions within a time limit. We observe that LB5 and LB3 are the winners in the two groups, respectively. Moreover, for the instances where CP is the preferred method (the CP rows in both groups), the average gap of the solutions provided by LB3 and LB5 is much smaller than the gap of the CP solutions when LB3 or LB5 is preferred.

Table 1. Instances where the methods provide different solution cost. Part (a) refers to the 5-minute cutoff (86 instances; methods CP, LB3, LB5), part (b) to the 30-minute cutoff (54 instances; methods CP, LB3, LB5, LBH). Each row corresponds to the method providing the best solution and reports how many times this happens, together with the average % gap of every method's solution from the best one on those instances; the first group of rows covers the instances where a single method finds the best solution, the second those where several methods tie. In part (b), CP never provides the best solution in the second group (0 times; all gaps N/A). The remaining numeric entries are not recoverable here.

In Table 2(a), we report statistics on the instances for which the methods provide equally good solutions by the cutoff time. We give statistics by dividing the instances into three disjoint groups. In the first, none of the methods proves optimality by the cutoff time, so we compare the average time to find the best solution. In the second, all methods prove optimality, so we check the average total run-time required to prove optimality. The third group contains the instances which are mixed w.r.t. the completeness of the search. Since we cannot compare run-times in this case, we look at the number of times each method proves optimality.
Such statistics help us support our theses that we can improve solution quality more quickly and that we can prove optimality (more quickly). We observe that LB3 is the winner in all groups, with LB5 being preferable over CP in two. Local branching thus provides clear benefits over CP. The results in the tables support our theses; it is, however, difficult to decide between LB3 and LB5. In Tables 1(b) and 2(b), similar results are reported when we impose a 30-minute cutoff time instead. In addition to the previous methods, we also compare the hybrid method LBH. Moreover, the neighbourhood defined by Δ(X, X^m) ≤ 2 is skipped and edge-finding propagation is incorporated in all methods. Even when more time is given to all methods to complete the search, the benefits of local branching, in particular of LBH, are apparent. The results indicate that LBH is the most robust method, combining the advantages of LB3 and LB5.
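The enlarge-then-reverse control flow behind LBH, described earlier, can be sketched schematically as below. This is a toy sketch, not the paper's implementation: `search(lo, hi)` stands for a hypothetical CP exploration of the neighbourhood lo ≤ Δ(X, X^3) ≤ hi that returns an improving solution or `None`.

```python
def lbh_step(search, k=3):
    """One LBH diversification step around the current reference solution:
    try the standard neighbourhood, then two enlarged rings; if all fail,
    fall back to conditional restarting."""
    sol = search(1, k)          # left subtree: Delta <= k
    if sol is not None:
        return sol, "branch"    # continue local branching from sol
    sol = search(k + 1, k + 1)  # enlarged ring: Delta = k + 1
    if sol is not None:
        return sol, "branch"    # reverse last constraint to Delta >= k + 2
    sol = search(k + 2, k + 2)  # second ring: Delta = k + 2
    if sol is not None:
        return sol, "branch"
    return None, "restart"      # conditional restart; reverse to Delta >= k + 3
```

The point of the two extra rings is that each enlarged neighbourhood excludes everything already explored, so no subtree is ever searched twice before restarting.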

Table 2. Instances where the methods provide the same solution cost. Part (a) refers to the 5-minute cutoff, part (b) to the 30-minute cutoff. When all methods are cut off ((a): 7 instances; (b): 2), the methods are compared on the average time to find the best solution; when none is cut off ((a): 132; (b): 175), on the average total run-time; when at least one but not all are cut off ((a): 13; (b): 6), on the number of times each proves optimality, namely CP 5, LB3 11 and LB5 7 in (a), and CP 1, LB3 5, LB5 5 and LBH 4 in (b). The average-time entries are not recoverable here.

7 Related Work and Conclusions

The local branching technique, as introduced in [7], is a complete search method designed to provide solutions of better and better quality in the early stages of search by systematically defining and exploring large neighbourhoods. On the other hand, the idea has mainly been used in an incomplete manner since [7] (e.g. in the current implementation in ILOG-Cplex 9.1): linear constraints defining large neighbourhoods are iteratively added and the neighbourhoods are explored, sometimes in a non-exhaustive way. When this is done within a local search method, the overall algorithm follows the spirit of both large neighbourhood search [16] and variable neighbourhood search [14]. The main peculiarity of local branching is that the neighbourhoods and their exploration are general purpose. Related work includes the use of propagation within a large neighbourhood search algorithm [15], relaxations in propagation [18], local search to speed up complete search [17], and the use of Hamming distance to find similar and diverse solutions in CP [6]. The local branching framework consists of so many basic components that this list of related work is by no means complete. In summary, we have shown that CP-based local branching raises a variety of issues which do not concern only the implementation.
The neighbourhoods are not defined by linear inequalities and are not explored by MIP techniques; nevertheless, both the definition and the exploration are as general as in the MIP context. One can also use a k-discrepancy constraint [11] to define the neighbourhood and a discrepancy-based technique (in the spirit of limited discrepancy search [5]) for its effective exploration. In this way, both the modelling and the search of neighbourhoods benefit from the important features of CP, as done in the MIP counterpart. CP-based local branching, moreover, benefits from diverse areas: the power of propagation and the effective heuristics of CP, cost-based filtering and additive bounding borrowed from MIP (for proving optimality), and intensification and diversification borrowed from local search (for finding good solutions earlier). Our experiments show the benefits of local branching in CP.
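As a concrete illustration of the neighbourhood definition (a toy sketch, not the paper's implementation), the CP analogue of the MIP local branching constraint bounds the Hamming distance Δ(X, X̄) = Σ_i (X_i ≠ X̄_i) from a reference solution; here a naive enumeration stands in for a real CP solver:

```python
from itertools import product

def hamming(x, x_ref):
    """Delta(x, x_ref): number of variables taking a value different from
    the reference solution."""
    return sum(int(a != b) for a, b in zip(x, x_ref))

def neighbourhood(domains, x_ref, k):
    """All complete assignments within Hamming distance k of x_ref; a real
    CP engine would instead propagate the distance constraint during search
    rather than enumerate the Cartesian product of the domains."""
    return [x for x in product(*domains) if hamming(x, x_ref) <= k]
```

Posting Δ ≤ k as a constraint, rather than filtering solutions afterwards, is what lets propagation and cost-based filtering prune the subtree during search.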

Future research directions include the computation of a stronger and more sophisticated lower bound which takes into account more than one reference solution, and the analysis of problems involving different relaxations.

References

1. Baptiste, P., Le Pape, C., Nuijten, W.: Constraint-Based Scheduling. Kluwer Academic Publishers (2001)
2. Carlier, J., Pinson, E.: An algorithm for solving job shop scheduling. Management Science 35 (1995)
3. Caseau, Y., Laburthe, F.: Solving various weighted matching problems with constraints. In: Proc. of CP-97, LNCS, Springer (1997)
4. Desrosiers, J., Dumas, Y., Solomon, M.M., Soumis, F.: Time constrained routing and scheduling. Network Routing (1995)
5. Harvey, W., Ginsberg, M.L.: Limited discrepancy search. In: Proc. of IJCAI-95, Morgan Kaufmann (1995)
6. Hebrard, E., Hnich, B., O'Sullivan, B., Walsh, T.: Finding diverse and similar solutions in constraint programming. In: Proc. of AAAI-05, AAAI Press / The MIT Press (2005)
7. Fischetti, M., Lodi, A.: Local branching. Mathematical Programming 98 (2003)
8. Fischetti, M., Toth, P.: An additive bounding procedure for combinatorial optimization problems. Operations Research 37 (1989)
9. Focacci, F., Lodi, A., Milano, M.: Optimization-oriented global constraints. Constraints 7 (2002)
10. Focacci, F., Lodi, A., Milano, M.: A hybrid exact algorithm for the TSPTW. INFORMS Journal on Computing 14(4) (2002)
11. Lodi, A., Milano, M., Rousseau, L.M.: Discrepancy based additive bounding for the alldifferent constraint. In: Proc. of CP-03, Volume 2833 of LNCS, Springer (2003)
12. Milano, M.: Constraint and Integer Programming: Toward a Unified Methodology. Kluwer Academic Publishers (2003)
13. Milano, M., van Hoeve, W.: Reduced cost-based ranking for generating promising subproblems. In: Proc. of CP-02, Volume 2470 of LNCS, Springer (2002)
14. Mladenovic, N., Hansen, P.: Variable neighbourhood search. Computers and Operations Research 24 (1997)
15. Perron, L., Shaw, P., Furnon, V.: Propagation guided large neighbourhood search. In: Proc. of CP-04, Volume 3258 of LNCS, Springer (2004)
16. Shaw, P.: Using constraint programming and local search methods to solve vehicle routing problems. In: Proc. of CP-98, Volume 1520 of LNCS, Springer (1998)
17. Sellmann, M., Ansotegui, C.: Disco-Novo-GoGo: integrating local search and complete search with restarts. In: Proc. of AAAI-06, AAAI Press (2006)
18. Sellmann, M., Fahle, T.: Constraint programming based Lagrangian relaxation for the automatic recording problem. Annals of Operations Research 118 (2003) 17-33


More information

A Benders decomposition approach for the robust shortest path problem with interval data

A Benders decomposition approach for the robust shortest path problem with interval data A Benders decomposition approach for the robust shortest path problem with interval data R. Montemanni, L.M. Gambardella Istituto Dalle Molle di Studi sull Intelligenza Artificiale (IDSIA) Galleria 2,

More information

Hybrid strategies of enumeration in constraint solving

Hybrid strategies of enumeration in constraint solving Scientific Research and Essays Vol. 6(21), pp. 4587-4596, 30 September, 2011 Available online at http://www.academicjournals.org/sre DOI: 10.5897/SRE11.891 ISSN 1992-2248 2011 Academic Journals Full Length

More information

ACO and other (meta)heuristics for CO

ACO and other (meta)heuristics for CO ACO and other (meta)heuristics for CO 32 33 Outline Notes on combinatorial optimization and algorithmic complexity Construction and modification metaheuristics: two complementary ways of searching a solution

More information

Propagating separable equalities in an MDD store

Propagating separable equalities in an MDD store Propagating separable equalities in an MDD store T. Hadzic 1, J. N. Hooker 2, and P. Tiedemann 3 1 University College Cork t.hadzic@4c.ucc.ie 2 Carnegie Mellon University john@hooker.tepper.cmu.edu 3 IT

More information

15.082J and 6.855J. Lagrangian Relaxation 2 Algorithms Application to LPs

15.082J and 6.855J. Lagrangian Relaxation 2 Algorithms Application to LPs 15.082J and 6.855J Lagrangian Relaxation 2 Algorithms Application to LPs 1 The Constrained Shortest Path Problem (1,10) 2 (1,1) 4 (2,3) (1,7) 1 (10,3) (1,2) (10,1) (5,7) 3 (12,3) 5 (2,2) 6 Find the shortest

More information

Approximability Results for the p-center Problem

Approximability Results for the p-center Problem Approximability Results for the p-center Problem Stefan Buettcher Course Project Algorithm Design and Analysis Prof. Timothy Chan University of Waterloo, Spring 2004 The p-center

More information

Discrete Optimization. Lecture Notes 2

Discrete Optimization. Lecture Notes 2 Discrete Optimization. Lecture Notes 2 Disjunctive Constraints Defining variables and formulating linear constraints can be straightforward or more sophisticated, depending on the problem structure. The

More information

Coping with the Limitations of Algorithm Power Exact Solution Strategies Backtracking Backtracking : A Scenario

Coping with the Limitations of Algorithm Power Exact Solution Strategies Backtracking Backtracking : A Scenario Coping with the Limitations of Algorithm Power Tackling Difficult Combinatorial Problems There are two principal approaches to tackling difficult combinatorial problems (NP-hard problems): Use a strategy

More information

B553 Lecture 12: Global Optimization

B553 Lecture 12: Global Optimization B553 Lecture 12: Global Optimization Kris Hauser February 20, 2012 Most of the techniques we have examined in prior lectures only deal with local optimization, so that we can only guarantee convergence

More information

A New Algorithm for Singleton Arc Consistency

A New Algorithm for Singleton Arc Consistency A New Algorithm for Singleton Arc Consistency Roman Barták, Radek Erben Charles University, Institute for Theoretical Computer Science Malostranské nám. 2/25, 118 Praha 1, Czech Republic bartak@kti.mff.cuni.cz,

More information

LECTURES 3 and 4: Flows and Matchings

LECTURES 3 and 4: Flows and Matchings LECTURES 3 and 4: Flows and Matchings 1 Max Flow MAX FLOW (SP). Instance: Directed graph N = (V,A), two nodes s,t V, and capacities on the arcs c : A R +. A flow is a set of numbers on the arcs such that

More information

1. Lecture notes on bipartite matching

1. Lecture notes on bipartite matching Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans February 5, 2017 1. Lecture notes on bipartite matching Matching problems are among the fundamental problems in

More information

Constraint Propagation: The Heart of Constraint Programming

Constraint Propagation: The Heart of Constraint Programming Constraint Propagation: The Heart of Constraint Programming Zeynep KIZILTAN Department of Computer Science University of Bologna Email: zeynep@cs.unibo.it URL: http://zeynep.web.cs.unibo.it/ What is it

More information

General properties of staircase and convex dual feasible functions

General properties of staircase and convex dual feasible functions General properties of staircase and convex dual feasible functions JÜRGEN RIETZ, CLÁUDIO ALVES, J. M. VALÉRIO de CARVALHO Centro de Investigação Algoritmi da Universidade do Minho, Escola de Engenharia

More information

Constraint Programming. Global Constraints. Amira Zaki Prof. Dr. Thom Frühwirth. University of Ulm WS 2012/2013

Constraint Programming. Global Constraints. Amira Zaki Prof. Dr. Thom Frühwirth. University of Ulm WS 2012/2013 Global Constraints Amira Zaki Prof. Dr. Thom Frühwirth University of Ulm WS 2012/2013 Amira Zaki & Thom Frühwirth University of Ulm Page 1 WS 2012/2013 Overview Classes of Constraints Global Constraints

More information

Conflict based Backjumping for Constraints Optimization Problems

Conflict based Backjumping for Constraints Optimization Problems Conflict based Backjumping for Constraints Optimization Problems Roie Zivan and Amnon Meisels {zivanr,am}@cs.bgu.ac.il Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, 84-105,

More information

VNS-based heuristic with an exponential neighborhood for the server load balancing problem

VNS-based heuristic with an exponential neighborhood for the server load balancing problem Available online at www.sciencedirect.com Electronic Notes in Discrete Mathematics 47 (2015) 53 60 www.elsevier.com/locate/endm VNS-based heuristic with an exponential neighborhood for the server load

More information

The Encoding Complexity of Network Coding

The Encoding Complexity of Network Coding The Encoding Complexity of Network Coding Michael Langberg Alexander Sprintson Jehoshua Bruck California Institute of Technology Email: mikel,spalex,bruck @caltech.edu Abstract In the multicast network

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 9 Discrete optimization: theory and algorithms

MVE165/MMG631 Linear and integer optimization with applications Lecture 9 Discrete optimization: theory and algorithms MVE165/MMG631 Linear and integer optimization with applications Lecture 9 Discrete optimization: theory and algorithms Ann-Brith Strömberg 2018 04 24 Lecture 9 Linear and integer optimization with applications

More information

Constraint-Based Scheduling: An Introduction for Newcomers

Constraint-Based Scheduling: An Introduction for Newcomers Constraint-Based Scheduling: An Introduction for Newcomers Roman Barták * Charles University in Prague, Faculty of Mathematics and Physics Malostranské námestí 2/25, 118 00, Praha 1, Czech Republic bartak@kti.mff.cuni.cz

More information

Variable Neighborhood Search for Solving the Balanced Location Problem

Variable Neighborhood Search for Solving the Balanced Location Problem TECHNISCHE UNIVERSITÄT WIEN Institut für Computergraphik und Algorithmen Variable Neighborhood Search for Solving the Balanced Location Problem Jozef Kratica, Markus Leitner, Ivana Ljubić Forschungsbericht

More information

Principles of Optimization Techniques to Combinatorial Optimization Problems and Decomposition [1]

Principles of Optimization Techniques to Combinatorial Optimization Problems and Decomposition [1] International Journal of scientific research and management (IJSRM) Volume 3 Issue 4 Pages 2582-2588 2015 \ Website: www.ijsrm.in ISSN (e): 2321-3418 Principles of Optimization Techniques to Combinatorial

More information

Selected Topics in Column Generation

Selected Topics in Column Generation Selected Topics in Column Generation February 1, 2007 Choosing a solver for the Master Solve in the dual space(kelly s method) by applying a cutting plane algorithm In the bundle method(lemarechal), a

More information

Branch-and-bound: an example

Branch-and-bound: an example Branch-and-bound: an example Giovanni Righini Università degli Studi di Milano Operations Research Complements The Linear Ordering Problem The Linear Ordering Problem (LOP) is an N P-hard combinatorial

More information

Constraint Programming

Constraint Programming Constraint Programming - An overview Examples, Satisfaction vs. Optimization Different Domains Constraint Propagation» Kinds of Consistencies Global Constraints Heuristics Symmetries 7 November 0 Advanced

More information

CS 580: Algorithm Design and Analysis. Jeremiah Blocki Purdue University Spring 2018

CS 580: Algorithm Design and Analysis. Jeremiah Blocki Purdue University Spring 2018 CS 580: Algorithm Design and Analysis Jeremiah Blocki Purdue University Spring 2018 Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved.

More information

Branch-price-and-cut for vehicle routing. Guy Desaulniers

Branch-price-and-cut for vehicle routing. Guy Desaulniers Guy Desaulniers Professor, Polytechnique Montréal, Canada Director, GERAD, Canada VeRoLog PhD School 2018 Cagliari, Italy, June 2, 2018 Outline 1 VRPTW definition 2 Mathematical formulations Arc-flow formulation

More information

Constructive and destructive algorithms

Constructive and destructive algorithms Constructive and destructive algorithms Heuristic algorithms Giovanni Righini University of Milan Department of Computer Science (Crema) Constructive algorithms In combinatorial optimization problems every

More information

Computational problems. Lecture 2: Combinatorial search and optimisation problems. Computational problems. Examples. Example

Computational problems. Lecture 2: Combinatorial search and optimisation problems. Computational problems. Examples. Example Lecture 2: Combinatorial search and optimisation problems Different types of computational problems Examples of computational problems Relationships between problems Computational properties of different

More information

Other approaches. Marco Kuhlmann & Guido Tack Lecture 11

Other approaches. Marco Kuhlmann & Guido Tack Lecture 11 Other approaches Marco Kuhlmann & Guido Tack Lecture 11 Plan for today CSP-based propagation Constraint-based local search Linear programming CSP-based propagation History of CSP-based propagation AI research

More information