Nogood Recording for Valued Constraint Satisfaction Problems.


Pierre DAGO and Gérard VERFAILLIE
ONERA-CERT, 2 avenue Edouard Belin, Toulouse Cedex, France

Abstract

In the frame of classical Constraint Satisfaction Problems (CSPs), the backtrack tree search, combined with learning methods, presents a double advantage: for static solving, it improves the search speed by avoiding redundant explorations; for dynamic solving (after a slight change of the problem), it reuses the previous searches to build a new solution quickly. Backtrack reasoning concludes that certain combinations of choices must be rejected; Nogood Recording memorizes these combinations in order not to reproduce them. We aim to use Nogood Recording in the wider scope of the Valued CSP framework (VCSP) to enhance the branch and bound algorithm. To this end, nogoods are used to increase the lower bound used by the branch and bound to prune the search. This issue leads to the definition of valued nogoods and of their use. This study focuses particularly on penalty and dynamic VCSPs, which require special developments. However, our results extend Nogood Recording to the general VCSP framework.

1 Introduction.

Complete solving algorithms for constraint satisfaction problems (CSPs) follow the frame of the basic backtrack tree search. Various mechanisms have enhanced this initial frame: Forward Checking, Real Full Lookahead [5], Backjumping [2, 7], Nogood Recording [2, 8]. Real problems often require an optimization capability, when it is not possible to satisfy all the constraints or when the constraints are preferential rather than absolute. The definition of valued constraint satisfaction problems (VCSPs) [3] and the corresponding complete solving tool [4, 3], based on the depth-first branch and bound algorithm, answers this need. The enhancements proposed for this basic algorithm (based on the comparison between an upper bound and a lower bound) focus on two points. The first consists in finding an upper bound as low as possible; related methods are heuristic and tend to construct solutions closer to the optimum earlier. The second issue is to produce a lower bound as high as possible; the corresponding algorithms follow the principle of lookahead [4, 3, 10].

The aim of this study is to construct a better lower bound using learning methods. More precisely, we extend Nogood Recording to the scope of VCSPs. Section 2 is a recall of the CSP and VCSP models. Sections 3 and 4 give a definition of valued nogoods. Then, in the last two sections, we propose several ways to use them and discuss their performances for static and dynamic [8, 9] Penalty CSP [4] solving.

2 Classical CSP & valued CSP

A CSP is a pair (V, C) where V is a set of variables and C a set of constraints. A finite set of values is associated with each variable. We denote $x_i$ either the value i of the variable x, or the assignment of the value i to the variable x (x = i). A constraint determines the forbidden combinations of values for a set of variables $\{x_{c_1}, x_{c_2}, \dots, x_{c_n}\}$. It can be represented as:

$c = [(x_{c_1} \neq i_{11}) \lor \dots \lor (x_{c_n} \neq i_{n1})] \land [(x_{c_1} \neq i_{12}) \lor \dots \lor (x_{c_n} \neq i_{n2})] \land \dots \land [(x_{c_1} \neq i_{1m}) \lor \dots \lor (x_{c_n} \neq i_{nm})]$

We note $\tilde{c} = \{x_{c_1}, x_{c_2}, \dots, x_{c_n}\}$ the set of variables involved in the constraint c, and $\tilde{C}$ the set of variables involved in the constraints of the set C. We call a partial assignment a set of assigned variables $A = \{x_{k,i_k}\}$ (each variable $x_k$ in it is assigned the value $i_k$). When all of the variables are assigned, the assignment is said to be complete.
We note $\tilde{A} = \{x_k\}$ the subset of variables that are assigned by A.
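As a purely illustrative aside (the names and data layout are ours, not the paper's), a constraint in this representation can be stored as its scope together with its forbidden tuples, and local consistency can then be checked directly:

```python
# Minimal sketch (not from the paper): constraints as forbidden tuples over a
# scope, and the local-consistency test of a partial assignment.

def violated(constraint, assignment):
    """A constraint (scope, forbidden) is violated iff its whole scope is
    assigned and the assigned tuple is one of the forbidden combinations."""
    scope, forbidden = constraint
    if not all(x in assignment for x in scope):
        return False                       # constraint not fully assigned yet
    return tuple(assignment[x] for x in scope) in forbidden

def locally_consistent(constraints, assignment):
    return not any(violated(c, assignment) for c in constraints)

# Example: x != y over the domain {1, 2} forbids the pairs (1, 1) and (2, 2).
c_xy = (("x", "y"), {(1, 1), (2, 2)})
print(locally_consistent([c_xy], {"x": 1}))            # True: y is not assigned
print(locally_consistent([c_xy], {"x": 1, "y": 1}))    # False: (1, 1) is forbidden
```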

Let A be a partial assignment and x a variable in $V \setminus \tilde{A}$; the assignment $A \cup \{x_i\}$ is called an extension of A on the variable x with the value $x_i$. Let A′ be a partial assignment such that $\tilde{A'} = V \setminus \tilde{A}$; $A \cup A'$ is called a complete extension of A. A solution of the CSP is a complete assignment that satisfies every constraint. A partial assignment A which satisfies all of the constraints it assigns (constraints whose variables all belong to the assignment) is said to be locally consistent: $\forall c \in C$ such that $\tilde{c} \subseteq \tilde{A}$, $A \models c$. A partial assignment A such that there exists a complete extension of A which is a solution is said to be globally consistent.

The VCSP framework defined in [3] does not aim at satisfying all constraints but at minimizing a valuation depending on the satisfaction of the constraints. More precisely, we need:
- a set of valuations E (to evaluate constraints and assignments), with a total order $\preceq$ (to compare valuations), a minimum element $\bot$ (expressing the satisfaction of a constraint or the consistency of an assignment) and a maximum element $\top$ (expressing an unacceptable violation);
- an application $\varphi$ from C to E such that $\varphi(c)$ expresses the importance of a constraint c;
- an operator $\oplus$ to aggregate valuations, which is commutative, associative, monotonic with respect to $\preceq$, and such that $\bot$ is a neutral element and $\top$ an absorbing element.

So, the valuation of a complete assignment is the aggregation of the valuations of the unsatisfied constraints. A solution is a complete assignment whose valuation is lower than $\top$. It is an optimal solution if there is no other solution with a lower valuation. We call the valuation of a problem the valuation of its optimal solutions. The local valuation of a partial assignment A is the aggregation of the valuations of the constraints assigned by A ($\tilde{c} \subseteq \tilde{A}$) and violated ($A \models \neg c$):

$v(A) = \bigoplus_{c \in C \,/\, \tilde{c} \subseteq \tilde{A},\ A \models \neg c} \varphi(c)$

We call global valuation of an assignment A the minimum of the valuations of the complete extensions of A. Because of the monotonicity of the aggregation operator, the local valuation of any partial assignment is less than or equal to its global valuation. A partial assignment is said to be locally (resp. globally) consistent if its local (resp. global) valuation is lower than $\top$.

Several classes of VCSPs belong to this frame. We can quote some of them:
- possibilistic VCSP: $(E, \preceq, \bot, \top, \oplus) = ([0,1], <, 0, 1, \max)$;
- penalty VCSP: $(E, \preceq, \bot, \top, \oplus) = (\mathbb{N} \cup \{+\infty\}, <, 0, +\infty, +)$; Partial CSP [4] is the subset of penalty VCSPs with $\forall c, \varphi(c) = 1$;
- classical CSP: $(E, \preceq, \bot, \top, \oplus) = (\{0,1\}, <, 0, 1, \lor)$, where 0 means satisfaction and 1 violation.

3 Valued nogoods.

3.1 Definitions.

Definition 3.1 A classical nogood is a partial assignment which is globally inconsistent. A valued nogood is a pair (A, v), where A is a partial assignment and v a valuation, such that every complete extension of A has a valuation greater than or equal to v (v is less than or equal to the global valuation of A).

CSPs are particular instances of VCSPs, and a classical nogood A corresponds to the valued nogood $(A, \top)$. We present some basic properties of valued nogoods.

Properties If (A, v) is a nogood, $v' \preceq v$ and $A \subseteq A'$, then (A, v′) is a nogood and (A′, v) is a nogood.

Proof (3.0.1): The first point is obvious: if v is less than or equal to the global valuation of A, so is v′. For the second, let A″ be a complete extension of A′; it is also a complete extension of A, so $v(A'') \succeq v$.

So, some nogoods can be defined from others. We will focus on the most relevant nogoods, with a high valuation and based on a small number of variables.
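Before turning to how nogoods are built, a small sketch may help make the valuation structures of section 2 and the local valuation v(A) concrete; the class name, the constraint encoding and the example values are assumptions made for this illustration, not the paper's notation.

```python
# Sketch of the VCSP classes above as (bottom, top, aggregation) triples, and of
# the local valuation v(A); the constraint encoding is an assumption.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ValuationStructure:
    bottom: Any                          # minimum element: satisfaction
    top: Any                             # maximum element: unacceptable violation
    combine: Callable[[Any, Any], Any]   # aggregation operator

PENALTY = ValuationStructure(0, float("inf"), lambda a, b: a + b)
POSSIBILISTIC = ValuationStructure(0.0, 1.0, max)
CLASSICAL = ValuationStructure(False, True, lambda a, b: a or b)

def local_valuation(s, constraints, assignment):
    """v(A): aggregate phi(c) over the constraints whose scope is fully assigned
    by A and which A violates; each constraint is (scope, check, phi)."""
    v = s.bottom
    for scope, check, phi in constraints:
        if all(x in assignment for x in scope) and not check(assignment):
            v = s.combine(v, phi)
    return v

# Penalty example: two weighted constraints over x, y with domain {1, 2}.
cs = [(("x", "y"), lambda A: A["x"] != A["y"], 2),
      (("x",), lambda A: A["x"] == 1, 1)]
print(local_valuation(PENALTY, cs, {"x": 2, "y": 2}))   # 2 + 1 = 3
```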
3.2 Building nogoods.

The aim of the two properties presented below is to build nogoods from the information collected during the search, as partial assignments are evaluated.

Property 3.1 $\forall A$, (A, v(A)) is a valued nogood.

Proof (3.1): The local valuation of any partial assignment A is less than or equal to its global valuation.

The next property unites a particular set of nogoods in order to produce a more relevant nogood.

Property 3.2 Let A be a partial assignment and $x \notin \tilde{A}$ a variable. Let $A_1, \dots, A_n$ be all the possible extensions of A with each possible value $x_1, \dots, x_n$ of x. If $(A_1, v_1), \dots, (A_n, v_n)$ are nogoods, then $(A, \min_i(v_i))$ is a nogood.

Proof (3.2): Let A′ be a complete extension of A. A′ is complete, so some value $x_k$ is assigned to x. A′ is therefore a complete extension of $A_k = A \cup \{x_k\}$. Then $v(A') \succeq v_k \succeq \min_i(v_i)$.

4 Nogoods for penalty & dynamic VCSP.

In classical CSPs, it is interesting to consider the nogoods that are produced as new constraints, and to include them in the problem. A tuple (or partial assignment) that is forbidden by a nogood is then added to the list of the forbidden tuples of the constraint that links the variables of the nogood (if necessary, a new constraint is created). Adding constraints induced by the CSP necessarily produces redundancies in the violations. This method can therefore only be valid if the aggregation operator is idempotent ($\forall v, v \oplus v = v$). Otherwise, duplicating a constraint with a valuation v would amount to replacing its cost by $v \oplus v$. More generally, each redundancy can increase the assignment valuations and therefore change the problem definition. Now, the aggregation of valuations is all the more important in the non-idempotent cases: a constraint or a nogood does not usually produce an inconsistency without being added to other constraints. Therefore, we tend to keep in nogoods relevant information about the constraints upon which they are based, in order to avoid any duplication of valuations.

Moreover, as a nogood is a consequence of the CSP constraints, any relaxation of the CSP jeopardizes the collected information. We must be able to distinguish the nogoods which are still valid from those which are no longer justified. To take dynamic problems into account, it is necessary to justify a nogood by the constraints it is based on. Therefore, two reasons lead us to give a justification to the nogoods: the ability to deal with dynamic CSPs, and the aggregation of nogoods with a non-idempotent operator. It is then necessary to state a more complete definition of the valued nogoods. The following still takes place in the general scope of the VCSP, although it is only useful for dynamic or non-idempotent problems (penalty VCSPs for instance).

4.1 Definition.

We note $A_{\downarrow V}$ the projection of a partial assignment A on the subset of variables V: $A_{\downarrow V} = \{x_i \in A \,/\, x \in V\}$. We denote $v_C(A)$ the valuation of a partial assignment A on the problem restricted to the subset of constraints C: $v_C(A) = \bigoplus_{c \in C \,/\, A \models \neg c} \varphi(c)$. Note that $v_C(A) = v_C(A_{\downarrow \tilde{C}})$.

Definition 4.1 A valued nogood is a triple (A, v, C), where A is a partial assignment, v a valuation and C a set of constraints (called justification), such that v is less than or equal to the global valuation of A on the problem restricted to the constraints in C ($\forall A'$ complete extension of A, $v \preceq v_C(A')$).

A classical nogood A corresponds to the valued nogood $(A, \top, C)$.

Properties (sorting) Let (A, v, C) be a nogood, $A \subseteq A'$, $v' \preceq v$ and $C \subseteq C'$. Then (A′, v, C), (A, v′, C) and (A, v, C′) are nogoods.

Proof (4.0.1): The first two nogoods follow from properties 3.0.1 applied to the problem restricted to C. The third follows from the monotonicity of $\oplus$: let A″ be a complete extension of A; $C \subseteq C' \Rightarrow v_{C'}(A'') \succeq v_C(A'') \succeq v$.

These properties implicitly define the most relevant nogoods, based on few variables and constraints and with a high valuation. In practice, a compromise between these three criteria is necessary. The set of constraints justifying the nogood allows us to state an additional theorem to remove useless values from the forbidden assignment.

Theorem 4.1 (projection) Let (A, v, C) be a nogood. $(A_{\downarrow \tilde{C}}, v, C)$ is a nogood.
Proof (4.1): Let A′ be a complete extension of $A_{\downarrow \tilde{C}}$, and let A″ be the complete extension of A with $A'_{\downarrow V \setminus \tilde{A}}$. We have $A''_{\downarrow \tilde{C}} = A'_{\downarrow \tilde{C}}$. Then $v_C(A') = v_C(A'_{\downarrow \tilde{C}}) = v_C(A''_{\downarrow \tilde{C}}) = v_C(A'') \succeq v$.

The following theorems are translations of the building properties 3.1 and 3.2 of the previous section.

Theorem 4.2 (building) $\forall A, C$, $(A, v_C(A), C)$ is a nogood. A corollary is that, if C is the set of constraints unsatisfied by A, (A, v(A), C) is a nogood (since $v(A) = v_C(A)$).

Proof (4.2): This is a consequence of property 3.1 applied to the problem restricted to C.

Theorem 4.3 (building) Let A be a partial assignment and $x \notin \tilde{A}$ a variable. Let $A_1, \dots, A_n$ be all the possible extensions of A with each possible value $x_1, \dots, x_n$ of x. If $(A_1, v_1, C_1), \dots, (A_n, v_n, C_n)$ are nogoods, then $(A, \min_i(v_i), \bigcup_i C_i)$ is a nogood.

Proof (4.3): Using properties 4.0.1, we can state that the $(A_k, v_k, \bigcup_i C_i)$ are nogoods. The result is obtained by applying property 3.2 to the problem restricted to $\bigcup_i C_i$.
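To make the two building theorems concrete, here is a small Python sketch for the penalty case ($\oplus$ is +, $\bot$ is 0); the data layout (constraints as dictionaries with a name, a scope, a check function and a valuation phi) is our own illustration, not the paper's.

```python
# Illustrative sketch of theorems 4.2 and 4.3 for a penalty VCSP; the constraint
# representation (name, scope, check, phi) is an assumption made for the example.

def build_nogood(assignment, constraints):
    """Theorem 4.2: (A, v_C(A), C), with C the set of constraints violated by A."""
    violated = [c for c in constraints
                if all(x in assignment for x in c["scope"])
                and not c["check"](assignment)]
    return (dict(assignment),
            sum(c["phi"] for c in violated),           # penalty aggregation (+)
            frozenset(c["name"] for c in violated))    # justification C

def unite_nogoods(assignment, nogoods_per_value):
    """Theorem 4.3: if (A U {x=i}, v_i, C_i) is a nogood for every value i of x,
    then (A, min_i v_i, U_i C_i) is a nogood."""
    return (dict(assignment),
            min(v for _, v, _ in nogoods_per_value),
            frozenset().union(*(C for _, _, C in nogoods_per_value)))

# Example: x, y in {0, 1}, one constraint "c" requiring x != y with phi(c) = 2.
cs = [{"name": "c", "scope": ("x", "y"), "check": lambda A: A["x"] != A["y"], "phi": 2}]
n0 = build_nogood({"x": 0, "y": 0}, cs)   # ({'x': 0, 'y': 0}, 2, frozenset({'c'}))
n1 = build_nogood({"x": 0, "y": 1}, cs)   # ({'x': 0, 'y': 1}, 0, frozenset())
print(unite_nogoods({"x": 0}, [n0, n1]))  # ({'x': 0}, 0, frozenset({'c'}))
```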

4.2 Adding nogoods.

The new definition of nogoods allows us to aggregate several nogoods into a new one, with a higher valuation.

Property 4.1 (adding) Let A and A′ be two subsets of the same partial assignment, and let (A, v, C) and (A′, v′, C′) be two nogoods such that $C \cap C' = \emptyset$. Then $(A \cup A', v \oplus v', C \cup C')$ is a nogood.

Proof (4.1): Let A″ be a complete extension of $A \cup A'$. We have

$v_{C \cup C'}(A'') = \bigoplus_{c \in C \cup C' \,/\, A'' \models \neg c} \varphi(c)$

But $C \cap C' = \emptyset$, thus

$v_{C \cup C'}(A'') = \Big(\bigoplus_{c \in C \,/\, A'' \models \neg c} \varphi(c)\Big) \oplus \Big(\bigoplus_{c \in C' \,/\, A'' \models \neg c} \varphi(c)\Big) = v_C(A'') \oplus v_{C'}(A'') \succeq v \oplus v'$

If we are to add nogoods with overlapping justifications, it is necessary to subtract redundant valuations. Therefore, we define the equivalent of a subtraction operator. We must lay down an additional condition on the valuation structure: let $(E, \preceq, \bot, \top, \oplus)$ be a valuation structure; for all $y \preceq x$, a valuation $z \in E$ such that $x = y \oplus z$ must exist.

Definition 4.2 Let $(E, \preceq, \bot, \top, \oplus)$ be a valid valuation structure. The subtraction operator $\ominus$ is defined by: if $y \preceq x$, $x \ominus y = \min(z \,/\, x = y \oplus z)$; otherwise $x \ominus y = \bot$.

Some properties are deduced from this definition:

Properties
- Let x and y be two valuations: $(x \ominus y) \oplus y \succeq x$.
- Let x and y be two valuations: $(x \oplus y) \ominus y \preceq x$.
- $\ominus$ is monotonic: let x, y and z be three valuations; if $x \succeq y$ then $x \ominus z \succeq y \ominus z$.

Proof (4.1.1): The first point is obvious: if $y \preceq x$, $(x \ominus y) \oplus y = x$ by definition; else $(x \ominus y) \oplus y = \bot \oplus y = y \succeq x$. The second is also obvious: necessarily $y \preceq x \oplus y$; then $(x \oplus y) \ominus y$ is, by definition, the minimum valuation z such that $z \oplus y = x \oplus y$; since x is such a valuation, $(x \oplus y) \ominus y$ is less than or equal to x. Let us prove the third. If $z \succeq y$, $y \ominus z = \bot$ and the property is satisfied. Otherwise, let $u_{min} = y \ominus z = \min(u \,/\, u \oplus z = y)$. Since $x \succeq y$, the first property gives $(x \ominus z) \oplus z \succeq x \succeq y$. We have the choice between two cases: either $(x \ominus z) \oplus z = y$, and then $x \ominus z \succeq u_{min} = y \ominus z$ because $u_{min}$ is a lower bound of such valuations; or $(x \ominus z) \oplus z \succ y = u_{min} \oplus z$, and we conclude with the monotonicity of $\oplus$ that $x \ominus z \succeq y \ominus z$.

In particular cases: in possibilistic VCSPs, $x \ominus y$ is equal to x if $y \prec x$, and to 0 otherwise; in penalty VCSPs, $x \ominus y = \max(0, x - y)$.

This definition allows us to state two last theorems. The first reduces the set of constraints in a justification. The most important objective is to remove duplicated constraints from the justifications of two nogoods in order to satisfy the hypothesis of property 4.1 in any case; this is the second theorem. The first theorem would also be useful to remove constraints whose penalty has been decreased by a dynamic change.

Theorem 4.4 (subtraction) Let (A, v, C) be a nogood and C′ a subset of C. $(A, v \ominus \bigoplus_{c \in C'} \varphi(c), C \setminus C')$ is a nogood.

Proof (4.4): Let A′ be a complete extension of A. Because of the monotonicity of the aggregation operator,

$\bigoplus_{c \in C'} \varphi(c) \succeq v_{C'}(A') \;\Rightarrow\; v_{C \setminus C'}(A') \oplus \bigoplus_{c \in C'} \varphi(c) \succeq v_{C \setminus C'}(A') \oplus v_{C'}(A') = v_C(A') \succeq v$

Finally, properties 4.1.1 lead to the conclusion $v_{C \setminus C'}(A') \succeq v \ominus \bigoplus_{c \in C'} \varphi(c)$.

Theorem 4.5 (adding) Let (A, v, C) and (A′, v′, C′) be two nogoods. $(A \cup A', (v \oplus v') \ominus \bigoplus_{c \in C \cap C'} \varphi(c), C \cup C')$ is a nogood.

Proof (4.5): Following theorem 4.4, $(A', v' \ominus \bigoplus_{c \in C \cap C'} \varphi(c), C' \setminus (C \cap C'))$ is a nogood. Adding (A, v, C) with property 4.1 shows that $(A \cup A', (v \oplus v') \ominus \bigoplus_{c \in C \cap C'} \varphi(c), C \cup C')$ is a nogood, since $(v' \ominus v'') \oplus v'' \succeq v' \Rightarrow v \oplus (v' \ominus v'') \oplus v'' \succeq v \oplus v' \Rightarrow v \oplus (v' \ominus v'') \succeq (v \oplus v') \ominus v''$.
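The following sketch, again for the penalty case where $x \ominus y = \max(0, x - y)$, shows how two nogoods with overlapping justifications can be combined as in theorem 4.5; the nogood representation and the mapping phi are the same illustrative assumptions as in the previous sketch.

```python
# Sketch of the subtraction operator and of theorem 4.5 for penalty VCSPs;
# nogoods are (assignment, valuation, justification) triples as sketched above.

def minus(x, y):
    """x (-) y in a penalty VCSP: max(0, x - y)."""
    return max(0, x - y)

def add_nogoods(ng1, ng2, phi):
    """Theorem 4.5: from nogoods (A, v, C) and (A', v', C'), build
    (A U A', (v + v') (-) sum of phi(c) for c in C inter C', C U C')."""
    (a1, v1, c1), (a2, v2, c2) = ng1, ng2
    shared = sum(phi[name] for name in c1 & c2)   # valuations counted twice
    return ({**a1, **a2}, minus(v1 + v2, shared), c1 | c2)

# Two nogoods whose justifications both contain constraint "c3" (phi(c3) = 2):
phi = {"c1": 1, "c2": 1, "c3": 2}
n1 = ({"x1": "a"}, 3, frozenset({"c1", "c3"}))
n2 = ({"x3": "a"}, 2, frozenset({"c2", "c3"}))
print(add_nogoods(n1, n2, phi))
# ({'x1': 'a', 'x3': 'a'}, 3, frozenset({'c1', 'c2', 'c3'}))
```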
5 Using nogoods.

5.1 A basic algorithm.

We describe here, for the purpose of explanation, a basic way to use valued nogoods during a branch and bound tree search. Three functions are useful: the first, Evaluate, computes a lower bound on the valuations of the complete extensions of a partial assignment and returns the corresponding nogood; the second, Union, unites nogoods when all the values of a variable have failed; the third, Store, records relevant nogoods.

Let (V, C) be a VCSP, and α an upper bound initially set to $\top$. A generic algorithm is given by the function STEP (figure 1), called with $A = \emptyset$. STEP($\emptyset$) returns a nogood $(\emptyset, v_{min}, C_{min})$ where $v_{min}$ is the problem valuation, and $C_{min}$ is a sufficient subset of constraints which implies that no complete assignment has a valuation less than $v_{min}$ (the optimal valuation of the problem restricted to the constraints in $C_{min}$ is also $v_{min}$).

5 STEP(A) If à = V then A is a solution with a valuation lower than. v(a) return (A; v(a); C) Else et x be variable in V? Ã, N = ; For each possible value i for x et n Evaluate(A [ fx i g) If the valuation of n then N N [ fng Else N N [ STEP(A [ fx i g) et n union Union(N ) Store(A; n union) return n union Stored Nogoods Tree-search diagram Evaluation (lower bound < upper bound) Evaluation (lower bound upper bound) Nogood returned from Evaluation Nogoods Union and Recording Nogood returned from Union Figure 1. Algorithm. has a valuation less than v min (the optimal valuation of the problem restricted to the constraints in C min is also v min ). Evaluate(A) aims to produce a nogood based on the assignment A. This nogood being used to prune the search, the aim is to maximize its valuation (the nogood gives a lower bound of A s global valuation. If this valuation is not acceptable ( ), it is useless to develop the branch). Evaluate is based on the backward checking and builds a nogood V = (A; v(a); C), using theorem 4.2. This nogood V corresponds to the lower bound used by an ordinary branch and bound search. In order to improve it, Evaluate checks whether there exists a stored nogood (A i ; v i ; C i ) which is appropriate (A i A) and whose valuation is greater than v(a). The nogood with the highest valuation is returned. The aim of the function Union(N ) is to synthesize the information contained in a set of nogoods. It takes place when all the possible values for a variable have been rejected. Then, we have a set of nogoods N. Each one gives a lower bound of the global valuation of the extension of the current assignment with a value of the explored variable. Then, the function produces a new nogood with the building theorem 4.3. Store(A; n) records a nogood n built during the attempt to extend the assignment A. When a nogood is produced, all the variables in the current partial assignment are involved in the assignment of the nogood. As a tree search never explores the same partial assignment twice, we must reduce the assignment of the nogood. The most natural way to do this, used in classical nogood recording, is to use the projection theorem 4.1. It does not change the valuation nor the justification of the nogood but removes useless values. Then, if some values have been eliminated, the nogood will be stored Example. X1 a b c X2 a b c C1 : X1>X3 C1 : X2>X3 X3 a b C1 : X4>X3 C1 : X5>X3 Figure 2. Example. X4 a b C1 : X5>X4 X5 a b Consider the problem in figure 2. Figure 3 illustrate the tree explored by a standard branch and bound and by our basic algorithm (in grey bold). We also represented the nogood stored during the search (in bold those s that are useful to prune the search). The production of the first interesting nogood occurs while trying to extend the partial assignment A = fx 1 b ; X3 a ; X4 a g. The extension with X 5 a and X 5 b returns the nogoods (fx 1 b ; X3 a ; X4 a ; X5 a g; 4; fc 1 ; C 3 ;

Example.

[Figure 2. Example: variables X1 and X2 with domain {a, b, c}, variables X3, X4 and X5 with domain {a, b}; constraints C1: X1 > X3, C2: X2 > X3, C3: X4 > X3, C4: X5 > X3, C5: X5 > X4.]

Consider the problem in figure 2. Figure 3 illustrates the tree explored by a standard branch and bound and by our basic algorithm (in grey bold). We also represent the nogoods stored during the search (in bold, those that are useful to prune the search).

[Figure 3. Tree search over X1, X2, X3, X4, X5 with the constraints C1: X3 < X1, C2: X3 < X2, C3: X4 > X3, C4: X5 > X3, C5: X5 > X4, comparing branch & bound, basic nogood recording and relevant nogood recording, with the recorded nogoods and the upper bound α.]

The production of the first interesting nogood occurs while trying to extend the partial assignment A = {X1=a, X3=a, X4=a}. The extensions with X5=a and X5=b return the nogoods ({X1=a, X3=a, X4=a, X5=a}, 4, {C1, C3, C4, C5}) and ({X1=a, X3=a, X4=a, X5=b}, 2, {C1, C3}). Their union ({X1=a, X3=a, X4=a}, 2, {C1, C3, C4, C5}) is then carried out. The projection of this nogood shows that X2 is not involved in the justification; the corresponding value is therefore removed and the resulting nogood stored. The projection has the same effect when backtracking on X3, and a new nogood ({X1=a, X3=a}, 2, {C1, C3, C4, C5}) is stored; this one will be useful. While trying to extend {X1=c, X3=a}, the backward checking gives a lower bound of 1, but the stored nogood can be used and gives a higher lower bound, high enough to prune the search.

There are two main criticisms of this basic version. First, although several nogoods are produced, only a few of them are useful: even after a projection, the assignment is much too big to be reproduced during the search. Second, the number of nogoods grows (in the worst case) exponentially with the size of the problem; a practical algorithm will have to select the nogoods that should be stored.

5.2 Improvements.

The nogoods, as they are built in the basic algorithm, give a lower bound on the subtree extending the current assignment. But the backward checking (the nogood V) already provides a part of this information. In other words, the nogoods can be redundant with the definition of the problem and learn information obviously accessible with a simple backward checking. For instance, the first nogood stored, ({X1=a, X3=a, X4=a}, 2, {C1, C3, C4, C5}), is redundant with the constraint C1: if the nogood can be used, then at least X1=a and X3=a are included in the partial assignment, so it is obvious that the constraint C1 is not satisfied. The only information in the nogood that should be stored is the over-cost of 1 justified by the other constraints.

The functions are then redefined as follows:

Store(A, n): let C be the set of constraints in the justification of n violated by the assignment A (given by the backward checking); subtract C from n and project (theorems 4.4 and 4.1).

Evaluate(A): construct V = (A, v(A), C), C being the set of constraints that appear violated with a backward checking on A; add V to every stored nogood (theorem 4.5); return the nogood with the highest valuation.

This operation (subtracting and adding) does not lose any information while reducing the justification. The projection is then more efficient, and thus the nogoods more useful.

Let us reconsider the example and figure 3. While exploring the partial assignment {X1=a, X2=a, X3=a, X4=b} with a current nogood V = ({X1=a, X2=a, X3=a, X4=b}, 2, {C1, C2}), the extensions with X5 return the nogood ({X1=a, X2=a, X3=a, X4=b}, 3, {C1, C2, C4, C5}). The violations of C1 and C2 are already included in V;
their subtraction leads to the smaller nogood ({X1=a, X2=a, X3=a, X4=b}, 1, {C4, C5}), which can be reduced by projection and then stored.
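A possible reading of these redefinitions, in the penalty case and with the same illustrative nogood representation as before, is sketched below; the helper names phi and scope_of are ours, not the paper's.

```python
# Sketch of the redefined Store and Evaluate of this section (penalty case);
# helper names (phi, scope_of) and data layout are illustrative assumptions.

def improved_store(A, nogood, backward_violated, phi, scope_of, stored):
    """Subtract from the nogood the constraints already detected by backward
    checking on A (theorem 4.4), then project (theorem 4.1) before storing."""
    _, v, C = nogood
    overlap = C & backward_violated
    v_reduced = max(0, v - sum(phi[c] for c in overlap))   # v (-) sum of phi(c)
    C_reduced = C - overlap
    keep = {x for c in C_reduced for x in scope_of[c]}
    stored.append(({x: val for x, val in A.items() if x in keep}, v_reduced, C_reduced))

def improved_evaluate(A, backward_nogood, phi, stored):
    """Add the backward-checking nogood V to every applicable stored nogood
    (theorem 4.5) and keep the result with the highest valuation."""
    best = backward_nogood
    _, v0, C0 = backward_nogood
    for B, v, C in stored:
        if all(A.get(x) == val for x, val in B.items()):    # B is included in A
            v_sum = max(0, v0 + v - sum(phi[c] for c in C0 & C))
            if v_sum > best[1]:
                best = (dict(A), v_sum, C0 | C)
    return best
```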

The number of stored nogoods is still exponential. As we do not know how to select the interesting nogoods, nor how to avoid redundancies in the set of stored nogoods, we apply the strategy used in classical Nogood Recording: a nogood is not stored if its arity (the cardinality of its assignment) exceeds a fixed constant.

6 Experiments.

6.1 Results.

Experiments have been carried out with several bounds on the arity of the recorded nogoods: 0, 2, 4 and 6 (corresponding to NR 0, NR 2, ...). A comparison is made with the standard branch and bound algorithm (B & B). The algorithm uses the enhancement of conflict directed backjumping, because the nogoods built contain all the information needed to know where to backtrack. Although unnecessary, the algorithms keep a static order on the variables (least domains first) and on their values. We measure their capability to record information for a dynamic use by a second run that keeps the recorded nogoods but does not store any new ones. Results show the number of nodes visited and the run time needed to reach an optimal solution and to prove its optimality; this produces relevant information about the search effort and the overhead of the algorithm. The number n of stored nogoods is also given.

Famous problems

We first present results on standard problems [6] (Zebra, SEND+MORE=MONEY) considered as Partial CSPs: the aim is to minimize the number of unsatisfied constraints. When the time limit of 1800 s is exceeded, a sign # is used, followed by the distance between the best valuation obtained and the optimum (instead of the CPU time). In this case, the node counts correspond to the number of nodes necessary to reach the best valuation found.

Zebra
  algorithm   static time   static nodes   n        dynamic time   dynamic nodes
  B & B       62 s
  NR 2        14 s
  NR 4        24 s
  NR 6        46 s

Send
  algorithm   static time   static nodes   n        dynamic time   dynamic nodes
  B & B       318 s
  NR 2        576 s
  NR 4        # 1                          > 1000   # 1

Random problems

The following experiments use random binary Partial CSPs, with 20 variables and a random number of possible values between 2 and 8 for each variable. The connectivity of the constraint graph and the tightness of the constraints (ratio between the number of forbidden pairs of values and the number of possible pairs) are set at 40%. A logarithmic mean (due to the exponential behavior of CSP algorithms) over 90 samples is computed.

Random VCSPs
  algorithm   static time   static nodes   n        dynamic time   dynamic nodes
  B & B       15 s
  NR 2        24 s
  NR 4        25 s
  NR 6        47 s

Real problems

We now present results on the daily management problem of an earth observation satellite [1, 6]: given a set of photographs of various importance (expressed by a weight) which can be taken the next day, and given material constraints (non-overlapping between two successive photographs on the same instrument, and data-flow limitation), the aim is to select a feasible subset of photographs which maximizes the sum of their weights. These problems can be cast as penalty VCSPs: a variable is associated with each photograph; its domain contains one value for each means of achieving the photograph and one value expressing that the photograph is not selected. Then, a unary constraint is set to forbid the rejection value, with a valuation equal to the weight of the photograph. The material constraints are translated into binary and ternary imperative constraints (valuation equal to $\top$). We ran the branch and bound algorithm and our nogood recording (bound on the arity of the nogoods equal to 0) on 5 problems of 100 variables.
The following table shows only the time needed to reach an optimal solution and to prove its optimality, and the average number of nogoods produced. When the time limit of 1800 s is exceeded, a sign # is used, followed by the distance between the best valuation obtained and the optimum.

Satellite scheduling problems
  algorithm   problem 1   problem 2   problem 3   problem 4   problem 5   n
  B & B       # 2         # 4         # 3         # 0         # 2
  NR 0        5.4 s       3.1 s       15.2 s      1.7 s       4.4 s       57

6.2 Discussion

The results on random VCSPs show that nogood recording is penalised by an important overhead: even when producing very few nogoods, the search time is multiplied by 2. The second part of the overhead is directly related to the maximum arity of the stored nogoods: as the latter grows, the number of stored nogoods increases, and so does the time necessary to perform the additions and the comparisons in the function Evaluate. The reduction of the search space can be significant, particularly on dynamic runs, and also increases with the number of stored nogoods, but not always enough to save time. It appears from these experiments alone that the selection of relevant nogoods (or the efficient use of a large number of nogoods) can be a very important issue.

However, on non-random problems, the nogood recording algorithm seems able to find and store interesting nogoods. By comparison with the random experiments, the stored nogoods are much more useful, with a comparable number of nogoods memorized. Nogood recording matches the satellite scheduling problems exceptionally well, probably because the structure of their constraint graph (chain-like) offers very interesting nogoods with a small arity. The Zebra problem contains a set of nogoods of small arity which corresponds to an interesting compromise between their quantity and their usefulness. On the contrary, for the Cross problem, when the number of nogoods is large enough to imply a significant pruning, it also destroys the CPU time efficiency.

It follows from these experiments that nogood recording efficiently increases the lower bound of a branch and bound, particularly for dynamic runs. However, with a time criterion, it can be very inefficient if the nogood selection does not match the problem (or when there are no useful nogoods).

6.3 Conclusion

We have shown the feasibility of Nogood Recording in the VCSP scope. The algorithms we have proposed lead to interesting improvements for static or dynamic Partial CSP solving.
They represent only a few possibilities among the various ways of using valued nogoods, as they give only partial answers to the questions of how to extract the best lower bound from a set of nogoods and how to select the nogoods that should be recorded within a reasonable time overhead and memory space use. The concept of valued nogoods paves the way for extensions of all the classical enhancements based on the adjunction of deduced constraints (Path-consistency, Dynamic Backtracking, etc.).

References

[1] J. Agnese, N. Bataille, E. Bensana, D. Blumstein, and G. Verfaillie. Exact and approximate methods for the daily management of an earth observation satellite. In Based Systems for Space.
[2] R. Dechter. Enhancement schemes for constraint processing: Backjumping, learning and cutset decomposition. Artificial Intelligence, 41(3), 1990.
[3] H. Fargier, T. Schiex, and G. Verfaillie. Valued constraint satisfaction problems. In Proceedings of IJCAI-95, 1995.
[4] E. Freuder and R. Wallace. Partial constraint satisfaction. Artificial Intelligence, 58:21-70, 1992.
[5] R. Haralick and G. Elliott. Increasing tree search efficiency for constraint satisfaction problems. Artificial Intelligence, 14(3), 1980.
[6] ftp://ftp.cert.fr/pub/ftp/pub/lemaitre/lvcsp/pbs (FTP site).
[7] P. Prosser. Hybrid algorithms for the constraint satisfaction problem. Computational Intelligence, 9(3), 1993.
[8] T. Schiex and G. Verfaillie. Nogood recording for static and dynamic constraint satisfaction problems. International Journal on Artificial Intelligence Tools, 3(2), 1994.
[9] G. Verfaillie and T. Schiex. Maintien de solution dans les problèmes dynamiques de satisfaction de contraintes : bilan de quelques approches. Revue d'Intelligence Artificielle, 1995.
[10] R. Wallace. Enhancements of branch and bound methods for the maximal constraint satisfaction problem. In CP-95 Workshop on Over-Constrained Systems, Cassis, France, September 1995.


More information

Decomposable Constraints

Decomposable Constraints Decomposable Constraints Ian Gent 1, Kostas Stergiou 2, and Toby Walsh 3 1 University of St Andrews, St Andrews, Scotland. ipg@dcs.st-and.ac.uk 2 University of Strathclyde, Glasgow, Scotland. ks@cs.strath.ac.uk

More information

Computational Complexity of Multi-way, Dataflow Constraint Problems

Computational Complexity of Multi-way, Dataflow Constraint Problems Computational Complexity of Multi-way, Dataflow Constraint Problems Gilles Trombettoni and Bertrand Neveu Projet Contraintes, CERMICS/INRIA, 2004 route des lucioles, 06902 Sophia-Antipolis Cedex, B.P.

More information

Constraint Programming

Constraint Programming Volume title 1 The editors c 2006 Elsevier All rights reserved Chapter 1 Constraint Programming Francesca Rossi, Peter van Beek, Toby Walsh 1.1 Introduction Constraint programming is a powerful paradigm

More information

Constraint Solving by Composition

Constraint Solving by Composition Constraint Solving by Composition Student: Zhijun Zhang Supervisor: Susan L. Epstein The Graduate Center of the City University of New York, Computer Science Department 365 Fifth Avenue, New York, NY 10016-4309,

More information

Dynamic Ordering for Asynchronous Backtracking on DisCSPs

Dynamic Ordering for Asynchronous Backtracking on DisCSPs Dynamic Ordering for Asynchronous Backtracking on DisCSPs Roie Zivan and Amnon Meisels, Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, 84-105, Israel No Institute Given

More information

Lecture 18. Questions? Monday, February 20 CS 430 Artificial Intelligence - Lecture 18 1

Lecture 18. Questions? Monday, February 20 CS 430 Artificial Intelligence - Lecture 18 1 Lecture 18 Questions? Monday, February 20 CS 430 Artificial Intelligence - Lecture 18 1 Outline Chapter 6 - Constraint Satisfaction Problems Path Consistency & Global Constraints Sudoku Example Backtracking

More information

Last Conflict based Reasoning

Last Conflict based Reasoning Last Conflict based Reasoning Christophe Lecoutre and Lakhdar Sais and Sébastien Tabary and Vincent Vidal 1 Abstract. In this paper, we propose an approach to guide search to sources of conflicts. The

More information

Constraint-Based Scheduling: An Introduction for Newcomers

Constraint-Based Scheduling: An Introduction for Newcomers Constraint-Based Scheduling: An Introduction for Newcomers Roman Barták * Charles University in Prague, Faculty of Mathematics and Physics Malostranské námestí 2/25, 118 00, Praha 1, Czech Republic bartak@kti.mff.cuni.cz

More information

Chronological Backtracking Conflict Directed Backjumping Dynamic Backtracking Branching Strategies Branching Heuristics Heavy Tail Behavior

Chronological Backtracking Conflict Directed Backjumping Dynamic Backtracking Branching Strategies Branching Heuristics Heavy Tail Behavior PART III: Search Outline Depth-first Search Chronological Backtracking Conflict Directed Backjumping Dynamic Backtracking Branching Strategies Branching Heuristics Heavy Tail Behavior Best-First Search

More information

Maximum Betweenness Centrality: Approximability and Tractable Cases

Maximum Betweenness Centrality: Approximability and Tractable Cases Maximum Betweenness Centrality: Approximability and Tractable Cases Martin Fink and Joachim Spoerhase Chair of Computer Science I University of Würzburg {martin.a.fink, joachim.spoerhase}@uni-wuerzburg.de

More information

A Re-examination of Limited Discrepancy Search

A Re-examination of Limited Discrepancy Search A Re-examination of Limited Discrepancy Search W. Ken Jackson, Morten Irgens, and William S. Havens Intelligent Systems Lab, Centre for Systems Science Simon Fraser University Burnaby, B.C., CANADA V5A

More information

A more efficient algorithm for perfect sorting by reversals

A more efficient algorithm for perfect sorting by reversals A more efficient algorithm for perfect sorting by reversals Sèverine Bérard 1,2, Cedric Chauve 3,4, and Christophe Paul 5 1 Département de Mathématiques et d Informatique Appliquée, INRA, Toulouse, France.

More information

Constraint Satisfaction. AI Slides (5e) c Lin

Constraint Satisfaction. AI Slides (5e) c Lin Constraint Satisfaction 4 AI Slides (5e) c Lin Zuoquan@PKU 2003-2018 4 1 4 Constraint Satisfaction 4.1 Constraint satisfaction problems 4.2 Backtracking search 4.3 Constraint propagation 4.4 Local search

More information

Decision Problems. Observation: Many polynomial algorithms. Questions: Can we solve all problems in polynomial time? Answer: No, absolutely not.

Decision Problems. Observation: Many polynomial algorithms. Questions: Can we solve all problems in polynomial time? Answer: No, absolutely not. Decision Problems Observation: Many polynomial algorithms. Questions: Can we solve all problems in polynomial time? Answer: No, absolutely not. Definition: The class of problems that can be solved by polynomial-time

More information

Multi-Objective Propagation in Constraint Programming

Multi-Objective Propagation in Constraint Programming Multi-Objective Propagation in Constraint Programming Emma Rollon and Javier Larrosa Abstract. Bounding constraints are used to bound the tolerance of solutions under certain undesirable features. tandard

More information