Exploring A Two-Solver Architecture for Clause Learning CSP Solvers

by

Ozan Erdem

A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy
Graduate Department of Computer Science
University of Toronto

© Copyright 2017 by Ozan Erdem

Abstract

Exploring A Two-Solver Architecture for Clause Learning CSP Solvers

Ozan Erdem
Doctor of Philosophy
Graduate Department of Computer Science
University of Toronto
2017

Many real world problems can be encoded as Constraint Satisfaction Problems (CSPs). Constraint satisfaction problems contain variables over finite domains, as well as constraints over these variables. There exist many solvers that can solve these problems efficiently. Once these problems are solved, their solutions correspond to solutions of the real world problem the CSP encodes. Similarly, many problems can be encoded as instances of the Satisfiability Problem (SAT). In SAT all the variables are binary propositions, and the only kind of constraint allowed is a disjunction of literals (variables or their negations). SAT and CSP are closely related, and both are NP-complete. These two approaches have their own advantages and disadvantages. SAT solvers perform clause learning, which prevents them from revisiting similar subtrees in the search tree. The advantage of CSP solvers is that CSP instances contain higher level constraints, which allows better reasoning. Recently, methods combining the advantages of SAT and CSP solvers have emerged. In such methods, solvers maintain a propositional representation of the CSP problem, and perform clause learning over this representation. Such solvers are called clause learning CSP solvers. However, the question of how to handle constraints in such solvers is unresolved. For each constraint, a solver must choose whether to encode the constraint entirely in SAT or use a special purpose propagator, and this choice also determines the propagation strength and efficiency. Different choices perform better in different scenarios, hence hybrid methods which can be adaptive are promising. In this thesis, we propose an alternative architecture for clause learning CSP solvers, which incorporates two solvers of different strength. One of the solvers directs the search using fast and weak propagators. Once it gets into a dead end, it produces a conflict clause, and asks the other solver to enhance it using stronger but slower propagators. In this way, the slower propagators are run only when they are likely to be needed, instead of at each node of the search tree. We instantiate this architecture in different ways, and perform experiments to demonstrate the trade-offs between these instantiations.

Contents

1 Introduction
  1.1 Contributions

2 Background
  2.1 Constraint Satisfaction Problems
  2.2 Satisfiability Problem
    Watched literals
    Assumption mechanism
  2.3 Clause learning CSP solvers
    Historical background
    Getting reasons from GAC propagators
    Getting reasons from forward checking
    Comparison of reason clauses
  2.4 Global constraints

3 The New Solver Architecture
  3.1 Introduction
  3.2 Overview of the new architecture
  3.3 Related Work
    3.3.1 Lazy Clause Generation
    3.3.2 Satisfiability Modulo Theories
    3.3.3 Logic-Based Benders Decomposition
    3.3.4 Probe backtrack search
    3.3.5 Stubbornness
  3.4 MiniRed
    Non-intrusive conflict minimization
    Clause database
    Clauses learnt by auxiliary solver
    VSIDS scores
    Unit clauses
    Building the assumptions
    Skipping calls to auxiliary solver
    Reducing the trail

4 Instantiating the Main Solver as a Forward Checker
  4.1 Introduction
  4.2 Design choices
    GAC reason clauses generated in the auxiliary solver
    Setting the VSIDS scores
    Analysis of the reduced conflict
    Skipping calls to the auxiliary solver
    Number of learnt clauses per conflict
  4.3 Experiments
    Benchmarks
    Methodology
    Results
  4.4 Discussion
    Propagation strength of forward checking
    Reasons from forward checking
    Efficiency of forward checking
  4.5 Conclusions

5 Main Solver with alldifferent Decomposition
  5.1 Introduction
  5.2 Propagating the all-different constraint
    Decomposing the all-different constraint
    Implementation
    Theoretical comparison
  5.3 Generating synthetic problems
    The basic scheme
    Extending the generated constraints
    Details of the generation
  5.4 Experiments
    MiniZinc Benchmarks
  5.5 Discussion
    Testing the VSIDS scoring
    Testing the blocked branches
    Testing the learnt clauses
  5.6 Conclusions

6 Main solver with cumulative Decomposition
  6.1 Introduction
  6.2 Formal description of the cumulative constraint
  6.3 Decomposing cumulative
  6.4 Energetic Reasoning
  6.5 Getting reasons from energetic reasoning
  6.6 Experiments
    Scheduling Problems (RCPSP)
    Stacking Number
    Comparison with minicsp using the energy based propagator
  6.7 Conclusions

7 Conclusion
  7.1 Future Work

Bibliography

Chapter 1

Introduction

Constraint satisfaction is a powerful paradigm for solving combinatorial problems. To make use of this paradigm, one only needs to model problems using constraints over finite domain variables. Such a model is called a Constraint Satisfaction Problem (CSP), and a CSP solver can be used to compute a solution to this problem. For example, let P be a CSP over the variables X1, X2 and X3 with the domain {1, 2, 3, 4}. Assuming that P contains the constraints X1 + X2 < 4 and X1 · X2 = X3, a possible solution to P is {X1 ← 1, X2 ← 2, X3 ← 2}. However, the assignment {X1 ← 2, X2 ← 2, X3 ← 2} is not a solution since it violates the first constraint. Another similar paradigm is Satisfiability (SAT), where the problems are modelled using only logical constraints over propositional variables. For example, the formula (x1 ∨ x2) ∧ (¬x1 ∨ ¬x2) is a SAT instance over the two propositional variables x1 and x2, where the assignment of x1 to true and the assignment of x2 to false is a solution to this instance. The SAT and CSP paradigms can be used to solve NP-complete problems, as SAT is the canonical NP-complete problem and CSP is a generalization of SAT. These two paradigms have their own advantages and disadvantages, which we describe below. The most compelling advantage of SAT over CSP is a technique called clause learning during search. The main idea behind clause learning is to learn a new clause that prevents similar mistakes in the future by analyzing dead-ends. Clause learning has theoretical benefits, and has demonstrated a huge success in practice [54, 55, 80]. Traditionally, CSP solvers have also used learning, called nogood learning [24, 31], however it is more restricted than clause learning [46, 45].

For this reason, nogood learning was not widely used in CSP solvers, since it did not yield speedups that justify the bookkeeping. The advantage of CSP solvers over SAT solvers is the expressivity of the constraints. For instance, a CSP can contain the well-known alldifferent(x1, x2, ..., xn) constraint, which denotes that the variables x1, x2, ..., xn should take different values [49]. It is possible to model this constraint in SAT by first encoding the CSP variables and their domains as propositional variables, and using propositional clauses denoting xi ≠ xj for all distinct i, j ∈ [1 .. n]. However, reasoning with this decomposition is less effective than using the well known propagators for the alldifferent constraint, such as the generalized arc consistency (GAC) propagator [64]. It was shown in [15] that there is no SAT encoding of equivalent power to GAC propagators for alldifferent which is polynomial in size. Traditional CSP solvers combine backtracking search with doing inference at each node. Different levels of inference correspond to different levels of consistency in the solvers, and traditional CSP solvers maintain a high order of consistency, usually GAC, at each node. However, especially for global constraints, maintaining this consistency comes at a high runtime cost, which is to the disadvantage of the CSP solvers. For instance, the alldifferent constraint can be made GAC in O(n^2.5) time [64], and it is an NP-hard problem to make the cumulative constraint GAC [57]. One can imagine many scenarios where enforcing such a high order of consistency performs poorly. For instance, expensive propagations might result in pruned values that do not play any role in the conflict found along the current path. These pruned values will be restored on backtrack and thus no information is obtained from them. Lately, state-of-the-art CSP solvers have adopted many features of SAT solvers, most importantly clause learning [46, 45]. In order to use clause learning, two simple changes are needed in a CSP solver. Firstly, the domains of the CSP variables need to be encoded as propositional variables. There are various encodings which can achieve this [20, 6, 78]. Secondly, each propagation of the solver must be explained by a reason clause in order for conflict analysis to work. In a SAT solver, clauses in the input naturally serve as these reasons, however CSP solvers need additional mechanisms to generate them. One way to obtain these reasons in a CSP solver is to encode the constraint in SAT [8, 72], and another way is to craft propagators which produce a reason clause for each inference [58, 45].

After making these enhancements to the CSP solver, it gains additional theoretical power, and performs better than a standard CSP solver in practice [58, 45]. However, it is still not clear how to achieve the best of both worlds. One of the most important questions researchers deal with is how the constraints should be handled, especially constraints requiring heavy propagation such as global constraints. In clause learning CSP solvers, every constraint can be handled in various ways: (1) it can be decomposed into SAT variables and clauses, if there exists such an encoding, (2) a propagator which supplies clausal explanations can be used, or (3) hybrid methods can be used, such as a lazy decomposition [2], or decomposing some of the constraints on-the-fly [1]. In practice, none of these methods dominates the others, as there are various trade-offs. Clause learning solvers are effective in practice, since they learn information during search that can be used to make the remaining search more efficient. Even though SAT solvers enforce a very weak form of consistency via unit propagation, the overall search is very effective. This indicates that good clause learning can be useful in controlling the search. Enforcing higher degrees of consistency in a SAT solver, as CSP solvers do, can increase the effectiveness of clause learning, as it enables deriving shorter clauses at the expense of more computation at each node. In this thesis, we investigate whether using higher degrees of propagation in a clause learning CSP solver can be used only to derive better clauses, or whether there are other benefits of higher degree propagation during search. In order to achieve that, we suggest an alternative approach for handling constraints which require heavy duty propagation. We propose an architecture where we use a solver which does cheap propagation throughout search. Once the solver finds a conflict it computes a weak conflict clause, which is due to the cheap propagators that can only produce weak reasons. It is then possible to make use of another solver with strong propagators, which does not do search at all but works only to improve the given conflict clause. In general strong propagators yield shorter reason clauses for propagations, which result in better learnt clauses. This way, the learnt clause in the solver can be improved with the additional help from the stronger solver. This method is likely to demonstrate better performance on some classes of problems. For instance, in a satisfiable problem instance with many redundant constraints, enforcing GAC on only a subset of the constraints would suffice to find a solution.

However, this would require identifying this subset, and in order to do that we can utilize a weaker solver that explores the search space efficiently. The stronger solver would reason about these constraints using highly specialized and more expensive methods, only after getting a conflict. Similarly, in an unsatisfiable problem instance it is often possible to prove unsatisfiability using (and enforcing GAC on) only a subset of the constraints. If some of the constraints in the problem are not in this subset, we would like to avoid propagating these constraints using the expensive algorithms; propagating them only after they prove themselves relevant is more desirable. The architecture we propose in this thesis is aimed at exploiting these ideas, and we investigate the behaviour of the architecture by instantiating it in different ways.

1.1 Contributions

In Chapter 2 we present the formal background that will be used in the rest of the thesis. In Chapter 3 we introduce a novel architecture for solving CSP problems, which is based on two solvers. The main solver of the architecture performs weak propagation while directing the search, and the auxiliary solver performs stronger propagation to enhance clause learning. We also discuss the technical implementation in detail. The main solver and the auxiliary solver in this architecture can be instantiated in many ways. In general, the main solver performs weaker propagation than the auxiliary solver. In Chapter 4, we instantiate this architecture with a solver that performs forward checking on all constraints, and perform experiments using this instantiation. In Chapter 5 we follow a more refined approach. In Chapter 4 all the constraints were propagated using forward checking, even though GAC was cheap for many types of constraints. For this reason we picked the alldifferent constraint, which is one of the constraints for which GAC propagation is expensive, as the only constraint propagated differently in the two solvers. We also prove that this particular instantiation of our architecture does not lose any theoretical power over a standard clause learning solver. Most CSP solvers propagate alldifferent using GAC propagators at each node, even though they are expensive propagators, since the propagations pay off in the end. As an alternative, in Chapter 6 we pick a propagator which is not widely used in practice, namely the

energy propagator for the cumulative constraint, which runs in O(n^3) time [11]. Our contributions in this chapter include introducing methods for obtaining reasons from the cumulative propagator to be used in a clause learning CSP solver. We then demonstrate empirically that our architecture works well in some domains. In Chapter 7, we summarize our contributions, and suggest some future directions for our research.

Chapter 2

Background

2.1 Constraint Satisfaction Problems

A constraint satisfaction problem (CSP) is a triple P = ⟨V, D, C⟩, where V is a set of variables, D is a domain function which maps each variable V ∈ V to a finite domain D(V), and C is a set of constraints which specify the values allowed for the variables. Given a problem P we denote its variables with V(P) and its constraints with C(P). When the problem is clear from the context, we refer to the variables as just V. When there is an ordering for the elements of D(V), we denote the lower bound of D(V) as lb(V) and the upper bound as ub(V). An assignment is a variable-value pair (V, d), also written as V ← d, where d ∈ D(V). An assignment set is a set of assignments A = {V1 ← d1, V2 ← d2, ..., Vn ← dn} such that every variable in A is assigned to only one value. If the distinction between an assignment and an assignment set is not important, we refer to assignment sets simply as assignments. Each constraint C involves a set of variables called scope(C), and maps assignment sets that include all of the variables in scope(C) to true or false. Whenever an assignment set is mapped to true by a constraint we say that it satisfies the constraint. An assignment set is a solution to a CSP if it contains all the variables in the problem and it satisfies all the constraints. A satisfying tuple of a constraint C is an assignment set containing an assignment to every variable in scope(C) which satisfies C. A satisfying tuple of C which contains the assignment (V, d) is called a support for (V, d) on C. Also, if a tuple that satisfies C and contains (V, d) assigns every other variable V' a value d' with d' ∈ [min(D(V')) .. max(D(V'))], the tuple is called an interval support.

A constraint is generalized arc-consistent (GAC) iff for every V ∈ scope(C) and d ∈ D(V) we have that (V, d) has a support on C. In other words (V, d) can be extended to an assignment set that satisfies the constraint. A CSP is GAC iff every constraint in the problem is GAC [66]. Furthermore, a constraint is range consistent iff for every V ∈ scope(C) and d ∈ D(V) there is an interval support on C.

Algorithm 1 Backtracking algorithm for solving CSPs
1: function BT(level)
2:   if all variables are assigned then return SUCCESS
3:   end if
4:   V ← pick-unassigned-variable()
5:   Assigned[V] ← true
6:   for d ∈ D(V) do
7:     D(V) ← {d}
8:     if constraint-propagation({V}) then
9:       BT(level+1)
10:    end if
11:    restore-values()
12:  end for
13:  Assigned[V] ← false
14:  return
15: end function

Solvers typically use backtracking search to solve CSPs. We present a backtracking algorithm in Algorithm 1, and the details are as follows.

Lines 1–5. The current level of the search tree is given as input to the algorithm. At each level, either all variables are assigned to a value, in which case the current assignment set is a solution to the problem, or some variables are not yet assigned and a variable is picked for assignment. If we want to find only a single solution, the search can be terminated as soon as SUCCESS is returned on Line 2.

Lines 6–12. The picked variable V is tentatively assigned the possible values in D(V), and some form of propagation is performed, which ensures some level of consistency. Constraint propagation either modifies the domains of some of the variables, or reports a conflict. If there is no conflict, the search continues from the next level.

Lines 13–14. In the case where all possible values in the domain of V have been tried, and none of them led to a solution, V gets unassigned and the algorithm returns.
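To make the control flow of Algorithm 1 concrete, the following is a minimal Python sketch of the same backtracking loop. The data representation (domains as sets of values, constraints handled entirely inside a pluggable propagate callback) is an assumption made for illustration only, not the implementation used in this thesis.

```python
def backtrack(domains, constraints, assignment, propagate):
    """Minimal backtracking search in the style of Algorithm 1.

    domains:     dict mapping each variable to a set of remaining values
    constraints: problem constraints, passed through to `propagate`
    propagate:   callable(domains, constraints) -> False on conflict
    """
    if len(assignment) == len(domains):              # all variables assigned
        return dict(assignment)                      # SUCCESS
    var = next(v for v in domains if v not in assignment)   # pick a variable
    for value in sorted(domains[var]):
        saved = {v: set(d) for v, d in domains.items()}     # copy for restore
        domains[var] = {value}
        assignment[var] = value
        if propagate(domains, constraints):          # no conflict detected
            result = backtrack(domains, constraints, assignment, propagate)
            if result is not None:
                return result
        domains.clear()
        domains.update(saved)                        # restore-values()
        del assignment[var]
    return None                                      # all values failed
```

Plugging different propagation routines into the propagate slot yields different consistency levels; the GAC and forward-checking sketches later in this chapter are examples of such routines.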

Algorithm 2 Algorithm to enforce GAC
1: function enforce-gac(Vars)
2:   constraintQueue ← update-cons-queue(∅, Vars)
3:   while constraintQueue ≠ ∅ do
4:     C ← constraintQueue.pop()
5:     prunedVariables ← enforce-gac-on-constraint(C)
6:     if prunedVariables == FAILURE then return false
7:     else
8:       constraintQueue ← update-cons-queue(constraintQueue, prunedVariables)
9:     end if
10:  end while
11:  return true
12: end function
13: function enforce-gac-on-constraint(C)
14:   prunedVariables ← ∅
15:   for V ∈ scope(C) do
16:     for d ∈ D(V) do
17:       if (V, d) cannot be extended to an assignment set which satisfies C then
18:         D(V) ← D(V) \ {d}
19:         if D(V) == ∅ then return FAILURE
20:         end if
21:         prunedVariables ← prunedVariables ∪ {V}
22:       end if
23:     end for
24:   end for
25:   return prunedVariables
26: end function
27: function update-cons-queue(Q, prunedVariables)
28:   for V ∈ prunedVariables do
29:     for each C' with V ∈ scope(C') do
30:       Q ← Q ∪ {C'}
31:     end for
32:   end for
33:   return Q
34: end function

The constraint-propagation procedure in Algorithm 1 can be instantiated in several ways. One common instantiation enforces GAC on the problem at each node, which is also referred to as the maintaining GAC algorithm [52]. We present the GAC enforcement algorithm in Algorithm 2. Some details of this algorithm are as follows.

Line 2. The algorithm maintains a constraint queue during this process, and updates it whenever there are new inferences using the update-cons-queue function.

Lines 3–9. The constraint queue is processed one constraint at a time. For each constraint, GAC is enforced on it using the enforce-gac-on-constraint function. After GAC is enforced on the constraint, if there was no conflict the constraint queue is updated by a call to update-cons-queue.

Lines 13–26. To enforce GAC on a constraint, each possible variable-value pair (V, d) in the scope of the constraint is checked for whether it can be extended to a satisfying assignment set. If this is not possible, d is pruned from the domain of V. In practice, it is often sufficient to iterate over only a subset of the variable-value pairs by taking the structure of the specific constraint into consideration.

Line 21. The set of pruned variables is updated by adding each variable whose domain was pruned. These variables are used to update the constraint queue in the update-cons-queue function.

Lines 27–34. In the update-cons-queue function, each variable which had a value pruned is examined, and the constraints which have that variable in their scope are added to the constraint queue.
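The generic pruning test on Line 17 can be made concrete with an exhaustive check over the remaining domains. The sketch below is a naive Python rendering of enforce-gac-on-constraint; representing each constraint as a (scope, predicate) pair is an assumption made here for readability, and real propagators exploit the structure of the constraint instead of enumerating tuples.

```python
from itertools import product

def enforce_gac_on_constraint(domains, scope, predicate):
    """Prune every value of every variable in `scope` that has no support.

    domains:   dict var -> set of values (mutated in place)
    scope:     tuple of variable names
    predicate: function taking one value per variable in scope order,
               returning True iff the tuple satisfies the constraint
    Returns the set of pruned variables, or None on a domain wipe-out.
    """
    pruned_vars = set()
    for var in scope:
        for value in sorted(domains[var]):
            candidates = [domains[v] if v != var else {value} for v in scope]
            # (var, value) has a support iff some tuple through it satisfies C
            if not any(predicate(*tup) for tup in product(*candidates)):
                domains[var].discard(value)
                if not domains[var]:
                    return None              # conflict: empty domain
                pruned_vars.add(var)
    return pruned_vars
```

Wrapped in the constraint queue of Algorithm 2, this gives a (very slow) maintaining-GAC propagator that is only suitable for small examples.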

Another way of instantiating constraint-propagation is using forward checking (FC), which is weaker than enforcing GAC. FC was initially developed for binary constraint networks [36], however it can easily be generalized to n-ary networks as described in [74]. FC has been used in a clause learning CSP solver in [45], together with methods for generating reasons from its propagations. In the general version, forward checking picks variables that are left as the only unassigned variable in the scope of a constraint, tries out all the values for that variable, and repeats this until a fixpoint. We present this procedure formally in Algorithm 3.

Algorithm 3 Forward checking algorithm
1: function ForwardCheck(Vars)
2:   Q ← all pairs (C, Vf) such that C is a constraint with scope(C) ⊆ Vars and Vf is the unique variable in scope(C) with |D(Vf)| > 1
3:   while Q is not empty do
4:     (C, Vf) ← Q.pop()
5:     for each d ∈ D(Vf) do
6:       if Vf ← d violates C then
7:         D(Vf) ← D(Vf) \ {d}
8:         if D(Vf) == ∅ then
9:           return false
10:        end if
11:        if |D(Vf)| == 1 then
12:          for each C' with Vf ∈ scope(C') do
13:            if there is a unique V'f ∈ scope(C') with |D(V'f)| > 1 then
14:              Q.push((C', V'f))
15:            end if
16:          end for
17:        end if
18:      end if
19:    end for
20:  end while
21: return true
22: end function
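A compact Python rendering of Algorithm 3 is sketched below. As before, the representation of constraints as (scope, predicate) pairs is an illustrative assumption, not the data structure used in the solvers discussed later.

```python
def forward_check(domains, constraints):
    """Forward checking in the style of Algorithm 3.

    domains:     dict var -> set of values (mutated in place)
    constraints: list of (scope, predicate) pairs
    Returns False on a domain wipe-out, True otherwise.
    """
    def unique_free_var(scope):
        free = [v for v in scope if len(domains[v]) > 1]
        return free[0] if len(free) == 1 else None

    queue = [(c, unique_free_var(c[0])) for c in constraints
             if unique_free_var(c[0]) is not None]
    while queue:
        (scope, predicate), var = queue.pop()
        for value in sorted(domains[var]):
            # every other variable in the scope is assigned, so test directly
            tup = [value if v == var else next(iter(domains[v])) for v in scope]
            if not predicate(*tup):
                domains[var].discard(value)
                if not domains[var]:
                    return False                      # conflict
                if len(domains[var]) == 1:            # var just became assigned
                    for c2 in constraints:
                        if var in c2[0]:
                            v2 = unique_free_var(c2[0])
                            if v2 is not None:
                                queue.append((c2, v2))
    return True
```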

2.2 Satisfiability Problem

A proposition p is a Boolean variable. A literal is either a proposition p (a positive literal), its negation ¬p (a negative literal), ⊤, or ⊥. The complement of a literal l is denoted ¬l, where the complement is ¬p if l is the positive literal p, is p if l is the negative literal ¬p, is ⊥ if l is ⊤, and is ⊤ if l is ⊥. A clause C = (l1 ∨ l2 ∨ ... ∨ ln) is a disjunction of literals, and we also treat it as a set of literals. A clause with a single literal is called a unit clause. Sometimes it is useful to view a clause as an implication: an implication of the form p0 ∧ p1 ∧ ... ∧ pn → l is equivalent to the clause (¬p0 ∨ ¬p1 ∨ ... ∨ ¬pn ∨ l). A propositional formula is in conjunctive normal form (CNF) if it is a conjunction of clauses. We say that a clause C subsumes another clause C' if C ⊆ C'. In the SAT context, an assignment is a mapping from a variable to either ⊤ or ⊥. Given an assignment ν, we can view it as a mapping from literals as well: for a variable v, let l be a literal over this variable. Then ν(l) = ν(v) if l is positive, and ν(l) is the complement of ν(v) if l is negative. In some cases it is convenient to denote an assignment as a literal, i.e., for a variable v, the literal v denotes that ν(v) = ⊤ and ¬v denotes that ν(v) = ⊥. Moreover, an assignment set, in the context of SAT, is a set of literals, which we again refer to as assignments when the meaning is clear. An assignment set satisfies a clause if one of its literals appears in the clause. An assignment set is a solution to a CNF if it satisfies each clause of the CNF. The satisfiability problem (SAT) is to determine whether there is a solution to a given CNF [33]. Resolution is an inference rule over two clauses C ∪ {x} and C' ∪ {¬x}, which produces the clause C ∪ C' [65]. Resolution can be used as the only rule of the resolution proof system, which is a complete proof system for unsatisfiable formulas, i.e., given an unsatisfiable formula it is always possible to derive the empty clause using resolution. Modern SAT solvers are based on the conflict directed clause learning (CDCL) algorithm, which is an improvement over the well-known DPLL algorithm [65]. DPLL searches for an assignment to the variables in a depth-first manner, and at each node it performs unit propagation. Unit propagation is a method for simplifying a formula after a unit clause {l} is detected. During unit propagation, all the clauses containing l are removed and ¬l is deleted from all the clauses. In Algorithm 4 we present the unit propagation algorithm. The inputs to the algorithm are a CNF ϕ and an assignment set ν. The algorithm looks for unit clauses {l} in the input theory, and assigns each such literal l to true. The assignment of l shortens the clauses which contain ¬l, and the clauses which contain l are removed from the theory. The shortening of some clauses may yield other unit clauses, and this procedure is repeated until a fixpoint is reached. In Algorithm 5 we present a typical conflict directed clause learning (CDCL) algorithm [16].

Algorithm 4 Unit propagation
1: function UnitPropagation(ϕ, ν)
2:   for each clause C ∈ ϕ do
3:     if C ∩ ν ≠ ∅ then
4:       ϕ ← ϕ \ {C}
5:     else
6:       C ← C \ {l : ¬l ∈ ν}
7:     end if
8:   end for
9:   if ∃ C ∈ ϕ such that |C| = 0 then
10:    return CONFLICT
11:  end if
12:  if ∃ C ∈ ϕ such that |C| = 1 then
13:    for all C ∈ ϕ such that |C| = 1 do
14:      ϕ ← ϕ \ {C}
15:      ν ← ν ∪ C
16:    end for
17:    return UnitPropagation(ϕ, ν)
18:  end if
19: end function

Algorithm 5 Typical CDCL algorithm
1: function CDCL(ϕ, ν)
2:   dl ← 0
3:   while true do
4:     if UnitPropagation(ϕ, ν) == CONFLICT then
5:       if dl == 0 then return UNSAT
6:       else
7:         conflict ← ConflictAnalysis(ϕ, ν)
8:         Backtrack(ϕ, ν, conflict.assertionLevel)
9:         add(conflict)
10:      end if
11:    else
12:      (x, v) ← PickBranchingVariable(ϕ, ν)
13:      if x does not exist then return SAT
14:      end if
15:      dl ← dl + 1
16:      ν ← ν ∪ {(x, v)}
17:    end if
18:  end while
19: end function
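The following is a small Python sketch of the unit propagation loop of Algorithm 4. The literal representation (positive integers for variables, negative integers for their negations) is an assumption made here for brevity.

```python
def unit_propagate(clauses, assignment):
    """Repeated unit propagation over a CNF, in the spirit of Algorithm 4.

    clauses:    list of clauses, each a frozenset of integer literals
                (a positive int is a variable, a negative int its negation)
    assignment: set of literals assumed or decided so far
    Returns the string "CONFLICT", or the extended assignment at fixpoint.
    """
    assignment = set(assignment)
    while True:
        simplified = []
        for clause in clauses:
            if clause & assignment:
                continue                              # clause already satisfied
            reduced = frozenset(l for l in clause if -l not in assignment)
            if not reduced:
                return "CONFLICT"                     # every literal falsified
            simplified.append(reduced)
        units = {next(iter(c)) for c in simplified if len(c) == 1}
        if not units:
            return assignment                         # fixpoint reached
        if any(-l in units for l in units):
            return "CONFLICT"                         # two opposite forced literals
        assignment |= units
        clauses = [c for c in simplified if len(c) > 1]
```

For example, unit_propagate([frozenset({1, 2}), frozenset({-1})], set()) returns {-1, 2}.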

CDCL is an improvement over DPLL. The key difference is that CDCL learns new clauses, which are implied by the input formula. These new clauses enable non-chronological backtracking. They are also added to the formula and used to improve the remaining search. The details of the CDCL algorithm are as follows:

Line 4. The algorithm performs unit propagation.

Lines 7–9. The conflict analysis procedure derives a clause to be learned, the solver backtracks to the assertion level of the newly learnt clause [16] and adds it to the database.

Line 12. The next variable to branch on is selected according to the VSIDS heuristic [55]. This heuristic scores each variable with respect to its occurrences in the learnt clauses (also called conflict clauses), and more recent occurrences are scored higher.

Search continues until either all variables are assigned, in which case the current set of assignments is a solution, or a conflict is detected at decision level 0, in which case the input formula is unsatisfiable.

Conflict analysis in SAT

Behind the scenes CDCL keeps more information to be used in conflict analysis, described as follows. In CDCL there are two ways a variable can be assigned:

Decisions: These are made on Line 16 of Algorithm 5.

Unit propagation: On Line 4 of Algorithm 5 unit propagation can uncover some of the consequences of the literals assigned so far. Whenever all but one of the literals of a clause C are assigned to false, the remaining literal l of the clause is assigned to true. In this case we say clause C forces l.

For each assigned literal l the algorithm keeps track of why it was assigned, called the reason of l, which we denote reason(l). This is either the clause which forced l to be assigned, or the empty reason in the case where l was a decision literal. If the reason of l is not empty, it contains l itself, and a number of other literals that were assigned to false, which led to the forcing of l by unit propagation.

Whenever the literals in a clause are all assigned to false after propagation, the clause is identified as a conflict. For any literal x in the conflict clause whose complement ¬x has a reason, the conflict clause and reason(¬x) can be resolved. The result is a new (conflict) clause that is also falsified and that contains neither x nor ¬x. Literals that have become true are put on a stack called the trail in the order they are assigned. Conflict analysis works on the trail in reverse chronological order by resolving away the literals with their reasons until a stopping condition is met [55, 54]. All that is needed for it to work is for every propagated literal to have an associated reason containing only the propagated literal and the literals made false by previous propagations and decisions (i.e., the literals that forced the propagated literal). The stopping condition widely used in SAT solvers is called the 1UIP condition, which requires that there is only one literal in the conflict clause which was assigned at the current decision level [54]. A clause obtained by conflict analysis using this condition is called a 1UIP clause. 1UIP clauses are special cases of asserting clauses [80]. An asserting clause at level L of a solver is a clause that has only one literal l assigned at level L. For such an asserting clause, we call l the asserting literal. Learning asserting clauses α → l in solvers is useful, since if such a clause is derived, (1) l was an actual implication of α but it was missed, and (2) learning α → l will make the solver catch this implication in the future. 1UIP clauses are only one form of asserting clauses. It is possible to continue resolving away literals after the 1UIP condition is met, and different asserting clauses can be obtained. However, in practice 1UIP clauses have proven to be the most useful type of asserting clauses [80, 16]. Asserting clauses have the property of 1-empowerment [61]. A clause of the form α → l is said to be 1-empowering with respect to a CNF ϕ via l if: (1) ϕ ⊨ (α → l), and (2) unit resolution on ϕ ∧ α cannot derive l. By the definition of 1-empowerment, an asserting clause α → l is 1-empowering via l. Learning 1-empowering clauses is beneficial to the solvers, as each time such a clause is learned unit propagation can discover an implication which was missed before.
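The following Python sketch shows the core of the conflict analysis loop described above. The data layout (integer literals, dictionaries for reasons and levels, decisions absent from the reason map) is an illustrative assumption, not a specific solver's interface.

```python
def analyze_1uip(conflict, trail, reason, level, current_level):
    """Derive a 1UIP clause by resolving trail literals with their reasons.

    conflict:      iterable of falsified literals (the conflicting clause)
    trail:         list of true literals in assignment order
    reason:        dict true_literal -> clause that forced it (decisions absent)
    level:         dict true_literal -> decision level at which it became true
    current_level: the decision level where the conflict occurred
    """
    clause = set(conflict)
    # clause literals falsified at the current level; their complements are the
    # true literals we may still need to resolve away
    pending = {l for l in clause if level[-l] == current_level}
    for lit in reversed(trail):                  # walk the trail backwards
        if len(pending) == 1:
            break                                # 1UIP condition met
        if -lit not in pending or lit not in reason:
            continue                             # not relevant, or a decision
        clause.remove(-lit)                      # resolve on lit
        pending.remove(-lit)
        for l in reason[lit]:
            if l != lit and l not in clause:
                clause.add(l)
                if level[-l] == current_level:
                    pending.add(l)
    return frozenset(clause)
```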

Quality of learnt clauses

The quality of different learnt clauses can sometimes be compared. Given two learnt clauses C ⊆ C', we can say that C is a better conflict clause, since it subsumes C': every path in the search that makes C' false also makes C false. Moreover, literals blocks distance (LBD) is another important metric for the quality of learnt clauses [7]. Given a conflict clause C, the assignment levels of the literals in it partition C into n different blocks, and we say that the LBD of C is n. 1UIP clauses with an LBD of exactly 2 are called glue clauses, and they have been found to be highly useful in solvers [7]. Intuitively, these clauses glue the asserting literal to the other block in the clause to create a single block.

Watched literals

A high performance SAT solver needs to handle unit propagation efficiently. For this reason, modern SAT solvers maintain two watched literals per clause [55], where the watches are picked from literals that are not assigned to false. Every watched literal is associated with an array of clauses it watches, and whenever the literal is assigned to false a new watch has to be found. The following example illustrates how watched literals are assigned.

Example 1. Let C = (x1 ∨ x2 ∨ x3 ∨ x4) be a clause in a solver at its initial state, where none of the variables in the problem are assigned a value yet. At this point, the solver can pick x1 and x2 as the two watched literals for C. However, if at some point in the search x2 gets assigned to false, then it is no longer a valid watch for C. The solver then looks for another literal which is not assigned to false to replace the invalid watch, which can be either x3 or x4. Now assume instead that x1 is assigned to false, and in the next decision level both x2 and x3 are assigned to false as well. The solver will fail to pick two different watches for C at this point, and will force x4 to be true as the result of unit propagation. However, if after assigning x1, x2, x3 the variable x4 is forced to false as well as a consequence of some constraints, then the solver will fail to find any replacement watches. In this case, it identifies this situation as a conflict, as C becomes the empty clause.

This approach is efficient in two ways: (1) assigning any of the unwatched literals occurring in the clause does not require any update, and (2) since backtracking does not invalidate watches, no updates are needed to the clauses when the solver backtracks.
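A minimal sketch of the watched-literal bookkeeping described in Example 1 is given below. The two-watch scheme itself is standard, but the concrete data structures (integer literals, a watch list per literal, the first two positions of each clause holding its watches) are assumptions made for this illustration.

```python
def on_literal_falsified(falsified, clauses, watches, value):
    """Update watches after `falsified` becomes false; return forced literals.

    falsified: the literal that was just assigned to false
    clauses:   dict clause_id -> list of literals, first two are the watches
    watches:   dict literal -> set of clause_ids currently watching it
    value:     dict literal -> True/False/None (None means unassigned)
    Returns (forced, conflict_clause_id); conflict_clause_id is None if none.
    """
    forced = []
    for cid in list(watches[falsified]):
        lits = clauses[cid]
        # keep the falsified watch in position 1
        if lits[0] == falsified:
            lits[0], lits[1] = lits[1], lits[0]
        other = lits[0]
        # try to find a replacement watch that is not assigned to false
        for i in range(2, len(lits)):
            if value.get(lits[i]) is not False:
                lits[1], lits[i] = lits[i], lits[1]
                watches[falsified].discard(cid)
                watches[lits[1]].add(cid)
                break
        else:
            # no replacement: the clause is unit (forces `other`) or a conflict
            if value.get(other) is False:
                return forced, cid          # conflict: all literals false
            if value.get(other) is None:
                forced.append(other)        # unit propagation forces `other`
    return forced, None
```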

Assumption mechanism

It is possible to query some solvers, such as minisat [26], to find an inconsistent subset of a sequence of literals L. This mechanism works as follows. The solver considers the next literal l in L (L has a fixed but arbitrary order); only after all literals in L have been assigned will the solver pick a variable using its branching heuristic. If l is unassigned, it is assigned true as the next decision and propagated. If l is already assigned true, it is skipped over. If l is assigned to false, then we know that the previous literals of L, which have been made true, have forced l to be false. This means that the set of assumptions L is inconsistent (they cannot all be made true). Conflict analysis is then employed to determine a smaller subset of the currently true literals of L that suffices to force ¬l. This works as follows. Since l was already assigned to false, ¬l was forced by a clause. Every literal in the reason clause of ¬l is either an assumption or was forced by a clause. We can start with reason(¬l) and keep resolving away the literals that were forced, until a clause C containing only (negations of) assumptions is obtained. Then the assumptions falsifying C, together with l, form an inconsistent subset of L, and this subset is returned.
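A schematic Python version of this assumption loop is sketched below. The helper callables assign_and_propagate and reason_of are assumed interfaces standing in for the solver's trail and reason bookkeeping; they are not an actual solver API.

```python
def find_inconsistent_subset(assumptions, assign_and_propagate, reason_of):
    """Find an inconsistent subset of `assumptions`, in the style described above.

    assumptions:          list of literals to be tried, in a fixed order
    assign_and_propagate: callable(lit) -> True if lit is (or can be made) true
                          after deciding it and propagating, False if lit is
                          already forced false
    reason_of:            callable(true_lit) -> reason clause, or None for
                          decisions (i.e. for the assumptions themselves)
    Returns None if all assumptions can hold together, otherwise a subset of
    the assumptions that cannot all be made true.
    """
    for lit in assumptions:
        if assign_and_propagate(lit):
            continue                      # lit is (now) true; move on
        # lit is false: earlier assumptions forced its complement
        core, seen = set(), set()
        frontier = [-lit]                 # start from the literal forced true
        while frontier:
            t = frontier.pop()
            if t in seen:
                continue
            seen.add(t)
            r = reason_of(t)
            if r is None:                 # t is itself an assumption (a decision)
                core.add(t)
            else:
                frontier.extend(-x for x in r if x != t)
        return list(core) + [lit]         # inconsistent subset of the assumptions
    return None
```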

2.3 Clause learning CSP solvers

Historical background

Traditionally, CSP solvers made use of many techniques in combination with backtracking algorithms. One popular technique is intelligent backtracking [62], where the solver can jump back over irrelevant assignments after a conflict is found. In order to achieve that, a solver can record the reason of each pruning, and these reasons are called nogoods. A nogood is a set of assignments {V1 ← d1, V2 ← d2, ..., Vn ← dn} that cannot be extended to a solution, which is similar to reason clauses in SAT. After a domain becomes empty in Algorithm 1, the nogoods that led to the empty domain can be combined to derive other nogoods, and different nogoods determine different backtrack levels. For instance, in a traditional CSP solver let the domain D(X) = {1, 2, 3} become empty at some point in the search, and let the reason nogoods be the following:

{Y1 ← d1, X ← 1}
{Y2 ← d2, X ← 2}
{Y3 ← d3, X ← 3}

These nogoods indicate that, for i ∈ {1, 2, 3}, assigning Yi to di and X to the value i at the same time cannot be extended to a solution. Hence, after D(X) becomes empty, it is possible to combine these nogoods to get another nogood {Y1 ← d1, Y2 ← d2, Y3 ← d3} over the Yi variables. This way of deriving new nogoods is similar in nature to using 1UIP clauses to determine the backtrack level in SAT, however a few differences exist. The main difference is that these nogoods are not as powerful, and as a result they are not recorded and are only used to determine a backjump level. We explain this in more detail in the following paragraphs.
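As a small illustration of how such nogoods combine, the following Python sketch derives the new nogood over the Yi variables from the three reason nogoods above; representing nogoods as sets of (variable, value) pairs is an assumption made only for readability.

```python
def combine_nogoods(var, domain, reasons):
    """Combine the reason nogoods of an emptied domain into a new nogood.

    var:     the variable whose domain became empty
    domain:  the original domain of var, e.g. {1, 2, 3}
    reasons: dict value -> nogood (a set of (variable, value) pairs) that
             ruled out `var = value`
    """
    assert set(reasons) == set(domain), "every value needs a reason nogood"
    combined = set()
    for value in domain:
        # drop the assignment to var itself, keep the rest of the nogood
        combined |= {asg for asg in reasons[value] if asg[0] != var}
    return combined

# The example from the text: D(X) = {1, 2, 3} wiped out by three nogoods.
reasons = {
    1: {("Y1", "d1"), ("X", 1)},
    2: {("Y2", "d2"), ("X", 2)},
    3: {("Y3", "d3"), ("X", 3)},
}
print(combine_nogoods("X", {1, 2, 3}, reasons))
# prints the set {('Y1', 'd1'), ('Y2', 'd2'), ('Y3', 'd3')} (in some order)
```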

There have been attempts to record the derived nogoods in order to prune the search space later on [24, 31], however they were not as successful as clause learning in SAT. This stems from the fact that these nogoods consist only of assignments to variables, which have limited power. Later on, work emerged that recognized this and introduced generalized nogoods, which also contain non-assignments [46, 45]. For instance, a standard nogood states that a given combination of assignments cannot be extended to a solution. However, in order to express that certain prunings cannot be extended to a solution, an exponential number of standard nogoods are required. Hence, generalized nogoods, which can also express non-assignments, are exponentially more powerful than standard nogoods. The concept of generalized nogoods led to the development of CSP solvers that exploit them, such as lazy clause generation systems [58, 29, 8]. These systems make use of a SAT engine that drives the search and derives generalized nogoods in the form of propositional clauses. Due to the differences between SAT and CSP, several modifications have to be made in the SAT engine. First, the CSP variables have to be represented as SAT variables. Second, arbitrary CSP constraints must be propagated.

Encoding the CSP variables

It is possible to encode CSP variables in SAT. This typically requires a number of literals to represent the domains, and a number of clauses to guarantee their consistency. For instance, to encode a CSP variable X with D(X) = [l .. u] we can introduce u − l + 1 literals ⟦X = l⟧, ⟦X = l+1⟧, ..., ⟦X = u⟧. Also, we will simplify the notation for ¬⟦X = d⟧ as ⟦X ≠ d⟧. In addition to the literals, we need the following clauses to establish the consistency of the domain [78]:

(⟦X = l⟧ ∨ ⟦X = l+1⟧ ∨ ... ∨ ⟦X = u⟧)   (2.1)

(⟦X ≠ i⟧ ∨ ⟦X ≠ j⟧) for all i ≠ j   (2.2)

Clause (2.1) ensures that a variable is assigned to at least one value, and the clauses in (2.2) ensure that a variable is assigned to at most one value. This encoding takes O(n^2) space, however more efficient encodings are known, such as the linear encoding [20, 6, 58]. In this encoding, for each variable X, we have ⟦X = d⟧ for each d ∈ D(X) (called equality literals) and we have ⟦X ≤ d⟧ for lb(X) ≤ d < ub(X) (called inequality literals). Intuitively, ⟦X = d⟧ holds whenever X takes the value d, and ⟦X ≤ d⟧ holds whenever X is less than or equal to d. To enforce these conditions, we post the following clauses for each variable X with domain [l .. u]:

¬⟦X ≤ d⟧ ∨ ⟦X ≤ d+1⟧            for l ≤ d < u − 1
¬⟦X = d⟧ ∨ ⟦X ≤ d⟧              for l ≤ d < u
¬⟦X = d⟧ ∨ ¬⟦X ≤ d−1⟧           for l < d ≤ u
¬⟦X ≤ l⟧ ∨ ⟦X = l⟧
¬⟦X ≤ d⟧ ∨ ⟦X = d⟧ ∨ ⟦X ≤ d−1⟧  for l < d < u
⟦X = u⟧ ∨ ⟦X ≤ u−1⟧

The total number of clauses in the order encoding is O(n), and it has the same unit propagation power as the direct encoding [58].
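To make the encoding concrete, here is a small Python generator for the order-encoding clauses of one variable. Literals are represented as strings such as "X=3" and "X<=3", with a leading "-" for negation; this textual representation is only for illustration.

```python
def order_encoding(x, lb, ub):
    """Generate the order-encoding clauses for variable x with domain [lb..ub].

    Equality literals  "x=d"   exist for lb <= d <= ub.
    Inequality literals "x<=d" exist for lb <= d < ub.
    A clause is a list of literal strings; "-" prefixes a negated literal.
    """
    eq = lambda d: f"{x}={d}"
    le = lambda d: f"{x}<={d}"
    neg = lambda lit: f"-{lit}"
    clauses = []
    for d in range(lb, ub - 1):                      # x<=d implies x<=d+1
        clauses.append([neg(le(d)), le(d + 1)])
    for d in range(lb, ub):                          # x=d implies x<=d
        clauses.append([neg(eq(d)), le(d)])
    for d in range(lb + 1, ub + 1):                  # x=d implies not x<=d-1
        clauses.append([neg(eq(d)), neg(le(d - 1))])
    clauses.append([neg(le(lb)), eq(lb)])            # x<=lb implies x=lb
    for d in range(lb + 1, ub):                      # x<=d and not x<=d-1 imply x=d
        clauses.append([neg(le(d)), eq(d), le(d - 1)])
    clauses.append([eq(ub), le(ub - 1)])             # not x<=ub-1 implies x=ub
    return clauses

# Example: a variable X with domain [1..3]
for clause in order_encoding("X", 1, 3):
    print(clause)
```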

In the rest of this thesis we assume that the order encoding is used, unless stated otherwise.

Constraint propagation

In a SAT engine, constraint propagation of arbitrary constraints can be done in two ways:

If a SAT encoding of a constraint is known, it can be added to the input CNF [8]. This approach fits well into the clause learning mechanism since every pruning is automatically associated with a reason clause. However, this approach often works poorly for constraints of large scope, since the solver then has to handle and propagate many clauses.

Alternatively, a special purpose propagator can be used to propagate the constraint. The CSP literature contains many examples of such propagators [66]. To make this approach work with clause learning, propagators must supply clausal reasons for each pruning they make [45, 58, 34, 69, 70]. In this case, the UnitPropagation method in Algorithm 5 is replaced with a more general propagation method, which does unit propagation and also propagation of the CSP constraints. Contrary to encoding the constraints, this approach prevents the possible exponential expansion of constraints into clauses.

Solvers which can make use of propagators that supply clausal reasons are called lazy clause generation (LCG) systems [58]. These systems can be viewed as generating a clausal encoding of the constraints on-the-fly, as they propagate literals. If a constraint is not active (i.e., if it does not propagate anything), no clauses are produced for that constraint.

Regardless of the variable encoding and propagation strength, we call the resulting algorithm CDCL-CSP.

Getting reasons from GAC propagators

There are GAC propagators for various constraints in the literature [66]. For any GAC propagator, a generic reason clause can always be formed in the following way. Say that the GAC propagator for constraint C prunes d from D(V) for some V ∈ scope(C). For each remaining variable Vi in scope(C), consider the following formula Ch(Vi), which simply encodes the changes to D(Vi):

Ch(Vi) = ⟦Vi = v⟧                                                   if Vi is assigned to v
Ch(Vi) = ⟦Vi ≠ d(i,0)⟧ ∧ ⟦Vi ≠ d(i,1)⟧ ∧ ... ∧ ⟦Vi ≠ d(i,ji)⟧       if d(i,0), ..., d(i,ji) are pruned from D(Vi)

Then, this set of prunings of domain values and variable assignments led to the pruning of V = d:

∧_{Vi ∈ scope(C)} Ch(Vi) → ⟦V ≠ d⟧   (2.3)

which is equivalent to the following clause:

∨_{Vi ∈ scope(C)} ¬Ch(Vi) ∨ ⟦V ≠ d⟧   (2.4)

However, it is possible that not every variable in scope(C) or literal in Ch(Vi) is relevant to the conflict. GAC propagators can make use of the structure of the constraints and produce shorter clauses than the generic reason [32, 34, 45, 69, 70].

Example 2. Consider a CSP with four variables X0, X1, X2, X3 with domains D(Xi) = {0, 1, 2, 3, 4}, and a single alldifferent constraint over these four variables. At some point in the search, let the new domains be D(X0) = {1}, D(X1) = D(X2) = {2, 3}.

At this point, one of the values to be pruned is X3 = 2, since either X1 or X2 has to be assigned to 2. The reason clause we would get from the generic method above would be:

¬Ch(X0) ∨ ¬Ch(X1) ∨ ¬Ch(X2) ∨ ⟦X3 ≠ 2⟧
= ⟦X0 ≠ 1⟧ ∨ ⟦X1 = 0⟧ ∨ ⟦X1 = 1⟧ ∨ ⟦X1 = 4⟧ ∨ ⟦X2 = 0⟧ ∨ ⟦X2 = 1⟧ ∨ ⟦X2 = 4⟧ ∨ ⟦X3 ≠ 2⟧

However, the assignment to X0 is not relevant to this pruning; the changes to D(X1) and D(X2) are sufficient to explain it:

¬Ch(X1) ∨ ¬Ch(X2) ∨ ⟦X3 ≠ 2⟧
= ⟦X1 = 0⟧ ∨ ⟦X1 = 1⟧ ∨ ⟦X1 = 4⟧ ∨ ⟦X2 = 0⟧ ∨ ⟦X2 = 1⟧ ∨ ⟦X2 = 4⟧ ∨ ⟦X3 ≠ 2⟧

This is a stronger clause than the generic reason, and a good GAC propagator might generate this clause instead.
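The generic reason of clause (2.4) can be built mechanically from the original and current domains. The Python sketch below does exactly this, producing literals as strings in the ⟦·⟧ style; this representation, and the fact that the pruned variable and value are passed in explicitly, are assumptions made only for illustration.

```python
def generic_gac_reason(original, current, scope, pruned_var, pruned_val):
    """Build the generic reason clause (2.4) for pruning pruned_var = pruned_val.

    original, current: dicts var -> set of values (initial domains / current domains)
    scope:             the variables in the constraint's scope
    Returns the clause as a list of literal strings.
    """
    clause = []
    for v in scope:
        if v == pruned_var:
            continue
        if len(current[v]) == 1:                       # v is assigned
            (val,) = current[v]
            clause.append(f"[{v} != {val}]")           # negation of Ch(v)
        else:                                          # v had values pruned
            for val in sorted(original[v] - current[v]):
                clause.append(f"[{v} = {val}]")
    clause.append(f"[{pruned_var} != {pruned_val}]")
    return clause

# Example 2: alldifferent(X0, X1, X2, X3) with original domains {0..4}
original = {v: set(range(5)) for v in ("X0", "X1", "X2", "X3")}
current = {"X0": {1}, "X1": {2, 3}, "X2": {2, 3}, "X3": set(range(5))}
print(generic_gac_reason(original, current,
                         ("X0", "X1", "X2", "X3"), "X3", 2))
```

Running this reproduces the first (generic) clause of Example 2; producing the shorter clause requires constraint-specific reasoning, as the text notes.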

Getting reasons from forward checking

In order to use FC in a clause learning CSP solver, the FC engine must also supply a clausal reason for each pruning it makes. FC naturally yields a reason clause in the following way. When FC prunes a value V = d, there exists a constraint C such that all the remaining variables in its scope are assigned. In other words, all these assignments have led to that pruning. This is expressed with the following propositional formula:

∧_{Vi ∈ scope(C) \ {V}} ⟦Vi = di⟧ → ⟦V ≠ d⟧

which is equivalent to the following clause:

∨_{Vi ∈ scope(C) \ {V}} ⟦Vi ≠ di⟧ ∨ ⟦V ≠ d⟧   (2.5)

Hence the propagation engine in a CDCL algorithm can make use of FC easily by supplying such reason clauses. Conflict analysis can then naturally produce 1UIP clauses by using these clausal reasons.

Comparison of reason clauses

Example 3. Consider the following linear constraint:

X1 + X2 + X3 + X4 > 2

where the domain of each variable is D(Xi) = {0, 1}. Say that at some point in the search, the assignment X1 = 0 is made. Depending on the propagation engine, there are two possible scenarios for what happens during propagation:

A GAC propagator would detect that X2 = 0, X3 = 0 and X4 = 0 should be pruned, since none of these values can be extended to a satisfying assignment set. The reason clause for the pruning of X4 = 0 would be C1 = (⟦X1 ≠ 0⟧ ∨ ⟦X4 ≠ 0⟧).

A forward checker would not prune anything. However, if the search goes on and the assignments X2 = 0 and X3 = 0 are made, X4 = 0 would then be pruned, along with X4 = 1. The reason clause would be C2 = (⟦X1 ≠ 0⟧ ∨ ⟦X2 ≠ 0⟧ ∨ ⟦X3 ≠ 0⟧ ∨ ⟦X4 ≠ 0⟧) in this case. This is a worse reason clause than C1, since it is a superset of C1. Note that after the mentioned decisions are made, a dead end is reached. During the conflict analysis after reaching this dead end, this clause would be used to produce a longer conflict clause, and it would induce a worse jumpback level.

In general, a stronger propagator yields better reasons than a weaker propagator. This is not a coincidence: a stronger propagator makes the same set of inferences after fewer assignments to the variables in the constraint's scope, and these assignments themselves constitute the explanations.

Given two reason clauses C1 and C2 for a literal l, we say that C1 is a stronger reason clause than C2 if C1 subsumes C2. If C1 is a stronger clause than C2, it may yield a better backtrack level than C2 to assert l. Not all clauses can be compared in strength, as different sets of literals do not necessarily have a subset/superset relationship.

2.4 Global constraints

A global constraint is a constraint of large arity that grows as the number of variables in the CSP increases [13]. In practice, global constraints are computationally expensive to propagate, and for this reason there has been a significant amount of work on building efficient propagators for them. Some examples of global constraints include the alldifferent constraint, the global cardinality constraint gcc, and the cumulative constraint cumulative. Some global constraints can be made GAC efficiently, such as the alldifferent constraint, which admits a GAC propagator that runs in O(n^2.5) time [64]. Even though it is a costly propagator, the inference pays off in practice. However, enforcing GAC on the cumulative constraint is NP-hard [57], and CSP solvers usually enforce a weaker notion of consistency for it.

Chapter 3

The New Solver Architecture

3.1 Introduction

In this chapter, we describe the circumstances under which a CSP solver can do redundant work, and suggest a new architecture for solving CSPs which aims to eliminate these redundancies. We also describe the design choices we make, along with their motivations. State-of-the-art CSP solvers contain propagators which reason about global constraints. However, these propagators often incur a high runtime cost. For this reason, we would like to avoid excessive use of these propagators when possible. A CSP solver can perform many propagations that do not play any role in a conflict and that are erased after a backtrack. Consider the following scenario where the CDCL-CSP algorithm would perform badly:

Example 4. Assume we have a CSP with the variables X1, X2, ..., X100, X101 with domains D(Xi) = {0, 1} and many alldifferent constraints of arity 100:

A1 : alldifferent(X1, Y1)
A2 : alldifferent(X2, Y2)
...
A100 : alldifferent(X100, Y100)

where each Yi is a sequence of 99 distinct variables.

Figure 3.1: A search tree whose leftmost path follows the decisions ⟦X1 = 1⟧, ⟦X2 = 1⟧, ..., ⟦X100 = 1⟧.

Also assume that the CSP contains the following linear constraints:

L1 : X1 + X100 − X101 − X99 < 1
L2 : X2 + X100 + X101 < 3

Let a clause learning CSP solver S_GAC decide on ⟦X1 = 1⟧, ⟦X2 = 1⟧, ..., ⟦X100 = 1⟧ in order, and establish GAC at each node. Figure 3.1 illustrates a search tree containing this path on the left. After ⟦X100 = 1⟧ is decided on, the GAC propagator for L1 will infer ⟦X101 = 1⟧ with the reason clause C1 = (⟦X1 ≠ 1⟧ ∨ ⟦X100 ≠ 1⟧ ∨ ⟦X101 = 1⟧), and L2 will fail while producing the conflict clause C2 = (⟦X2 ≠ 1⟧ ∨ ⟦X100 ≠ 1⟧ ∨ ⟦X101 ≠ 1⟧). A step of resolution of the conflict clause C2 with C1, which is the reason clause of the last assigned literal ⟦X101 = 1⟧, will yield the 1UIP clause C_1UIP = (⟦X1 ≠ 1⟧ ∨ ⟦X2 ≠ 1⟧ ∨ ⟦X100 ≠ 1⟧). The solver will jump back to the second highest decision level in the 1UIP clause, which is the second decision level, and force the newly discovered unit ⟦X100 ≠ 1⟧.


More information

Range and Roots: Two Common Patterns for Specifying and Propagating Counting and Occurrence Constraints

Range and Roots: Two Common Patterns for Specifying and Propagating Counting and Occurrence Constraints Range and Roots: Two Common Patterns for Specifying and Propagating Counting and Occurrence Constraints Christian Bessiere LIRMM, CNRS and U. Montpellier Montpellier, France bessiere@lirmm.fr Brahim Hnich

More information

Integrating Probabilistic Reasoning with Constraint Satisfaction

Integrating Probabilistic Reasoning with Constraint Satisfaction Integrating Probabilistic Reasoning with Constraint Satisfaction IJCAI Tutorial #7 Instructor: Eric I. Hsu July 17, 2011 http://www.cs.toronto.edu/~eihsu/tutorial7 Getting Started Discursive Remarks. Organizational

More information

DPLL(T ):Fast Decision Procedures

DPLL(T ):Fast Decision Procedures DPLL(T ):Fast Decision Procedures Harald Ganzinger George Hagen Robert Nieuwenhuis Cesare Tinelli Albert Oliveras MPI, Saarburcken The University of Iowa UPC, Barcelona Computer Aided-Verification (CAV)

More information

EXTENDING SAT SOLVER WITH PARITY CONSTRAINTS

EXTENDING SAT SOLVER WITH PARITY CONSTRAINTS TKK Reports in Information and Computer Science Espoo 2010 TKK-ICS-R32 EXTENDING SAT SOLVER WITH PARITY CONSTRAINTS Tero Laitinen TKK Reports in Information and Computer Science Espoo 2010 TKK-ICS-R32

More information

Lecture 2 - Graph Theory Fundamentals - Reachability and Exploration 1

Lecture 2 - Graph Theory Fundamentals - Reachability and Exploration 1 CME 305: Discrete Mathematics and Algorithms Instructor: Professor Aaron Sidford (sidford@stanford.edu) January 11, 2018 Lecture 2 - Graph Theory Fundamentals - Reachability and Exploration 1 In this lecture

More information

Example: Map coloring

Example: Map coloring Today s s lecture Local Search Lecture 7: Search - 6 Heuristic Repair CSP and 3-SAT Solving CSPs using Systematic Search. Victor Lesser CMPSCI 683 Fall 2004 The relationship between problem structure and

More information

Satisfiability. Michail G. Lagoudakis. Department of Computer Science Duke University Durham, NC SATISFIABILITY

Satisfiability. Michail G. Lagoudakis. Department of Computer Science Duke University Durham, NC SATISFIABILITY Satisfiability Michail G. Lagoudakis Department of Computer Science Duke University Durham, NC 27708 COMPSCI 271 - Spring 2001 DUKE UNIVERSITY Page 1 Why SAT? Historical Reasons The first NP-COMPLETE problem

More information

Multi Domain Logic and its Applications to SAT

Multi Domain Logic and its Applications to SAT Multi Domain Logic and its Applications to SAT Tudor Jebelean RISC Linz, Austria Tudor.Jebelean@risc.uni-linz.ac.at Gábor Kusper Eszterházy Károly College gkusper@aries.ektf.hu Abstract We describe a new

More information

Eddie Schwalb, Rina Dechter. It is well known that all these tasks are NP-hard.

Eddie Schwalb, Rina Dechter.  It is well known that all these tasks are NP-hard. Coping With Disjunctions in Temporal Constraint Satisfaction Problems 3 Eddie Schwalb, Rina Dechter Department of Information and Computer Science University of California at Irvine, CA 977 eschwalb@ics.uci.edu,

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems Search and Lookahead Bernhard Nebel, Julien Hué, and Stefan Wölfl Albert-Ludwigs-Universität Freiburg June 4/6, 2012 Nebel, Hué and Wölfl (Universität Freiburg) Constraint

More information

2 Decision Procedures for Propositional Logic

2 Decision Procedures for Propositional Logic 2 Decision Procedures for Propositional Logic 2.1 Propositional Logic We assume that the reader is familiar with propositional logic, and with the complexity classes NP and NP-complete. The syntax of formulas

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems Frank C. Langbein F.C.Langbein@cs.cf.ac.uk Department of Computer Science Cardiff University 13th February 2001 Constraint Satisfaction Problems (CSPs) A CSP is a high

More information

Algebraic Properties of CSP Model Operators? Y.C. Law and J.H.M. Lee. The Chinese University of Hong Kong.

Algebraic Properties of CSP Model Operators? Y.C. Law and J.H.M. Lee. The Chinese University of Hong Kong. Algebraic Properties of CSP Model Operators? Y.C. Law and J.H.M. Lee Department of Computer Science and Engineering The Chinese University of Hong Kong Shatin, N.T., Hong Kong SAR, China fyclaw,jleeg@cse.cuhk.edu.hk

More information

SAT/SMT Solvers and Applications

SAT/SMT Solvers and Applications SAT/SMT Solvers and Applications University of Waterloo Winter 2013 Today s Lecture Lessons learnt so far Implementation-related attacks (control-hazard, malware,...) Program analysis techniques can detect

More information

Localization in Graphs. Richardson, TX Azriel Rosenfeld. Center for Automation Research. College Park, MD

Localization in Graphs. Richardson, TX Azriel Rosenfeld. Center for Automation Research. College Park, MD CAR-TR-728 CS-TR-3326 UMIACS-TR-94-92 Samir Khuller Department of Computer Science Institute for Advanced Computer Studies University of Maryland College Park, MD 20742-3255 Localization in Graphs Azriel

More information

Propagation via Lazy Clause Generation

Propagation via Lazy Clause Generation Propagation via Lazy Clause Generation Olga Ohrimenko 1, Peter J. Stuckey 1, and Michael Codish 2 1 NICTA Victoria Research Lab, Department of Comp. Sci. and Soft. Eng. University of Melbourne, Australia

More information

GSAT and Local Consistency

GSAT and Local Consistency GSAT and Local Consistency Kalev Kask Computer Science Department University of California at Irvine Irvine, CA 92717 USA Rina Dechter Computer Science Department University of California at Irvine Irvine,

More information

A Simplied NP-complete MAXSAT Problem. Abstract. It is shown that the MAX2SAT problem is NP-complete even if every variable

A Simplied NP-complete MAXSAT Problem. Abstract. It is shown that the MAX2SAT problem is NP-complete even if every variable A Simplied NP-complete MAXSAT Problem Venkatesh Raman 1, B. Ravikumar 2 and S. Srinivasa Rao 1 1 The Institute of Mathematical Sciences, C. I. T. Campus, Chennai 600 113. India 2 Department of Computer

More information

1 Inference for Boolean theories

1 Inference for Boolean theories Scribe notes on the class discussion on consistency methods for boolean theories, row convex constraints and linear inequalities (Section 8.3 to 8.6) Speaker: Eric Moss Scribe: Anagh Lal Corrector: Chen

More information

Some Applications of Graph Bandwidth to Constraint Satisfaction Problems

Some Applications of Graph Bandwidth to Constraint Satisfaction Problems Some Applications of Graph Bandwidth to Constraint Satisfaction Problems Ramin Zabih Computer Science Department Stanford University Stanford, California 94305 Abstract Bandwidth is a fundamental concept

More information

Search. Krzysztof Kuchcinski. Department of Computer Science Lund Institute of Technology Sweden.

Search. Krzysztof Kuchcinski. Department of Computer Science Lund Institute of Technology Sweden. Search Krzysztof Kuchcinski Krzysztof.Kuchcinski@cs.lth.se Department of Computer Science Lund Institute of Technology Sweden January 12, 2015 Kris Kuchcinski (LTH) Search January 12, 2015 1 / 46 Search

More information

The Resolution Algorithm

The Resolution Algorithm The Resolution Algorithm Introduction In this lecture we introduce the Resolution algorithm for solving instances of the NP-complete CNF- SAT decision problem. Although the algorithm does not run in polynomial

More information

Constraint Solving by Composition

Constraint Solving by Composition Constraint Solving by Composition Student: Zhijun Zhang Supervisor: Susan L. Epstein The Graduate Center of the City University of New York, Computer Science Department 365 Fifth Avenue, New York, NY 10016-4309,

More information

Zchaff: A fast SAT solver. Zchaff: A fast SAT solver

Zchaff: A fast SAT solver. Zchaff: A fast SAT solver * We d like to build a complete decision procedure for SAT which is efficient. Generalized D-P-L algorithm: while (true) { if (! decide( )) /* no unassigned variables */ return (sat) while (! bcp ( ))

More information

Circuit versus CNF Reasoning for Equivalence Checking

Circuit versus CNF Reasoning for Equivalence Checking Circuit versus CNF Reasoning for Equivalence Checking Armin Biere Institute for Formal Models and Verification Johannes Kepler University Linz, Austria Equivalence Checking Workshop 25 Madonna di Campiglio,

More information

Seminar decision procedures: Certification of SAT and unsat proofs

Seminar decision procedures: Certification of SAT and unsat proofs Seminar decision procedures: Certification of SAT and unsat proofs Wolfgang Nicka Technische Universität München June 14, 2016 Boolean satisfiability problem Term The boolean satisfiability problem (SAT)

More information

General Methods and Search Algorithms

General Methods and Search Algorithms DM811 HEURISTICS AND LOCAL SEARCH ALGORITHMS FOR COMBINATORIAL OPTIMZATION Lecture 3 General Methods and Search Algorithms Marco Chiarandini 2 Methods and Algorithms A Method is a general framework for

More information

New Encodings of Pseudo-Boolean Constraints into CNF

New Encodings of Pseudo-Boolean Constraints into CNF New Encodings of Pseudo-Boolean Constraints into CNF Olivier Bailleux, Yacine Boufkhad, Olivier Roussel olivier.bailleux@u-bourgogne.fr boufkhad@liafa.jussieu.fr roussel@cril.univ-artois.fr New Encodings

More information

Combinational Equivalence Checking

Combinational Equivalence Checking Combinational Equivalence Checking Virendra Singh Associate Professor Computer Architecture and Dependable Systems Lab. Dept. of Electrical Engineering Indian Institute of Technology Bombay viren@ee.iitb.ac.in

More information

A Structure-Based Variable Ordering Heuristic for SAT. By Jinbo Huang and Adnan Darwiche Presented by Jack Pinette

A Structure-Based Variable Ordering Heuristic for SAT. By Jinbo Huang and Adnan Darwiche Presented by Jack Pinette A Structure-Based Variable Ordering Heuristic for SAT By Jinbo Huang and Adnan Darwiche Presented by Jack Pinette Overview 1. Divide-and-conquer for SAT 2. DPLL & variable ordering 3. Using dtrees for

More information

P Is Not Equal to NP. ScholarlyCommons. University of Pennsylvania. Jon Freeman University of Pennsylvania. October 1989

P Is Not Equal to NP. ScholarlyCommons. University of Pennsylvania. Jon Freeman University of Pennsylvania. October 1989 University of Pennsylvania ScholarlyCommons Technical Reports (CIS) Department of Computer & Information Science October 1989 P Is Not Equal to NP Jon Freeman University of Pennsylvania Follow this and

More information

Symmetries and Lazy Clause Generation

Symmetries and Lazy Clause Generation Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence Symmetries and Lazy Clause Generation Geoffrey Chu and Peter J. Stuckey NICTA and University of Melbourne {gchu,pjs}@csse.unimelb.edu.au

More information

CS 512, Spring 2017: Take-Home End-of-Term Examination

CS 512, Spring 2017: Take-Home End-of-Term Examination CS 512, Spring 2017: Take-Home End-of-Term Examination Out: Tuesday, 9 May 2017, 12:00 noon Due: Wednesday, 10 May 2017, by 11:59 am Turn in your solutions electronically, as a single PDF file, by placing

More information

Efficient satisfiability solver

Efficient satisfiability solver Graduate Theses and Dissertations Iowa State University Capstones, Theses and Dissertations 2014 Efficient satisfiability solver Chuan Jiang Iowa State University Follow this and additional works at: https://lib.dr.iastate.edu/etd

More information

2 [Ben96]. However, this study was limited to a single mapping. Since the choice of mapping can have a very large impact on our ability to solve probl

2 [Ben96]. However, this study was limited to a single mapping. Since the choice of mapping can have a very large impact on our ability to solve probl Reformulating propositional satisability as constraint satisfaction Toby Walsh University of York, York, England. tw@cs.york.ac.uk Abstract. We study how propositional satisability (SAT) problems can be

More information

i 1 CONSTRAINT SATISFACTION METHODS FOR GENERATING VALID CUTS J. N. Hooker Graduate School of Industrial Administration Carnegie Mellon University Pittsburgh, PA 15213 USA http://www.gsia.cmu.edu/afs/andrew/gsia/jh38/jnh.html

More information

Preprocessing in Pseudo-Boolean Optimization: An Experimental Evaluation

Preprocessing in Pseudo-Boolean Optimization: An Experimental Evaluation Preprocessing in Pseudo-Boolean Optimization: An Experimental Evaluation Ruben Martins, Inês Lynce, and Vasco Manquinho IST/INESC-ID, Technical University of Lisbon, Portugal {ruben,ines,vmm}@sat.inesc-id.pt

More information

Module 4. Constraint satisfaction problems. Version 2 CSE IIT, Kharagpur

Module 4. Constraint satisfaction problems. Version 2 CSE IIT, Kharagpur Module 4 Constraint satisfaction problems Lesson 10 Constraint satisfaction problems - II 4.5 Variable and Value Ordering A search algorithm for constraint satisfaction requires the order in which variables

More information

A Fast Arc Consistency Algorithm for n-ary Constraints

A Fast Arc Consistency Algorithm for n-ary Constraints A Fast Arc Consistency Algorithm for n-ary Constraints Olivier Lhomme 1 and Jean-Charles Régin 2 1 ILOG, 1681, route des Dolines, 06560 Valbonne, FRANCE 2 Computing and Information Science, Cornell University,

More information

Boolean Satisfiability Solving Part II: DLL-based Solvers. Announcements

Boolean Satisfiability Solving Part II: DLL-based Solvers. Announcements EECS 219C: Computer-Aided Verification Boolean Satisfiability Solving Part II: DLL-based Solvers Sanjit A. Seshia EECS, UC Berkeley With thanks to Lintao Zhang (MSR) Announcements Paper readings will be

More information

4.1 Review - the DPLL procedure

4.1 Review - the DPLL procedure Applied Logic Lecture 4: Efficient SAT solving CS 4860 Spring 2009 Thursday, January 29, 2009 The main purpose of these notes is to help me organize the material that I used to teach today s lecture. They

More information

Practical SAT Solving

Practical SAT Solving Practical SAT Solving Lecture 5 Carsten Sinz, Tomáš Balyo May 23, 2016 INSTITUTE FOR THEORETICAL COMPUTER SCIENCE KIT University of the State of Baden-Wuerttemberg and National Laboratory of the Helmholtz

More information

SAT Solvers. Ranjit Jhala, UC San Diego. April 9, 2013

SAT Solvers. Ranjit Jhala, UC San Diego. April 9, 2013 SAT Solvers Ranjit Jhala, UC San Diego April 9, 2013 Decision Procedures We will look very closely at the following 1. Propositional Logic 2. Theory of Equality 3. Theory of Uninterpreted Functions 4.

More information

SAT Solver. CS 680 Formal Methods Jeremy Johnson

SAT Solver. CS 680 Formal Methods Jeremy Johnson SAT Solver CS 680 Formal Methods Jeremy Johnson Disjunctive Normal Form A Boolean expression is a Boolean function Any Boolean function can be written as a Boolean expression s x 0 x 1 f Disjunctive normal

More information

An SMT-Based Approach to Motion Planning for Multiple Robots with Complex Constraints

An SMT-Based Approach to Motion Planning for Multiple Robots with Complex Constraints 1 An SMT-Based Approach to Motion Planning for Multiple Robots with Complex Constraints Frank Imeson, Student Member, IEEE, Stephen L. Smith, Senior Member, IEEE Abstract In this paper we propose a new

More information

NP-Hardness. We start by defining types of problem, and then move on to defining the polynomial-time reductions.

NP-Hardness. We start by defining types of problem, and then move on to defining the polynomial-time reductions. CS 787: Advanced Algorithms NP-Hardness Instructor: Dieter van Melkebeek We review the concept of polynomial-time reductions, define various classes of problems including NP-complete, and show that 3-SAT

More information

Binary Encodings of Non-binary Constraint Satisfaction Problems: Algorithms and Experimental Results

Binary Encodings of Non-binary Constraint Satisfaction Problems: Algorithms and Experimental Results Journal of Artificial Intelligence Research 24 (2005) 641-684 Submitted 04/05; published 11/05 Binary Encodings of Non-binary Constraint Satisfaction Problems: Algorithms and Experimental Results Nikolaos

More information

DM841 DISCRETE OPTIMIZATION. Part 2 Heuristics. Satisfiability. Marco Chiarandini

DM841 DISCRETE OPTIMIZATION. Part 2 Heuristics. Satisfiability. Marco Chiarandini DM841 DISCRETE OPTIMIZATION Part 2 Heuristics Satisfiability Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. Mathematical Programming Constraint

More information

A Re-examination of Limited Discrepancy Search

A Re-examination of Limited Discrepancy Search A Re-examination of Limited Discrepancy Search W. Ken Jackson, Morten Irgens, and William S. Havens Intelligent Systems Lab, Centre for Systems Science Simon Fraser University Burnaby, B.C., CANADA V5A

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems In which we see how treating states as more than just little black boxes leads to the invention of a range of powerful new search methods and a deeper understanding of

More information

Propagate the Right Thing: How Preferences Can Speed-Up Constraint Solving

Propagate the Right Thing: How Preferences Can Speed-Up Constraint Solving Propagate the Right Thing: How Preferences Can Speed-Up Constraint Solving Christian Bessiere Anais Fabre* LIRMM-CNRS (UMR 5506) 161, rue Ada F-34392 Montpellier Cedex 5 (bessiere,fabre}@lirmm.fr Ulrich

More information

The Satisfiability Problem [HMU06,Chp.10b] Satisfiability (SAT) Problem Cook s Theorem: An NP-Complete Problem Restricted SAT: CSAT, k-sat, 3SAT

The Satisfiability Problem [HMU06,Chp.10b] Satisfiability (SAT) Problem Cook s Theorem: An NP-Complete Problem Restricted SAT: CSAT, k-sat, 3SAT The Satisfiability Problem [HMU06,Chp.10b] Satisfiability (SAT) Problem Cook s Theorem: An NP-Complete Problem Restricted SAT: CSAT, k-sat, 3SAT 1 Satisfiability (SAT) Problem 2 Boolean Expressions Boolean,

More information

Binary Decision Diagrams

Binary Decision Diagrams Logic and roof Hilary 2016 James Worrell Binary Decision Diagrams A propositional formula is determined up to logical equivalence by its truth table. If the formula has n variables then its truth table

More information

Consistency and Set Intersection

Consistency and Set Intersection Consistency and Set Intersection Yuanlin Zhang and Roland H.C. Yap National University of Singapore 3 Science Drive 2, Singapore {zhangyl,ryap}@comp.nus.edu.sg Abstract We propose a new framework to study

More information

ABT with Clause Learning for Distributed SAT

ABT with Clause Learning for Distributed SAT ABT with Clause Learning for Distributed SAT Jesús Giráldez-Cru, Pedro Meseguer IIIA - CSIC, Universitat Autònoma de Barcelona, 08193 Bellaterra, Spain {jgiraldez,pedro}@iiia.csic.es Abstract. Transforming

More information

Symbolic Methods. The finite-state case. Martin Fränzle. Carl von Ossietzky Universität FK II, Dpt. Informatik Abt.

Symbolic Methods. The finite-state case. Martin Fränzle. Carl von Ossietzky Universität FK II, Dpt. Informatik Abt. Symbolic Methods The finite-state case Part I Martin Fränzle Carl von Ossietzky Universität FK II, Dpt. Informatik Abt. Hybride Systeme 02917: Symbolic Methods p.1/34 What you ll learn How to use and manipulate

More information

Parallelizing SAT Solver With specific application on solving Sudoku Puzzles

Parallelizing SAT Solver With specific application on solving Sudoku Puzzles 6.338 Applied Parallel Computing Final Report Parallelizing SAT Solver With specific application on solving Sudoku Puzzles Hank Huang May 13, 2009 This project was focused on parallelizing a SAT solver

More information

Satisfiability (SAT) Applications. Extensions/Related Problems. An Aside: Example Proof by Machine. Annual Competitions 12/3/2008

Satisfiability (SAT) Applications. Extensions/Related Problems. An Aside: Example Proof by Machine. Annual Competitions 12/3/2008 15 53:Algorithms in the Real World Satisfiability Solvers (Lectures 1 & 2) 1 Satisfiability (SAT) The original NP Complete Problem. Input: Variables V = {x 1, x 2,, x n }, Boolean Formula Φ (typically

More information

Joint Entity Resolution

Joint Entity Resolution Joint Entity Resolution Steven Euijong Whang, Hector Garcia-Molina Computer Science Department, Stanford University 353 Serra Mall, Stanford, CA 94305, USA {swhang, hector}@cs.stanford.edu No Institute

More information

Constraint Satisfaction Problems. slides from: Padhraic Smyth, Bryan Low, S. Russell and P. Norvig, Jean-Claude Latombe

Constraint Satisfaction Problems. slides from: Padhraic Smyth, Bryan Low, S. Russell and P. Norvig, Jean-Claude Latombe Constraint Satisfaction Problems slides from: Padhraic Smyth, Bryan Low, S. Russell and P. Norvig, Jean-Claude Latombe Standard search problems: State is a black box : arbitrary data structure Goal test

More information

SOFT NOGOOD STORE AS A HEURISTIC

SOFT NOGOOD STORE AS A HEURISTIC SOFT NOGOOD STORE AS A HEURISTIC by Andrei Missine B.Sc., University of British Columbia, 2003 a Thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the

More information

On Computing Minimum Size Prime Implicants

On Computing Minimum Size Prime Implicants On Computing Minimum Size Prime Implicants João P. Marques Silva Cadence European Laboratories / IST-INESC Lisbon, Portugal jpms@inesc.pt Abstract In this paper we describe a new model and algorithm for

More information

Local Consistency in Weighted CSPs and Inference in Max-SAT

Local Consistency in Weighted CSPs and Inference in Max-SAT Local Consistency in Weighted CSPs and Inference in Max-SAT Student name: Federico Heras Supervisor name: Javier Larrosa Universitat Politecnica de Catalunya, Barcelona, Spain fheras@lsi.upc.edu,larrosa@lsi.upc.edu

More information

Horn Formulae. CS124 Course Notes 8 Spring 2018

Horn Formulae. CS124 Course Notes 8 Spring 2018 CS124 Course Notes 8 Spring 2018 In today s lecture we will be looking a bit more closely at the Greedy approach to designing algorithms. As we will see, sometimes it works, and sometimes even when it

More information