SOFT NOGOOD STORE AS A HEURISTIC


SOFT NOGOOD STORE AS A HEURISTIC

by

Andrei Missine
B.Sc., University of British Columbia, 2003

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the School of Computing Science

© Andrei Missine 2010
SIMON FRASER UNIVERSITY
Fall 2010

All rights reserved. This work may not be reproduced in whole or in part, by photocopy or other means, without the permission of the author.

APPROVAL

Name: Andrei Missine
Degree: Doctor of Philosophy
Title of Thesis: Soft Nogood Store as a Heuristic

Examining Committee:

Dr. Diana Cukierman, Chair
Senior Lecturer in the School of Computing Science, Simon Fraser University

Dr. William S. Havens, Senior Supervisor
Professor Emeritus of Computing Science, Simon Fraser University

Dr. David G. Mitchell, Supervisor
Associate Professor of Computing Science, Simon Fraser University

Dr. Arthur Kirkpatrick, Supervisor
Associate Professor of Computing Science, Simon Fraser University

Dr. Greg Mori, Internal Examiner
Assistant Professor of Computing Science, Simon Fraser University

Dr. Peter van Beek, External Examiner
Professor of Computer Science, University of Waterloo

Date Approved:

Declaration of Partial Copyright Licence

The author, whose copyright is declared on the title page of this work, has granted to Simon Fraser University the right to lend this thesis, project or extended essay to users of the Simon Fraser University Library, and to make partial or single copies only for such users or in response to a request from the library of any other university, or other educational institution, on its own behalf or for one of its users. The author has further granted permission to Simon Fraser University to keep or make a digital copy for use in its circulating collection (currently available to the public at the Institutional Repository link of the SFU Library website) and, without changing the content, to translate the thesis/project or extended essays, if technically possible, to any medium or format for the purpose of preservation of the digital work. The author has further agreed that permission for multiple copying of this work for scholarly purposes may be granted by either the author or the Dean of Graduate Studies. It is understood that copying or publication of this work for financial gain shall not be allowed without the author's written permission. Permission for public performance, or limited permission for private scholarly use, of any multimedia materials forming part of this work, may have been granted by the author. This information may be found on the separately catalogued multimedia material and in the signed Partial Copyright Licence.

The original Partial Copyright Licence attesting to these terms, and signed by this author, may be found in the original bound copy of this work, retained in the Simon Fraser University Archive.

Simon Fraser University Library
Burnaby, BC, Canada

Last revision: Spring 09

Abstract

Constraint satisfaction can be used to model problems such as graph coloring, scheduling, crossword generation and many others. The goal of solving a constraint satisfaction problem is to find a solution such that all variables are assigned without violating any constraints, or to prove that no solutions exist. Constraint satisfaction techniques are typically applied to problems that have no known polynomial time algorithms. Solving such problems requires exploration of a search space that is exponentially large with respect to the problem representation, and many sophisticated techniques have been developed to do so efficiently. Non-trivial problems may take a significant amount of time to solve, and during this time many infeasible states will be discovered; such states are known as nogoods. Nogoods are learned during search and are used to avoid revisiting already explored infeasible states, as discovering them may take a significant amount of time. Heuristics are used to guide search algorithms towards solutions, or to prove infeasibility. Some of the existing heuristics make use of nogoods learned during search to refine their guidance. I show how more heuristic guidance can be obtained from nogoods. I introduce the notion of soft nogoods and show how soft nogoods can be used as a source of heuristic guidance. Soft nogoods can be learned during search from classic nogoods and already existing soft nogoods. I describe how a set of soft nogoods can be used to estimate the amount of partial exploration of an arbitrary search space, and then use this value to augment a number of existing heuristics. The accurate algorithm for computing this value is prohibitively slow, and so I present an approximation scheme that trades off some of its accuracy for speed. Learning all soft nogoods is impractical, and to address this issue I describe learning schemes that selectively learn soft nogoods during search. I conduct an empirical evaluation of the resulting heuristics on a variety of constraint satisfaction problems to demonstrate the utility of soft nogoods for heuristic guidance.

Acknowledgments

First and foremost, I would like to thank my family for their support and understanding. I would like to thank my senior supervisor, Dr. William Havens, for his guidance and patience. Thank you to Bistra Dilkina for her help with Systematic Local Search [46], to Philippe Refalo for his help with the impact heuristic [75], to Dr. Peter van Beek for providing me with crossword puzzles [3] and to Dr. Christophe Lecoutre for helping me with the CSP 2008 problem dataset [83].

Contents

Approval
Abstract
Acknowledgments
Contents
List of Tables
List of Figures
List of Algorithms

1 Introduction

2 Background and Related Work
   Definitions and Conventions
   Definitions for Constraint Satisfaction Problems
   Definitions for Satisfiability Problems
   Conventions
   Search Algorithms
   Overview of Constructive Search
   Overview of Local Search
   Heuristics
   Bayesian Networks
   Heuristics Based on Solution Counting and Bayesian Networks

   Related Work
   Summary

3 Soft Nogoods
   Definitions
   Hypothesis
   Semi-Lattice Representation
   Basic Operators on Soft Nogoods
   Soft Nogood Contributions
   Soft Nogood Resolution
   Combining Soft Nogood Contributions
   Lower and Upper Bounds
   Summary

4 Obtaining Heuristic Guidance from Soft Nogoods
   Lookup Operator
   Complexity
   Modified Lookup Operator
   Ability to Distinguish Between Nogood Stores
   Soft Nogood Store and Pruning
   Revisiting Bayesian Networks
   Constructing the Bayesian Network
   Using the Bayesian Network as a Lookup Operator
   Summary

5 Augmenting Existing Heuristics with Soft Nogoods
   Overview of Heuristics
   First Fail Principle and its Derivatives
   Augmenting First Fail Principle
   Soft Nogoods and Derived Heuristics
   Augmenting the min-conflict Heuristic
   Augmenting impact Heuristic
   Summary

6 Soft Nogood Store Implementations
   Two Watched Literal Scheme Reviewed
   Approximating lookup Operator
   Extended Nogood States
   Approximation and Mitigating Its Effects
   Avoiding Recursion
   Combining Contributions Efficiently
   Nogood Store for Constructive Search
   Generalized Watched Literal Scheme
   Nogood Store for Local Search
   Time Complexity Analysis
   Addressing the Bottleneck
   Other Potential Alternatives
   Summary

7 Learning Schemes
   Learning from Previous Search State
   Systematic Local Search
   Learning Soft Nogoods
   Learning from Domain Reductions and Conflict Sets
   Constructive Search, Consistency and Backjumping
   Learning Classic and Soft Nogoods
   Learning from Restarts
   2-way Branching Algorithms
   Learning Classic Nogoods
   Learning Soft Nogoods
   Summary

8 Empirical Evaluation
   SysLS and min-conflict Heuristic
   d-way Branching, dom and impact Heuristics
   2-way Branching and dom/wdeg Heuristic
   Comparing lookup and lookup
   Problem Types

   Benchmark Problems
   Random Binary CSPs
   Balanced Quasi-Group Completion Problems
   RLFAP and Crossword Puzzles
   Discussion
   Comparison to the State of the Art
   Heuristics
   Solver
   Summary

9 Conclusions and Future Work
   Hypothesis Revisited
   Contributions
   Future Work

Bibliography

List of Tables

6.1 State Transitions in CHAFF
Comparison of Data Structures
State Transitions Augmented Two Watched Literal Scheme
State Transitions n Watched Literal Scheme
SysLS Assignment State Transition
Pigeons Problem
Chessboard Problem
Coloring Problem
Random CSPs Varying Number of Variables
Random CSPs Varying Domain Size
Random CSPs Varying Likelihood of Constraint Presence
Random CSPs Varying Constraint Tightness
BQCP Binary Constraint Representation
BQCP All-Different Constraint Representation
RLFAP Instances
Crossword Puzzles
Solver Comparison

List of Figures

2.1 Two Watched Literals: A watched clause updated after an assignment
Semi-lattice for CSP with 3 binary variables A, B and C
Example of classic resolution
Example of Soft Nogood Resolution
Example: Difference in Order of Recursion of lookup
Bayesian Network for a Small Problem
Tree Layouts
Soft nogoods on 5x5 chessboard with 2 colors (Closeup)
Classic nogoods on 5x5 chessboard with 2 colors (Closeup)
Soft nogoods on 5x5 chessboard with 2 colors
Classic nogoods on 5x5 chessboard with 2 colors
Soft nogoods on pigeons
Classic nogoods on pigeons
Original Two Watched Literal Scheme
Two Watched Literal Example
Augmented Two Watched Literal Scheme
n Watched Literal Scheme (n = 3 is shown)
Learning only {A = 1, B = 1}
Learning {A = 1, B = 1} and {A = 1, C = 1}
Learning soft nogoods from {A = 1, B = 1, C = 1}
Example search tree just before a restart
Learning nogoods just before a restart

7.6 Learning Example: Search tree before restart
Learning Example: After grouping
Learning Example: Resulting tree
Systematic Local Search and min-conflict Heuristic
FCCBJ and first-fail Heuristic
MACCBJ and impact Heuristic
Comparing lookup and lookup
Typical Functions for Random Binary CSPs
Typical Mean Functions for BQCP
Typical 90-Percentile Functions for BQCP

List of Algorithms

4.1 lookup(L, Γ)
lookup, (L, Γ)
lookup(L, Γ, S)
SysLS(V, C)
SelectAndAssign()
learnnogoods(σ)
learnsoftnogoods(σ, s)

Chapter 1

Introduction

Combinatorial search can be viewed as finding the proverbial needle in a haystack: there is an enormous number of distinct search states and only a handful of them are of interest. Sophisticated search algorithms have been developed to ease the task of finding the needle, or proving that one does not exist. Constraint satisfaction is a common way to model problems that require combinatorial search. Constraint satisfaction problems are given as a set of variables, each associated with a domain of values, and constraints on these variables. The goal is to find an assignment of values to variables such that all constraints are met. This formulation can be used to model and solve many interesting problems such as radio link frequency allocation for cell phone towers, task scheduling, resource allocation, vehicle routing, crossword puzzle generation, and many others. Search algorithms generally fall into one of two categories: local search and constructive search. Local search algorithms aim at finding a single solution and only rarely are able to prove that no solution exists. Such algorithms have fewer rules imposed upon them and are typically able to explore the search space very quickly. Constructive search algorithms have a tree-based search structure imposed upon them. The downside of this restriction is the loss of the flexibility of local search approaches, while some of the benefits are the ability to prove that a solution does not exist, the ability to make use of effective propagation techniques to reduce the search space, and the ability to efficiently keep track of explored regions. Search algorithms use heuristics to guide them towards promising regions of the search space and to make their exploration of the search space more efficient. Heuristics generally try to analyze the given problem and provide guidance suggesting what part of the search space the algorithm should explore next or how the algorithm should

go about partitioning it. The former attempt to guide search towards a solution while the latter attempt to find the best way to split the remaining search space such that it is easy to explore quickly. Heuristics have evolved over time from simple static heuristics that only utilize information about problem structure to more sophisticated, dynamic heuristics that react to changes of the current search state and take into account information learned during search. Search algorithms are tasked with exploring a vast search space and it is important to do so efficiently. While heuristics are certainly helpful, a potential source of inefficiency is the redundant work done by a search algorithm when the same search space is fully explored more than once without finding a solution. Such search spaces are known as nogoods [82]; in recent years an efficient mechanism for constructive search has been developed to keep track of learned nogoods and to help search algorithms avoid them during search [70]. In my dissertation I examine the utility of nogoods as a source of heuristic guidance. More specifically, I study the heuristic utility of information about partial exploration and introduce the concept of soft nogoods to model partial exploration. Some of the more recent and successful heuristics, such as the dom/wdeg [7] and impact [75] heuristics, are built on a somewhat similar notion that previous failures to find a solution can be used as a source of heuristic guidance. These heuristics compile basic statistics about failures to find a solution over time and use these statistics to supply heuristic guidance. Soft nogoods are much more context-sensitive and thus aim at providing more exact heuristic guidance depending on what state the search algorithm is in. I show how a collection of soft nogoods can be used to provide heuristic guidance by defining the lookup operator.
This operator is used to estimate the partial exploration of an arbitrary search space given a collection of soft nogoods. I then show how this information can be used to augment existing heuristics and how soft nogoods can be learned during search. One of the challenges of using soft nogoods is that accurate computation of the lookup operator is prohibitively expensive, and so a trade-off between speed and accuracy must be made in order to use soft nogoods and the lookup operator for heuristic guidance in practice. My empirical evaluation of the resulting approximation and augmented heuristics shows that heuristics using soft nogoods are 10-15% faster on non-trivial problems from numerous problem domains. Greater improvement is seen on problems with complex constraints.

In Chapter 2 I present an overview of search algorithms, heuristics and related concepts. In Chapter 3 I formally introduce soft nogoods and state my hypothesis regarding their heuristic utility. Chapter 4 shows how collections of soft nogoods can be used to supply heuristic guidance to search algorithms. In Chapter 5 I show how soft nogoods can be used to augment a number of popular heuristics by making use of information about partial exploration. Chapter 6 describes how the ideas presented in Chapter 4 can be brought to life in a practical context by approximating the inefficient, but accurate, algorithm used to supply heuristic guidance from a collection of soft nogoods. In order for soft nogoods to be used for heuristic guidance they must be learned, and in Chapter 7 I describe three approaches for learning soft nogoods during search. To confirm the validity of my hypothesis I perform empirical studies of the resulting heuristics on benchmark and real-world problem instances. The results of these studies are presented in Chapter 8. Lastly, in Chapter 9 I conclude my dissertation with an overview of my contributions and ideas for future work based on my research.

Chapter 2

Background and Related Work

Constraint satisfaction is a field of artificial intelligence that deals with finding solutions to problems given by a set of constraints on a set of variables. Constraint satisfaction problems can be interesting to solve as they have practical applications, such as determining whether a given set of tasks can be scheduled within a given time interval without breaking any dependencies between tasks, specified as constraints. Satisfiability is closely related to constraint satisfaction and can be seen as a special case of constraint satisfaction where all variables are binary and constraints are given by a formula in conjunctive normal form. Satisfiability can be used as an alternative to constraint satisfaction to model and solve problems. Each approach is associated with its own strengths and weaknesses; for example, over the years constraint satisfaction solvers have developed sophisticated means to reduce the search space after each search step while satisfiability solvers have developed efficient means to keep track of previously explored spaces. In this chapter I discuss background and work related to my topic of learning from previously explored states, and the associated heuristics. I begin the chapter by formally defining what constitutes a constraint satisfaction problem and giving basic definitions that are used throughout my dissertation. I then discuss the two search types that are commonly used in constraint satisfaction and satisfiability: constructive search and local search. I also discuss nogoods, nogood stores and existing heuristics, with a focus on variable ordering heuristics. Throughout my dissertation I primarily focus on constraint satisfaction and constructive search approaches.

2.1 Definitions and Conventions

Firstly, I introduce basic definitions that are commonly used in the context of constraint satisfaction and satisfiability problems. I also present general conventions that are used throughout my dissertation. Additional definitions specific to particular topics appear later in the chapter, and definitions specific to my work are introduced in subsequent chapters.

2.1.1 Definitions for Constraint Satisfaction Problems

Definition 2.1 A Constraint Satisfaction Problem (CSP) is defined by a set of variables V and a set of constraints C. Each variable V ∈ V is associated with a finite domain, i.e. a set of values, represented by domain(V). Each constraint C ∈ C is defined on a non-empty subset of V called its scope and represented by scope(C). A constraint C is defined by a subset of the Cartesian product of the domains of the variables in its scope; any of the value tuples that are part of this subset are considered satisfactory with respect to this constraint. The overall goal of solving a CSP is to find a set of assignments of values to all variables such that all constraints are satisfied, or to prove that no such set of assignments exists.

Definition 2.2 Constraint Arity of a given constraint is the number of variables involved in that constraint.

Definition 2.3 The Constraint Network of a given CSP is a hyper-graph where each variable is represented by a node and each constraint is represented by a hyper-arc connecting all variables that are involved in that constraint.

Definition 2.4 A Discrete Bounded CSP is a CSP where all variables have discrete and bounded domains.

Definition 2.5 An Assignment is a single assignment of a value to a variable. The value must be in the variable's domain.

Definition 2.6 A Support of a constraint C is a set of assignments such that the resulting tuple is considered satisfactory by C.
Definition 2.7 Search Space of a CSP is the Cartesian space on all variables of that CSP, one dimension per variable. The dimension of a particular variable is defined by the set of values of that variable.
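
The definitions above can be made concrete with a small sketch. This is my own illustration, not code from the dissertation: a tiny discrete bounded CSP is held as plain Python data, a constraint is a scope plus its set of satisfying tuples, and `is_solution` tests whether a complete label (Definition 2.11) is a solution (Definition 2.22).

```python
# Variables and their finite domains (Definition 2.1).
domains = {"A": {1, 2}, "B": {1, 2}, "C": {1, 2}}

# A constraint is a scope plus the allowed subset of the Cartesian
# product of the scoped domains.
constraints = [
    (("A", "B"), {(1, 2), (2, 1)}),   # encodes A != B
    (("B", "C"), {(1, 2), (2, 1)}),   # encodes B != C
]

def is_solution(label):
    """A complete label is a solution iff every variable is assigned
    and every constraint admits the induced value tuple."""
    if set(label) != set(domains):          # must assign every variable
        return False
    return all(tuple(label[v] for v in scope) in allowed
               for scope, allowed in constraints)

print(is_solution({"A": 1, "B": 2, "C": 1}))  # True
print(is_solution({"A": 1, "B": 1, "C": 2}))  # False: A = B violated
```

The search space of this CSP (Definition 2.7) is the set of all 2 × 2 × 2 = 8 complete labels; solving it means finding one that `is_solution` accepts, or proving none exists.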

Definition 2.8 An Assignment Match occurs between a pair of assignments involving the same variable when both are assigned the same value. An assignment mismatch occurs between two assignments involving the same variable when their values differ. Assignment match / mismatch is not defined on a pair of assignments involving different variables.

Definition 2.9 A Label is a set of assignments where each variable appears at most once. This set may be empty. Labels can be used to represent search subspaces.

Definition 2.10 The Empty Label is an empty set of assignments.

Definition 2.11 A Complete Label / Complete Assignment is a set of assignments where each variable V ∈ V is assigned a value. A complete label represents the most fine-grained unit of the overall search space. The overall search space can be represented as a collection of all complete labels. Similarly, all other search subspaces of a given CSP can be represented by a set of complete labels. When the assignments of a complete label do not break any constraints C ∈ C, such a label is a solution to the given CSP; finding such a label or proving that one does not exist is the goal of solving a given CSP.

Definition 2.12 A Partial Label / Partial Assignment is a label that is not complete, and hence there exists at least one variable V ∈ V that does not appear in the assignments of the partial label.

Definition 2.13 The Size of a Label is the number of assignments in the label. The larger the size, the more specific and more fine-grained the label is.

Definition 2.14 Extending a Label is the process of adding a new assignment to an existing label. The assignment must be made to a variable that is not already assigned in the given label. A label L1 extends a label L2 if the assignments of L2 are a strict subset of the assignments of L1.
Definition 2.15 The Live Domain of a Variable is the subset of a variable's domain that has not yet been removed for some reason, such as a constraint violation or a constraint propagation technique, which is defined next. Such domain reductions are commonly referred to as pruning of the variable's domain. Live domains are frequently maintained throughout search by propagation techniques.

Definition 2.16 A Consistency Maintenance / Constraint Propagation Technique is a technique that can be applied during search to remove values from variables' live domains. Such removals are generally due to the corresponding propagation technique detecting that there is no solution containing both the given assignment of the value to the variable and the current search state.

Definition 2.17 A Valid Support of a constraint C is a support of C such that all values appearing in this support are in the live domains of the corresponding variables.

Definition 2.18 Arc Consistency of a constraint C occurs when every live value of each variable V ∈ scope(C) appears in at least one valid support of C. Arc consistency of the overall problem occurs when all constraints are arc consistent [60]. Application and maintenance of arc consistency is a common consistency maintenance technique.

Definition 2.19 Forward Checking is a common consistency maintenance technique that removes any values that no longer have any valid supports from the live domains of the corresponding variables [44].

Definition 2.20 A Domain Wipe Out occurs during search when constraint propagation causes some variable's live domain to become empty, and signifies that a particular search state does not contain a solution.

Definition 2.21 A Backtrack / Backjump occurs when a search state has been fully considered without finding a solution and some variables must be unassigned in order to continue search. A backtrack simply unassigns the last assigned variable while a backjump may unassign more than one variable.

Definition 2.22 A Solution is a complete label that satisfies all constraints of a given CSP.

Definition 2.23 A Nogood is a label such that there is provably no solution that extends it [82]. Deriving a nogood for the empty label signifies that the problem has no solution.
In order to disambiguate between regular nogoods and the soft nogoods that I introduce in the next chapter, I refer to regular nogoods as classic nogoods.
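
The consistency-maintenance definitions above can be illustrated with a minimal forward-checking sketch of my own (it is not the dissertation's implementation): after an assignment, values of unassigned variables that have lost all valid supports (Definition 2.17) are pruned from the live domains (Definition 2.15), and an emptied domain is reported as a domain wipe out (Definition 2.20).

```python
def forward_check(live, assigned_var, value, constraints):
    """Prune live domains after assigning assigned_var = value.
    Binary constraints are given as ((x, y), allowed_pairs).
    Returns the new live domains, or None on a domain wipe out."""
    live = {v: set(d) for v, d in live.items()}   # copy; don't mutate caller's state
    live[assigned_var] = {value}
    for (x, y), allowed in constraints:
        # Only constraints touching the new assignment can lose supports.
        if assigned_var == x:
            live[y] = {b for b in live[y] if (value, b) in allowed}
            if not live[y]:
                return None                        # domain wipe out
        elif assigned_var == y:
            live[x] = {a for a in live[x] if (a, value) in allowed}
            if not live[x]:
                return None
    return live

live = {"A": {1, 2}, "B": {1, 2}, "C": {1}}
neq = {(1, 2), (2, 1)}                             # allowed pairs for "not equal"
constraints = [(("A", "B"), neq), (("B", "C"), neq)]

print(forward_check(live, "A", 1, constraints))    # B pruned to {2}
print(forward_check(live, "B", 1, constraints))    # C wiped out -> None
```

The second call shows the signal a constructive search would use to trigger a backtrack or backjump (Definition 2.21).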

Definition 2.24 Nogood Resolution is the process of deriving new nogoods given a set of existing nogoods. In the majority of this dissertation nogood resolution refers to NG-RES resolution [1], where for each value of a variable there exists a nogood disallowing that value. A more formal definition of this type of resolution is given in Chapter 3.

Definition 2.25 A Generalized Assignment is a positive assignment (equality) or negative assignment (non-equality) of a value to a variable. The value must be in the variable's domain.

Definition 2.26 A Generalized Nogood is a set of generalized assignments where each variable appears at most once and no solution exists that extends such an assignment [68, 54].

2.1.2 Definitions for Satisfiability Problems

Definition 2.27 A Satisfiability Problem (SAT) consists of a single formula in conjunctive normal form, i.e. it is a conjunction of clauses where each clause is a disjunction of some literals. A literal is simply a boolean variable or its negation. Variables that appear in this formula are the variables of this particular instance and all variables are binary. The goal of solving a SAT problem is either to find a set of assignments of values to all variables such that the overall formula is true, or to prove that no such set of assignments exists.

Definition 2.28 An Implied Clause is a clause where all literals but one are falsified. Such clauses are typically derived during search as literals are assigned values by the search algorithm.

Definition 2.29 An Implied Literal is the last unassigned literal of an implied clause and must be assigned true in order to continue search for a satisfying assignment.

Definition 2.30 Boolean Constraint Propagation is the process of identifying implied clauses and assigning their implied literals such that the clauses become satisfied. This process occurs iteratively until no more implied clauses are identified.
Definition 2.31 A Learned Clause is a clause that is learned by the solver during search and is derived from already existing clauses. Any solution to the overall SAT problem must satisfy all learned clauses. A learned clause is equivalent to a nogood in a CSP.
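
Definitions 2.28-2.30 can be sketched as a naive unit-propagation loop; this is my own illustration (real solvers use watched literals, discussed in Chapter 6, rather than scanning all clauses). Clauses are tuples of nonzero integers in the usual DIMACS style: 3 means x3 and -3 means ¬x3.

```python
def propagate(clauses, assignment):
    """assignment maps variable -> bool and is extended in place.
    Returns the extended assignment, or None if a clause is falsified."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var not in assignment:
                    unassigned.append((var, want))
                elif assignment[var] == want:
                    satisfied = True            # some literal is already true
                    break
            if satisfied:
                continue
            if not unassigned:
                return None                     # clause falsified: conflict
            if len(unassigned) == 1:            # implied clause (Def. 2.28)
                var, want = unassigned[0]
                assignment[var] = want          # implied literal (Def. 2.29)
                changed = True
    return assignment

# (x1 ∨ x2) ∧ (¬x2 ∨ x3): setting x1 false implies x2, which implies x3.
clauses = [(1, 2), (-2, 3)]
print(propagate(clauses, {1: False}))   # {1: False, 2: True, 3: True}
```

The None return is exactly the conflict signal from which a learned clause (Definition 2.31) would be derived in a clause-learning solver.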

2.1.3 Conventions

The following conventions are used throughout my dissertation:

- V refers to the set of all variables of a given CSP.
- Variables are capitalized, for example: Vi. Values start in lowercase, for example: vj.
- domain(Vi) represents the set of all values in the domain of variable Vi. When the context is unambiguous I simply use Vi to represent its domain.
- Vi represents the live domain of a variable Vi.
- When used with set operators, a label L refers to the set of assignments in it. For example, given labels L1 = {A = 1, C = 1} and L2 = {A = 1, B = 1} we can compute another label from the intersection of their assignment sets: L1 ∩ L2 = {A = 1}.
- Vi ∈ L is true if and only if variable Vi is assigned in label L.
- L(Vi) means retrieve the assignment of variable Vi in label L. From the above example we get L1(A) = 1. L(Vi) is undefined if Vi ∉ L.

2.2 Search Algorithms

CSP search algorithms generally fall into two categories: constructive search and local search. Each category is associated with its own strengths and weaknesses. I review these strengths and weaknesses first and then discuss each category separately. The main strengths of constructive search approaches are their ability to apply propagation techniques to eliminate regions of the search space that will not contain solutions, their ability to prove unsatisfiability, as the majority of constructive search approaches are complete, and their ability to efficiently keep track of regions that were previously explored by using an efficient nogood store. The main drawbacks of constructive search approaches are that they may spend a large amount of time in unpromising regions of the search space and may not scale as well as local search approaches on satisfiable problems. Techniques such as restarts [39] have been employed with some success to address these issues.
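
The label conventions above map directly onto Python dictionaries (variable -> value); this small sketch of mine mirrors L1 ∩ L2, Vi ∈ L and L(Vi).

```python
L1 = {"A": 1, "C": 1}
L2 = {"A": 1, "B": 1}

# Intersection of assignment sets: L1 ∩ L2 = {A = 1}.
common = dict(set(L1.items()) & set(L2.items()))
print(common)            # {'A': 1}

# Vi ∈ L: is the variable assigned in the label?
print("A" in L1)         # True
print("B" in L1)         # False: L1(B) is undefined

# L(Vi): retrieve the assignment.
print(L1["A"])           # 1
```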

There is no known efficient way to keep track of explored states in local search as there is for constructive search; thus local search approaches must choose between completeness and efficiency. Most approaches choose the latter, and consequently forsake the ability to prove infeasibility. Similarly, most local search approaches prefer making moves faster over applying sophisticated propagation techniques after each move. The main strengths of local search approaches are that they scale very well and are able to utilize local heuristic gradients more effectively, as they generally deal with complete labels whereas constructive search approaches deal with partial labels only (that is, until a solution is reached). Local search approaches can find solutions to satisfiable problems of very large size where most constructive search approaches are often unable to do so [43]. My primary focus is on constructive search approaches, as my interest is in learning and using soft nogoods derived from large collections of nogoods encountered during search. This requires the use of efficient nogood stores and makes local search approaches too computationally expensive on most non-trivial problems. With this in mind, I do briefly explore a local search approach that includes a nogood store and examine how soft nogoods can be learned and used to guide search in the local search context as well. I now describe constructive and local search approaches in more detail.

2.2.1 Overview of Constructive Search

Constructive search is typically complete. It can be used to find all solutions as well as to detect when no solutions exist. Constructive search can be viewed as a tree with the root node containing the original problem. Branches in the tree represent decisions, such as "variable A is equal to 1".
Taking a particular branch asserts that decision for the remainder of the subtree; once we can prove that such a decision is wrong, because the subtree does not contain a solution, we can backtrack to the parent of this subtree's root and attempt asserting the next decision. A more sophisticated form of backtracking, called backjumping [26], examines the reasons for failing to find a solution in a particular subtree and jumps directly to the deepest ancestor node in the search tree that is responsible for not finding a solution in the current subtree. The search terminates successfully if all variables can be instantiated without breaking any constraints. If such an assignment does not exist then the search will terminate once all alternatives have been attempted and will signify that the given problem does not have a solution. I review common constructive search approaches, starting with the oldest and simplest and ending with the

most recent approaches.

Definition 2.32 d-way Branching Search is a type of constructive search where at each decision point a variable V is chosen and branching occurs on each value in its live domain.

Definition 2.33 2-way Branching Search is a type of constructive search where at each decision point a variable V and a value v ∈ V are chosen. The two branches are to assign V to v or to remove v from V for the remainder of the subtree rooted at that branch.

Constructive Search and Nogoods

Nogoods were originally introduced by Stallman and Sussman in [82], where they were used to avoid revisiting subspaces that were already deemed not to contain a solution and to enhance backtracking. This is the basic essence of nogoods: once we discover that there is no solution in a subspace, we want to avoid revisiting the many possible extensions of this subspace. If a nogood can be derived for the empty label then the given problem has no solution, and the derivation of that nogood is the proof that no solution exists. Keeping track of derived nogoods is beneficial as the same nogood may be encountered many times. For example, a constraint B ≠ C will produce the nogood {B = 1, C = 1}, among others. Such a nogood may be discovered by a simple chronological backtracking search after setting A to 1. Now suppose that there is no solution with A = 1. After backtracking and making A = 2 it is important to remember that the original nogood still applies, and we should not bother examining any labels involving both B = 1 and C = 1. In the general case nogoods may be rediscovered exponentially many times [82], and many of those nogoods may require a significant amount of search effort to learn. It is thus valuable to learn nogoods and avoid them in the future.
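
The value of recording nogoods can be sketched with an illustrative d-way branching search (Definition 2.32) over a tiny CSP like the B ≠ C example, extended with a naive nogood store: a partial label whose subtree is exhausted without a solution is recorded as a nogood and never revisited. All names here are mine; real stores use far more efficient machinery, such as the watched-literal schemes reviewed in Chapter 6.

```python
def violates(label, constraints):
    """True if any fully-assigned constraint scope induces a disallowed tuple."""
    return any(all(v in label for v in scope)
               and tuple(label[v] for v in scope) not in allowed
               for scope, allowed in constraints)

def solve(variables, domains, constraints, label=None, nogoods=None):
    label = dict(label or {})
    nogoods = nogoods if nogoods is not None else []
    if any(ng.items() <= label.items() for ng in nogoods):
        return None                      # label extends a known nogood: skip
    if violates(label, constraints):
        return None
    if len(label) == len(variables):
        return label                     # complete consistent label: a solution
    var = next(v for v in variables if v not in label)
    for value in domains[var]:           # d-way branch on each value
        result = solve(variables, domains, constraints,
                       {**label, var: value}, nogoods)
        if result is not None:
            return result
    nogoods.append(dict(label))          # subtree exhausted: label is a nogood
    return None

neq = {(1, 2), (2, 1)}
print(solve(["A", "B", "C"], {"A": [1, 2], "B": [1, 2], "C": [1, 2]},
            [(("A", "B"), neq), (("B", "C"), neq)]))
# {'A': 1, 'B': 2, 'C': 1}
```

The shared `nogoods` list plays the role of a nogood store: once {B = 1, C = 1}-style dead ends are recorded, later branches that extend them are cut off immediately.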
Nogoods make an appearance in the context of truth maintenance systems (TMS) as a way to explain contradictions in [22], and are used for dependency directed backtracking, just as in [82], to revise existing beliefs when a contradiction must be retracted. A TMS is responsible for maintaining the set of current beliefs about the world given a set of beliefs and justifications. Assumption-based TMS (ATMS) [15] extends TMS by incrementally learning all minimal environments, i.e. subsets of assumptions, that are consistent, as well as all minimal nogoods, which are environments that are not consistent. In [16] ATMS inference techniques are compared to consistency techniques of CSPs. Nogoods were introduced to CSPs in [17].

Various sophisticated backtracking schemes were developed to avoid revisiting explored search spaces. Conflict directed backjumping (CBJ) [73] accomplishes this by associating a conflict set with each variable. The conflict set of a given variable V keeps track of the other variables that are responsible for pruning values from V's domain [10, 77]. Once every value is pruned from V's live domain a backjump can occur: the backjump goes to the deepest variable in the conflict set. Such a conflict set can in fact be viewed as the resolution of a new nogood from existing nogoods [1, 68]. The existing nogoods are the conflict set entries: each entry represents a nogood disallowing a particular value assignment to V, with the label being the concatenation of the assignments of all variables up to and including the variable appearing in the conflict set. Since all values of V are enumerated, nogood resolution can be applied [1] and the resulting nogood is exactly the nogood explaining why CBJ backjumps where it does. CBJ is in a sense a more compact version of dependency directed backtracking, as actual nogoods are not maintained, just conflict sets. CBJ also does not keep track of all nogoods encountered, as conflict sets are thrown out once search backjumps and merges conflict sets. CBJ is thus less powerful than dependency directed backtracking, but does not require exponential space. Classic nogoods can be learned from conflict sets by applying jump-back learning, described in [25]. I revisit CBJ in Chapter 7 and show how soft nogoods can be learned from conflict sets in a similar fashion to jump-back learning. Some of the older approaches in constructive search choose to keep track of a larger, but bounded, number of nogoods. Size-bounded learning [18] only keeps nogoods with size less than or equal to k.
Relevance-bounded learning, employed in dynamic backtracking [32], keeps track of nogoods of arbitrary size, but forgets those nogoods that differ from the current assignment by more than k variable-value pairs. The complexity analysis of size-bounded and relevance-bounded learning techniques is provided in [2]. Dynamic backtracking [32] is further extended in [63] by maintaining a partial order on variables that can then be dynamically reordered during search. When a conflict is discovered only those variables that are responsible for the conflict are modified, leaving unrelated variables untouched. Dynamic backtracking is also considered in the context of distributed CSPs in [45]. In distributed CSPs each variable is typically represented by an agent capable of selecting its assignment asynchronously from other agents, with each agent maintaining its own nogood store. These agents communicate their states via message passing and work together to

solve a given CSP. It is shown that in this context the original memory bounded approach of dynamic backtracking is insufficient, and two improved caching schemes are provided. The first is an unbounded technique, while the second takes a memory bound as a parameter and never exceeds this limit, removing nogoods once the limit is reached. In [33] dynamic backtracking is explored further in the context of the local search SAT solver GSAT [80]. The key idea in that paper is to combine the desirable features of each approach: the completeness and ability to escape local minima of dynamic backtracking with the ability of local search to follow a local gradient. Two approaches are considered: a more restrictive but memory bounded one, and an unrestricted, unbounded one. Constructive search algorithms generally include some form of constraint propagation that is responsible for removing values from the live domains of variables once it becomes clear that extending the current search state with these variable-value assignments would lead to an inconsistent state. Constraint propagation techniques are typically run after the current search state is extended by a new assignment. Two of the more popular constraint propagation techniques frequently encountered in the literature are forward checking (FC) [44] and application of arc consistency (AC) [60]. In general, constraint propagation techniques can be seen as enforcement of various levels of k-consistency [24]. k-consistency states that given a particular constraint network and a consistent partial instantiation of any k − 1 variables there exists a consistent instantiation of any additional kth variable. For CSPs involving only binary constraints, 1-consistency corresponds to node-consistency, 2-consistency corresponds to arc-consistency and 3-consistency corresponds to path-consistency; node, arc and path consistency were originally introduced in [60].
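Arc consistency enforcement is commonly implemented with the classic AC-3 algorithm. The following is a minimal AC-3 sketch; the function names and the dictionary-of-arcs problem representation are my own illustrative choices, not the thesis's code.

```python
# A minimal AC-3 sketch for binary CSPs: constraints[(x, y)] is a
# predicate on a value pair, giving one directed arc per entry.
from collections import deque

def revise(domains, constraints, x, y):
    """Drop values of x with no supporting value in y; report pruning."""
    pred = constraints.get((x, y))
    if pred is None:
        return False
    pruned = [vx for vx in domains[x]
              if not any(pred(vx, vy) for vy in domains[y])]
    for vx in pruned:
        domains[x].remove(vx)
    return bool(pruned)

def ac3(domains, constraints):
    """Enforce arc consistency; False means some domain was wiped out."""
    queue = deque(constraints)           # all directed arcs (x, y)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False
            # x's domain shrank: re-examine arcs pointing at x.
            queue.extend((z, w) for (z, w) in constraints
                         if w == x and z != y)
    return True

# Example: enforcing X < Y prunes X=3 (no larger Y) and Y=1 (no smaller X).
doms = {'X': [1, 2, 3], 'Y': [1, 2, 3]}
cons = {('X', 'Y'): lambda a, b: a < b,
        ('Y', 'X'): lambda a, b: b < a}
consistent = ac3(doms, cons)
```

Forward checking can be seen as a cheaper special case: after assigning a variable, only the arcs from its neighbours toward it are revised, with no further requeueing.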
Theoretically, if a k-consistency technique is applied with k equal to the number of variables in the problem then it can determine whether a solution exists; however, doing so could require exponential time. In practice, arc-consistency is the most common constraint propagation technique as it strikes a good balance between its ability to remove inconsistent values and its time complexity. In [64] the authors propose an approach that always applies forward checking, as it is fast to apply, and then probabilistically applies arc consistency, thus dynamically adjusting the amount of propagation depending on the current search state. Backtracking and backjumping techniques can be combined with constraint propagation techniques. For example, CBJ can be combined with FC or AC, resulting in FC-CBJ, forward checking with conflict directed backjumping [73], and MAC-CBJ, maintaining arc

consistency with conflict directed backjumping [74]. I consider these algorithms in some of my experiments in Chapter 8. More recently, with the introduction of efficient nogood store implementations utilizing the two watched literal scheme [70], unrestricted nogood learning has again become popular (unrestricted nogood learning was originally used in [82]). The two watched literal scheme is now used by most constructive search algorithms that involve nogood learning. One of the first CSP approaches to utilize this scheme also learned generalized nogoods, in which both positive and negative assignments are allowed [54, 53]. I outline the two watched literal scheme shortly and discuss it in full detail in Chapter 6. An alternative approach for maintaining a nogood store, which did not seem to catch on, is to summarize learned nogoods using a finite state automaton [76]. Significant memory savings can be made and nogood lookups are quick if the finite state automaton encoding the nogoods is properly maintained. The reason this approach did not catch on is that the data structure is expensive to maintain, as new nogoods are learned, in a way that retains its favorable properties with respect to storage and speed. The majority of older constructive search approaches use d-way branching, where a variable is selected first and then all of its values are tried. The term d-way branching is used because at each choice point there are d choices, where d is the size of the selected variable's domain. Newer approaches favor 2-way branching, where a variable and a value are chosen and two choice points exist: to assign the variable to that value, or to remove that value from the variable's live domain. A 2-way branching constructive search algorithm with nogood learning was recently introduced in [58], where nogoods are learned from restarts.
This approach enjoys the benefit of avoiding previously explored search subspaces by keeping track of nogoods, while at the same time using restarts to avoid getting stuck in unpromising regions for too long. I study an extension of this approach that uses soft nogoods in later chapters.

Two Watched Literals

An interesting and efficient approach for managing clauses, both learned and those given in the problem statement, comes from the SAT community. In the CHAFF solver for SAT [70] a two watched literal scheme is employed to keep track of clause states. The two watched literal scheme distinguishes between three types of clauses: those that are already true, those that may be true and still need at least one true literal, and those

that have exactly one unassigned literal remaining with the rest false. Clauses with one unassigned literal remaining trigger boolean constraint propagation, as at this point it is possible to deduce the value of the remaining unassigned variable: false if the literal is negated and true otherwise. This scheme was first used in the context of CSPs in [54]. I extend this scheme in Chapter 6 to efficiently maintain a large nogood store containing both classic and soft nogoods. I now discuss the two watched literal scheme and then return to it in Chapter 6, where I present my extensions.

[Figure 2.1: Two Watched Literals: A watched clause updated after an assignment]

In the two watched literal scheme each clause can be in one of three states: two plus, implied, or satisfied. A clause is in two plus state if it has at least two unassigned literals, is implied if it has exactly one unassigned literal with the remainder assigned to false, and is satisfied if at least one literal is true. Clauses in implied state cause boolean constraint propagation as CHAFF, like the majority of other constructive solvers, does not venture out of the feasible region. Clauses in satisfied state are already satisfied given the current partial assignment and are not considered further. Note that implied clauses become satisfied immediately after boolean constraint propagation. A clause in two plus state contains at least two literals that have no assignment yet, and any literals that are assigned are false, so we cannot deduce anything from such clauses yet; we keep watching any two of their unassigned literals for assignment. The key insight that makes this scheme efficient is that at any given time an unsatisfied clause has exactly two watched literals, while satisfied clauses are simply ignored. Clauses in implied state transition directly to satisfied state due to boolean constraint propagation.
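The clause-state bookkeeping described above can be sketched in a few lines. This is an illustrative Python fragment, not CHAFF's actual data structures; it uses signed integers for literals and assumes that when a clause becomes implied the other watched literal is still unassigned (a false one would signal a conflict).

```python
# Illustrative sketch of two watched literal bookkeeping (not CHAFF's
# actual data structures).  Literals are signed integers: +v means
# "variable v is true", -v means "variable v is false".

def lit_value(lit, assignment):
    """Truth value of a literal, or None if its variable is unassigned."""
    val = assignment.get(abs(lit))
    return None if val is None else (val if lit > 0 else not val)

def update_watches(clause, watches, assignment):
    """Re-establish the watch invariant after an assignment.

    Returns 'satisfied', 'two_plus', or ('implied', lit), where lit is
    the single remaining literal that propagation must make true.
    """
    if any(lit_value(l, assignment) is True for l in clause):
        return 'satisfied'
    for i in (0, 1):
        if lit_value(clause[watches[i]], assignment) is False:
            other = watches[1 - i]
            # Look for a replacement watch: an unassigned literal
            # different from the second watched literal.
            for j, lit in enumerate(clause):
                if j != other and lit_value(lit, assignment) is None:
                    watches[i] = j
                    break
            else:
                return ('implied', clause[other])
    return 'two_plus'

# Example in the spirit of Figure 2.1: a watch moves after an assignment.
clause, watches = [1, -2, 3], [0, 1]
state1 = update_watches(clause, watches, {1: False})           # watch 0 moves
state2 = update_watches(clause, watches, {1: False, 2: True})  # clause implied
```

Note that nothing is done when an unwatched literal is assigned, which is exactly why the scheme is cheap: only two literals per clause ever need to be inspected.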
Thus, only clauses in two plus state are actually watching literals. Updating of clauses as variables are assigned by search works as follows [70]: as soon as a watched literal of a clause is assigned, the clause has either just become satisfied, or there is at least one unassigned literal. If the clause has become satisfied we no longer need to watch any literals in it and simply continue the search, letting the backtrack stack take care of bringing this clause back into context if necessary. Otherwise, we need to determine

if the clause is still in two plus state or has just become implied, in which case we need to do boolean constraint propagation. This is done by trying to find another unassigned literal that is different from the second watched literal. If successful, the clause stays in two plus state; otherwise it becomes implied and triggers boolean constraint propagation. Boolean constraint propagation forces the clause with a single unassigned literal to be true by assigning the variable in that literal such that the literal becomes true. Clauses in satisfied and implied states return to two plus state during backtracking.

Constructive Search and Restarts

Restarts have become a common component of constructive search algorithms, as they help algorithms recover from mistakes made early in the search tree while at the same time allowing the search algorithm to learn something from each restart. There are two well known restart strategies that are not tailored to a specific problem type or algorithm. The first, called the Luby sequence after one of its authors [59], is provably within a log factor of the optimal restart strategy in the worst case when nothing is known about the runtime distribution. The second, introduced by Toby Walsh [86] and sometimes referred to as the Walsh sequence, does not have worst case bounds, but is simpler and has been shown to perform well [88] in the context of instruction scheduling problems. Another study from SAT [49] shows that the Luby sequence performs well in general; however, the parameters for restart strategies are not explored. A number of other restart policies have been proposed (for example, see [49]); in my dissertation I only consider the Walsh and Luby restart sequences.

f(i) = \begin{cases} 2^{k-1} & \text{if } i = 2^k - 1 \\ f(i - 2^{k-1} + 1) & \text{if } 2^{k-1} \le i < 2^k - 1 \end{cases}    (2.1)

The Luby sequence grows linearly while the Walsh sequence grows geometrically.
Both sequences are associated with a scaling parameter n that is used to scale each value by multiplying it by n. The Walsh sequence is additionally associated with a geometric parameter m greater than 1 that determines how quickly the restart value increases. The Luby sequence is specified by the recurrence shown in Equation 2.1, while the Walsh sequence is simply f(i) = m^i; in these equations the restart counter i is 1-based and the sequences are not scaled by n.
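Both sequences are short enough to sketch directly. The recursive Luby case analysis below follows Equation 2.1, and the Walsh sequence is taken as f(i) = m^i; the function names, the iterative search for k, and returning unscaled values are my own choices for illustration.

```python
# Sketch of the two restart sequences: the Luby recurrence of
# Equation 2.1 and the geometric Walsh sequence f(i) = m**i.

def luby(i):
    """The i-th (1-based) unscaled Luby value: 1, 1, 2, 1, 1, 2, 4, ..."""
    k = 1
    while (1 << k) - 1 < i:              # smallest k with i <= 2^k - 1
        k += 1
    if i == (1 << k) - 1:                # case i = 2^k - 1
        return 1 << (k - 1)              #   -> 2^(k-1)
    return luby(i - (1 << (k - 1)) + 1)  # case 2^(k-1) <= i < 2^k - 1

def walsh(i, m=2.0):
    """The i-th (1-based) unscaled Walsh value m**i, for geometric m > 1."""
    return m ** i

# With scaling, the i-th run would be cut off after n * luby(i)
# (or n * walsh(i)) steps.
prefix = [luby(i) for i in range(1, 8)]  # [1, 1, 2, 1, 1, 2, 4]
```

The Luby sequence repeats its own prefix between successive powers of two, which is why its maximum value grows only linearly in the number of restarts, in contrast to the geometric growth of the Walsh sequence.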


More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems In which we see how treating states as more than just little black boxes leads to the invention of a range of powerful new search methods and a deeper understanding of

More information

What is Search For? CS 188: Artificial Intelligence. Constraint Satisfaction Problems

What is Search For? CS 188: Artificial Intelligence. Constraint Satisfaction Problems CS 188: Artificial Intelligence Constraint Satisfaction Problems What is Search For? Assumptions about the world: a single agent, deterministic actions, fully observed state, discrete state space Planning:

More information

Set 5: Constraint Satisfaction Problems

Set 5: Constraint Satisfaction Problems Set 5: Constraint Satisfaction Problems ICS 271 Fall 2014 Kalev Kask ICS-271:Notes 5: 1 The constraint network model Outline Variables, domains, constraints, constraint graph, solutions Examples: graph-coloring,

More information

A generic framework for solving CSPs integrating decomposition methods

A generic framework for solving CSPs integrating decomposition methods A generic framework for solving CSPs integrating decomposition methods L. Blet 1,3, S. N. Ndiaye 1,2, and C. Solnon 1,3 1 Université de Lyon - LIRIS 2 Université Lyon 1, LIRIS, UMR5205, F-69622 France

More information

CSP- and SAT-based Inference Techniques Applied to Gnomine

CSP- and SAT-based Inference Techniques Applied to Gnomine CSP- and SAT-based Inference Techniques Applied to Gnomine Bachelor Thesis Faculty of Science, University of Basel Department of Computer Science Artificial Intelligence ai.cs.unibas.ch Examiner: Prof.

More information

CS 188: Artificial Intelligence Fall 2011

CS 188: Artificial Intelligence Fall 2011 Announcements Project 1: Search is due next week Written 1: Search and CSPs out soon Piazza: check it out if you haven t CS 188: Artificial Intelligence Fall 2011 Lecture 4: Constraint Satisfaction 9/6/2011

More information

1 Inference for Boolean theories

1 Inference for Boolean theories Scribe notes on the class discussion on consistency methods for boolean theories, row convex constraints and linear inequalities (Section 8.3 to 8.6) Speaker: Eric Moss Scribe: Anagh Lal Corrector: Chen

More information

Reading: Chapter 6 (3 rd ed.); Chapter 5 (2 nd ed.) For next week: Thursday: Chapter 8

Reading: Chapter 6 (3 rd ed.); Chapter 5 (2 nd ed.) For next week: Thursday: Chapter 8 Constraint t Satisfaction Problems Reading: Chapter 6 (3 rd ed.); Chapter 5 (2 nd ed.) For next week: Tuesday: Chapter 7 Thursday: Chapter 8 Outline What is a CSP Backtracking for CSP Local search for

More information

Some Applications of Graph Bandwidth to Constraint Satisfaction Problems

Some Applications of Graph Bandwidth to Constraint Satisfaction Problems Some Applications of Graph Bandwidth to Constraint Satisfaction Problems Ramin Zabih Computer Science Department Stanford University Stanford, California 94305 Abstract Bandwidth is a fundamental concept

More information

Constraint Satisfaction Problems (CSPs)

Constraint Satisfaction Problems (CSPs) 1 Hal Daumé III (me@hal3.name) Constraint Satisfaction Problems (CSPs) Hal Daumé III Computer Science University of Maryland me@hal3.name CS 421: Introduction to Artificial Intelligence 7 Feb 2012 Many

More information

DIT411/TIN175, Artificial Intelligence. Peter Ljunglöf. 30 January, 2018

DIT411/TIN175, Artificial Intelligence. Peter Ljunglöf. 30 January, 2018 DIT411/TIN175, Artificial Intelligence Chapter 7: Constraint satisfaction problems CHAPTER 7: CONSTRAINT SATISFACTION PROBLEMS DIT411/TIN175, Artificial Intelligence Peter Ljunglöf 30 January, 2018 1 TABLE

More information

CS 771 Artificial Intelligence. Constraint Satisfaction Problem

CS 771 Artificial Intelligence. Constraint Satisfaction Problem CS 771 Artificial Intelligence Constraint Satisfaction Problem Constraint Satisfaction Problems So far we have seen a problem can be solved by searching in space of states These states can be evaluated

More information

CS 188: Artificial Intelligence. Recap: Search

CS 188: Artificial Intelligence. Recap: Search CS 188: Artificial Intelligence Lecture 4 and 5: Constraint Satisfaction Problems (CSPs) Pieter Abbeel UC Berkeley Many slides from Dan Klein Recap: Search Search problem: States (configurations of the

More information

Kalev Kask and Rina Dechter. Department of Information and Computer Science. University of California, Irvine, CA

Kalev Kask and Rina Dechter. Department of Information and Computer Science. University of California, Irvine, CA GSAT and Local Consistency 3 Kalev Kask and Rina Dechter Department of Information and Computer Science University of California, Irvine, CA 92717-3425 fkkask,dechterg@ics.uci.edu Abstract It has been

More information

Practical SAT Solving

Practical SAT Solving Practical SAT Solving Lecture 5 Carsten Sinz, Tomáš Balyo May 23, 2016 INSTITUTE FOR THEORETICAL COMPUTER SCIENCE KIT University of the State of Baden-Wuerttemberg and National Laboratory of the Helmholtz

More information

B553 Lecture 12: Global Optimization

B553 Lecture 12: Global Optimization B553 Lecture 12: Global Optimization Kris Hauser February 20, 2012 Most of the techniques we have examined in prior lectures only deal with local optimization, so that we can only guarantee convergence

More information

Exploring A Two-Solver Architecture for Clause Learning CSP Solvers. Ozan Erdem

Exploring A Two-Solver Architecture for Clause Learning CSP Solvers. Ozan Erdem Exploring A Two-Solver Architecture for Clause Learning CSP Solvers by Ozan Erdem A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy Graduate Department of Computer

More information

Mathematical Programming Formulations, Constraint Programming

Mathematical Programming Formulations, Constraint Programming Outline DM87 SCHEDULING, TIMETABLING AND ROUTING Lecture 3 Mathematical Programming Formulations, Constraint Programming 1. Special Purpose Algorithms 2. Constraint Programming Marco Chiarandini DM87 Scheduling,

More information

An Introduction to SAT Solvers

An Introduction to SAT Solvers An Introduction to SAT Solvers Knowles Atchison, Jr. Fall 2012 Johns Hopkins University Computational Complexity Research Paper December 11, 2012 Abstract As the first known example of an NP Complete problem,

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems Adrian Groza Department of Computer Science Technical University of Cluj-Napoca 12 Nov 2014 Outline 1 Constraint Reasoning 2 Systematic Search Methods Improving backtracking

More information

Learning Techniques for Pseudo-Boolean Solving and Optimization

Learning Techniques for Pseudo-Boolean Solving and Optimization Learning Techniques for Pseudo-Boolean Solving and Optimization José Faustino Fragoso Fremenin dos Santos September 29, 2008 Abstract The extension of conflict-based learning from Propositional Satisfiability

More information

Announcements. CS 188: Artificial Intelligence Fall Reminder: CSPs. Today. Example: 3-SAT. Example: Boolean Satisfiability.

Announcements. CS 188: Artificial Intelligence Fall Reminder: CSPs. Today. Example: 3-SAT. Example: Boolean Satisfiability. CS 188: Artificial Intelligence Fall 2008 Lecture 5: CSPs II 9/11/2008 Announcements Assignments: DUE W1: NOW P1: Due 9/12 at 11:59pm Assignments: UP W2: Up now P2: Up by weekend Dan Klein UC Berkeley

More information

CS 188: Artificial Intelligence Fall 2008

CS 188: Artificial Intelligence Fall 2008 CS 188: Artificial Intelligence Fall 2008 Lecture 5: CSPs II 9/11/2008 Dan Klein UC Berkeley Many slides over the course adapted from either Stuart Russell or Andrew Moore 1 1 Assignments: DUE Announcements

More information

VALCSP solver : a combination of Multi-Level Dynamic Variable Ordering with Constraint Weighting

VALCSP solver : a combination of Multi-Level Dynamic Variable Ordering with Constraint Weighting VALCS solver : a combination of Multi-Level Dynamic Variable Ordering with Constraint Weighting Assef Chmeiss, Lakdar Saïs, Vincent Krawczyk CRIL - University of Artois - IUT de Lens Rue Jean Souvraz -

More information

Today. CS 188: Artificial Intelligence Fall Example: Boolean Satisfiability. Reminder: CSPs. Example: 3-SAT. CSPs: Queries.

Today. CS 188: Artificial Intelligence Fall Example: Boolean Satisfiability. Reminder: CSPs. Example: 3-SAT. CSPs: Queries. CS 188: Artificial Intelligence Fall 2007 Lecture 5: CSPs II 9/11/2007 More CSPs Applications Tree Algorithms Cutset Conditioning Today Dan Klein UC Berkeley Many slides over the course adapted from either

More information

Lecture 14: Lower Bounds for Tree Resolution

Lecture 14: Lower Bounds for Tree Resolution IAS/PCMI Summer Session 2000 Clay Mathematics Undergraduate Program Advanced Course on Computational Complexity Lecture 14: Lower Bounds for Tree Resolution David Mix Barrington and Alexis Maciel August

More information

Constructive Search Algorithms

Constructive Search Algorithms Constructive Search Algorithms! Introduction Historically the major search method for CSPs Reference: S.W.Golomb & L.D.Baumert (1965) Backtrack Programming, JACM 12:516-524 Extended for Intelligent Backtracking

More information

Constraint Programming

Constraint Programming Volume title 1 The editors c 2006 Elsevier All rights reserved Chapter 1 Constraint Programming Francesca Rossi, Peter van Beek, Toby Walsh 1.1 Introduction Constraint programming is a powerful paradigm

More information

CS 188: Artificial Intelligence Spring Today

CS 188: Artificial Intelligence Spring Today CS 188: Artificial Intelligence Spring 2006 Lecture 7: CSPs II 2/7/2006 Dan Klein UC Berkeley Many slides from either Stuart Russell or Andrew Moore Today More CSPs Applications Tree Algorithms Cutset

More information

Module 1 Lecture Notes 2. Optimization Problem and Model Formulation

Module 1 Lecture Notes 2. Optimization Problem and Model Formulation Optimization Methods: Introduction and Basic concepts 1 Module 1 Lecture Notes 2 Optimization Problem and Model Formulation Introduction In the previous lecture we studied the evolution of optimization

More information

Reductions and Satisfiability

Reductions and Satisfiability Reductions and Satisfiability 1 Polynomial-Time Reductions reformulating problems reformulating a problem in polynomial time independent set and vertex cover reducing vertex cover to set cover 2 The Satisfiability

More information

PROPOSITIONAL LOGIC (2)

PROPOSITIONAL LOGIC (2) PROPOSITIONAL LOGIC (2) based on Huth & Ruan Logic in Computer Science: Modelling and Reasoning about Systems Cambridge University Press, 2004 Russell & Norvig Artificial Intelligence: A Modern Approach

More information

CS 512, Spring 2017: Take-Home End-of-Term Examination

CS 512, Spring 2017: Take-Home End-of-Term Examination CS 512, Spring 2017: Take-Home End-of-Term Examination Out: Tuesday, 9 May 2017, 12:00 noon Due: Wednesday, 10 May 2017, by 11:59 am Turn in your solutions electronically, as a single PDF file, by placing

More information

EFFICIENT ATTACKS ON HOMOPHONIC SUBSTITUTION CIPHERS

EFFICIENT ATTACKS ON HOMOPHONIC SUBSTITUTION CIPHERS EFFICIENT ATTACKS ON HOMOPHONIC SUBSTITUTION CIPHERS A Project Report Presented to The faculty of the Department of Computer Science San Jose State University In Partial Fulfillment of the Requirements

More information

Crossword Puzzles as a Constraint Problem

Crossword Puzzles as a Constraint Problem Crossword Puzzles as a Constraint Problem Anbulagan and Adi Botea NICTA and Australian National University, Canberra, Australia {anbulagan,adi.botea}@nicta.com.au Abstract. We present new results in crossword

More information

A Pearl on SAT Solving in Prolog (extended abstract)

A Pearl on SAT Solving in Prolog (extended abstract) A Pearl on SAT Solving in Prolog (extended abstract) Jacob M. Howe and Andy King 1 Introduction The Boolean satisfiability problem, SAT, is of continuing interest because a variety of problems are naturally

More information

Polynomial SAT-Solver Algorithm Explanation

Polynomial SAT-Solver Algorithm Explanation 1 Polynomial SAT-Solver Algorithm Explanation by Matthias Mueller (a.k.a. Louis Coder) louis@louis-coder.com Explanation Version 1.0 - December 1, 2013 Abstract This document describes an algorithm that

More information

Conflict Directed Backjumping for Max-CSPs

Conflict Directed Backjumping for Max-CSPs Conflict Directed Backjumping for Max-CSPs Roie Zivan and Amnon Meisels, Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, 84-105, Israel Abstract Max-CSPs are Constraint

More information

Announcements. CS 188: Artificial Intelligence Fall 2010

Announcements. CS 188: Artificial Intelligence Fall 2010 Announcements Project 1: Search is due Monday Looking for partners? After class or newsgroup Written 1: Search and CSPs out soon Newsgroup: check it out CS 188: Artificial Intelligence Fall 2010 Lecture

More information