Improvements to Clause Weighting Local Search for Propositional Satisfiability


Improvements to Clause Weighting Local Search for Propositional Satisfiability

by Valnir Ferreira Jr

A thesis submitted in fulfillment of the requirements of the degree of Doctor of Philosophy

Institute for Integrated and Intelligent Systems
Griffith University, Australia

July 2006


Abstract

The propositional satisfiability (SAT) problem is of considerable theoretical and practical relevance to the artificial intelligence (AI) community and has been used to model many pervasive AI tasks such as default reasoning, diagnosis, planning, image interpretation, and constraint satisfaction. Computational methods for SAT have historically fallen into two broad categories: complete search and local search. Within the local search category, clause weighting methods are amongst the best alternatives for SAT, becoming particularly attractive on problems where a complete search is impractical or where there is a need to find good candidate solutions within a short time.

The thesis is concerned with the study of improvements to clause weighting local search methods for SAT. The main contributions are:

- A component-based framework for the functional analysis of local search methods.

- A clause weighting local search heuristic that exploits longer-term memory arising from clause weight manipulations. The approach first learns which clauses are globally hardest to satisfy and then uses this information to treat these clauses differentially during weight manipulation [Ferreira Jr and Thornton, 2004].

- A study of heuristic tie breaking in the domain of additive clause weighting local search methods, and the introduction of a competitive method that uses heuristic tie breaking instead of the random tie breaking approach used in most existing methods [Ferreira Jr and Thornton, 2005].

- An evaluation of backbone guidance for clause weighting local search, and the introduction of backbone guidance to three state-of-the-art clause weighting local search methods [Ferreira Jr, 2006].

- A new clause weighting local search method for SAT that successfully exploits synergies between the longer-term memory and tie breaking heuristics developed in the thesis to significantly improve on the performance of current state-of-the-art local search methods for SAT-encoded instances containing identifiable CSP structure.

Portions of this thesis have appeared in the following refereed publications:

- Longer-term memory in clause weighting local search for SAT. In Proceedings of the 17th Australian Joint Conference on Artificial Intelligence, volume 3339 of Lecture Notes in Artificial Intelligence, Cairns, Australia.

- Tie breaking in clause weighting local search for SAT. In Proceedings of the 18th Australian Joint Conference on Artificial Intelligence, volume 3809 of Lecture Notes in Artificial Intelligence, pages 70–81, Sydney, Australia.

- Backbone guided dynamic local search for propositional satisfiability. In Proceedings of the Ninth International Symposium on Artificial Intelligence and Mathematics, AI&M, Fort Lauderdale, Florida.

Contents

Acknowledgments
Statement of Originality
1 Introduction
   Propositional Satisfiability
   Search Paradigms
   Research Problems
   Thesis Outline
2 Local Search
   Introduction
   Components
      Preprocessing
      Initialisation
      Variable Selection
   GSAT
   WalkSAT
   Novelty
   Summary
3 Clause Weighting
   Introduction
   Weight Manipulation
   Breakout
   Discrete Lagrangian Method (DLM)
   Guided Local Search for SAT (GLSSAT)
   Smoothed Descent and Flood (SDF)
   Exponentiated Subgradient (ESG)
   Scaling and Probabilistic Smoothing (SAPS)
   Pure Additive Weighting Scheme (PAWS)
   Divide and Distribute Fixed Weights (DDFW)
   Summary
4 Longer-Term Memory
   Learning while Weighting
   Related Work
   The Usual Suspects
   Preprocessing
   Initialisation
   Weight Manipulation
   Additional Parameter Setting
   Evaluation
   Test Set
   Experimental Conditions
   Testing for Statistical Significance
   Empirical Results
   Inter-parametric Dependency
   Summary
5 Tie Breaking
   Tie Breaking and Search Landscapes
   Related Work
   Alternative Tie Breaking Heuristics
   The TB Heuristic
   The HTB Method
   Evaluation
   Test Set
   Experimental Conditions
   Parameter Setting
   Empirical Results
   Testing for Probabilistic Domination
   Summary
6 Backbone Guidance
   Background
   Related Work
   Evaluation
   Estimation
   Execution
   Experimental Conditions
   BGPAWS
   BGR+PAWS
   BGmvPAWS
   Summary
7 Longer-Term Memory and Tie Breaking at Work
   Dynamic Longer-Term Memory Acquisition
   Extending the HTB Method
   The mvhtb#1 Method
   Summary
8 Conclusion
   Summary of Contributions
   Future Work
Glossary
A Tables of Results for Chapter
B Tie Breaking Heuristics from Chapter
C Run-Time Distributions from Chapter
D Tables of Results for Chapter
Bibliography

List of Figures

2.1 Generic local search for SAT
2.2 Novelty's variable selection component
2.3 R-Novelty's variable selection component
Generic clause weighting local search for SAT
The PAWS method
The DDFW method
The PAWS+US method
Comparative run-length performance with varying LL settings
Run-length distributions for PAWS versus PAWS+US
Comparative run-length performance for f1600-hard instance
The TB heuristic
The BOCM heuristic
The HTB method
Probabilistic domination
6.1 WalkSAT's variable selection component
Comparative run-length performance on the random 3-SAT domain
Comparative run-time performance on the qge domain
6.4 Comparative run-time performance on the bqwh order 30 instances
Comparative run-time performance on the bqwh order 33 instances
Comparative run-time performance on the bqwh order 36 instances
RTD and RLD for mvhtb#1 versus mvpaws
B.1 Performance variation of tie breaking heuristics
B.2 The #1 heuristic
B.3 The #3 heuristic
B.4 The #7 heuristic
B.5 The #8 heuristic
B.6 The #10 heuristic
B.7 The #12 heuristic
B.8 The #15 heuristic
B.9 The #16 heuristic
B.10 The #17 heuristic
B.11 The BOCM2 heuristic
C.1 RTDs for PAWS versus HTB on random instances
C.2 RTDs for PAWS versus HTB on structured instances

List of Tables

5.1 Random instances
Structured instances
Results for the clause weighting local search methods on the ais instances
Best weight decrement parameter settings for the mvpaws and mvhtb variants
Results for mvpaws and mvpaws with dynamic usual suspects on instances with identifiable CSP structure
Results for mvpaws and mvpaws with dynamic usual suspects on instances without identifiable CSP structure
Results for mvpaws and mvhtb
Results for mvpaws and mvhtb#1
A.1 Results for PAWS and PAWS+US on the random 3-SAT instances
A.2 Results for PAWS and PAWS+US on the SATLIB instances
A.3 Results for PAWS and PAWS+US on the random binary CSP instances
A.4 Results for PAWS and PAWS+US on the DIMACS instances
D.1 Run-length results for the clause weighting local search methods on the CBS instances
D.2 Results of resolution on the qge instances
D.3 Results for the complete methods on the qge instances
D.4 Results for the clause weighting local search methods on the qge instances
D.5 Results for the complete methods on the bqwh order 30 instances
D.6 Results for the complete methods on the bqwh order 33 instances
D.7 Results for the complete methods on the bqwh order 36 instances
D.8 Results for the clause weighting local search methods on the bqwh order 30 instances
D.9 Results for the clause weighting local search methods on the bqwh order 33 instances
D.10 Results for the clause weighting local search methods on the bqwh order 36 instances

Acknowledgments

I would like to thank Dr John Thornton for instilling into me my passion for local search and for being my candidature adviser in the journey that ensued, Prof. Abdul Sattar for the continual encouragement and for providing an excellent research environment, my closest colleagues Stuart Bain, Dr Owen Bourne, Dr Vineet Nair, Duc Nghia Pham, and Lingzhong Zhou for their friendship, the anonymous conference reviewers for their useful comments on my work, and the Queensland Parallel Supercomputing Foundation for the computational resources. I would also like to acknowledge the generous financial support I received through an Australian postgraduate award and a Griffith University School of Information and Communication Technology top-up scholarship.

On a personal level, I would like to thank my wife Linda, whose love has always been the perfect element for soothing the scientific mind at the end of a day's work, and my parents for, amongst many other things, teaching me the value of values - muito obrigado, eu amo vocês! ("thank you so much, I love you!")


Statement of Originality

This work has not previously been submitted for a degree or diploma in any university. To the best of my knowledge and belief, the thesis contains no material previously published or written by another person except where due reference is made in the thesis itself.

Signed:... July 2006


Chapter 1
Introduction

This chapter introduces the work contained in the thesis. We describe the propositional satisfiability problem and the computational paradigms available for addressing it. We then state the research questions representing the main motivations for conducting this work, and conclude by giving an outline of the thesis.

1.1 Propositional Satisfiability

The propositional satisfiability problem (SAT) was the first problem for which NP-completeness was established [Cook, 1971] and, besides its evident historical significance, it is also of great relevance to the artificial intelligence community because its conceptually simple framework is well suited for modeling pervasive AI tasks such as default reasoning, diagnosis, planning, image interpretation, and constraint satisfaction problems (CSPs).

An instance F of a SAT problem consists of a set of clauses in Conjunctive Normal Form (CNF). Although not part of the original definition of the problem, the CNF restriction has become the de facto representation for SAT formulae, providing a unifying ground for the design, development and analysis of SAT theory and practice. In the CNF formalism, each clause is a disjunction of literals, and each literal represents a propositional variable or its negation. As an example, consider the instance

F = (¬a ∨ ¬b ∨ c) ∧ (b ∨ c) ∧ (a ∨ ¬c) ∧ (¬a ∨ ¬b ∨ d)

The goal of a SAT solver is to find an assignment to the propositional variables a, b, c, and d such that all four clauses are simultaneously satisfied. The assignment s = {a = true, b = false, c = true, d = false} meets this goal and represents a solution to the problem instance (also called a model of F), as a = true satisfies the third clause, b = false satisfies the first and fourth clauses, and c = true satisfies the second clause.

1.2 Search Paradigms

Computational methods for solving SAT can be divided into two broad categories: complete search and local search. Complete search methods generally employ backtracking techniques [Bitner and Reingold, 1975] to systematically explore the search space, instantiating variables in a sequential order while constructively extending a partial solution. The completeness property of these methods guarantees that eventually either (a) a solution is found, i.e., all variables are instantiated and all clauses are satisfied, or (b) it is determined that no solution exists. Local search methods begin the search procedure from a candidate solution where all variables are instantiated but at least one clause is unsatisfied.
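Checking a model against a CNF formula is mechanical. The sketch below is my own illustration, not from the thesis: clauses are encoded DIMACS-style as lists of signed integers (variables a–d numbered 1–4), and the clause list is one reading of the example instance F that is consistent with the solution discussed in the text.

```python
# Check a truth assignment against a CNF formula.
# Literals are signed integers, DIMACS-style: 1 means a, -1 means "not a".
# The clause list below is one reading of the example instance F that is
# consistent with the stated solution.

def satisfies(formula, assignment):
    """True iff every clause contains at least one satisfied literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

# F with a, b, c, d numbered 1..4
F = [[-1, -2, 3], [2, 3], [1, -3], [-1, -2, 4]]
s = {1: True, 2: False, 3: True, 4: False}
print(satisfies(F, s))  # -> True
```

Flipping b to true falsifies the first clause, so the same function also confirms non-models.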

They attempt to improve this candidate solution by changing the value of a variable at each iteration. The goal is to minimise an evaluation function that is typically based on a measure of the number of unsatisfied clauses. Unlike their complete search counterparts, local search methods do not offer the completeness property, although they represent the best alternative for tackling problems for which the application of a complete search method is impractical due to problem size or difficulty [Selman et al., 1992] or due to the need for finding reasonably good candidate solutions (those with relatively few clauses unsatisfied) within a short time [Hoos and Stützle, 2005]. The first part of a review of local search methods for SAT is presented in the next chapter.

One significant drawback of local search is that, due to the greedy nature of the iterative improvement steps, a simple local search eventually reaches some local minimum of the evaluation function from which it cannot escape. Therefore, a simple local search is only useful when a local minimum is actually the global minimum. Consequently, a significant body of work on local search methods for SAT is dedicated to the study of strategies to avoid or escape from local minima. The most prominent such strategies are random walk and clause weighting, which have become an integral part of modern local search methods for SAT. Random walk strategies periodically change a variable's value uniformly at random. Clause weighting strategies, in addition to some degree of randomness, also discriminate amongst clauses by assigning weights. Methods that utilise the latter strategy are collectively termed clause weighting local search (a.k.a. dynamic local search), and have emerged from the 1990s as the most competitive local search approach for SAT solving. These methods are reviewed in Chapter 3.

1.3 Research Problems

The thesis is concerned with the study of improvements to clause weighting local search methods for SAT. The motivation for this research stems from the following questions:

- Do clause weights contain useful information of a longer-term, global nature? If so, how can this information be used as a heuristic to improve search performance?

- Can the performance of local search methods be significantly improved with the use of heuristics for breaking the ties between equal-cost flips?

- Is it possible to efficiently estimate the backbone of a SAT instance? If so, can useful heuristics be devised that use this information to improve the run-time efficiency of clause weighting local search methods?

- Should we be able to successfully obtain such heuristics, can they be unified into a single method that gives demonstrable practical advantages?

1.4 Thesis Outline

In the following chapter we introduce a component-based framework for the functional analysis of local search methods and present the first part of our chronological review of such methods. Chapter 3 continues the review but focuses on clause weighting local search methods, discusses their comparative superiority over traditional local search, and establishes their prominence as SAT solving tools.

Chapter 4 proposes a new clause weighting heuristic that explores longer-term information derived from a global measure of clause perturbation that is

available to all clause weighting methods. We also present an empirical study to evaluate the usefulness of the approach.

Chapter 5 considers the use of tie breaking heuristics for additive clause weighting local search methods, and investigates whether their performance can be enhanced with the introduction of some tie breaking heuristic in place of the predominantly random approaches currently used. We introduce a method that incorporates heuristic tie breaking, and present an empirical study that compares it against the state-of-the-art.

Chapter 6 investigates the usefulness of backbone guidance to the performance of clause weighting local search methods. We address the problem of obtaining accurate backbone estimations and use this information to alter the variable selection procedure of three state-of-the-art methods.

Chapter 7 unifies much of the work developed in the thesis by considering the integration of longer-term memory and tie breaking into a single clause weighting local search method for SAT. Chapter 8 presents our concluding remarks and proposes avenues for extending the work presented in the thesis.


Chapter 2
Local Search

In this chapter we present the first part of a chronological review of local search methods for propositional satisfiability. Our discussion is centred around a framework that views methods as consisting of three functional components: preprocessing, initialisation, and variable selection.

2.1 Introduction

A local search method for SAT consists of three components: preprocessing, initialisation, and variable selection. It typically operates as follows: after preprocessing and initialisation, variable selection is performed iteratively in an attempt to improve a candidate solution by flipping the value of a propositional variable. Typically, a variable X is flipped that would minimise an evaluation function mapping a candidate solution s′ onto a real number, where s′ only differs from the current candidate solution on the assignment of X. The evaluation function of a local search method for SAT is often based on the number of clauses of F unsatisfied under s.

Candidate solutions are potential solutions considered during the search, and represent an assignment of truth values to all variables in the problem instance. The candidate qualifier represents the fact that the assignment of values to variables under the solution leaves at least one clause unsatisfied.

Flip is used to describe a change in the truth assignment of a variable from true to false or vice-versa. There are three types of flips, which are best described in the context of a search landscape.

Search landscape L for a problem instance π is given by L(π) = (S, N, g), where S is the space of all candidate solutions, N is a given neighbourhood relation, and g is an evaluation function. For a position s ∈ S the following functions determine the number of cost-increasing, equal-cost, and cost-improving flips from s to its neighbours (adapted from [Hoos and Stützle, 2005]):

Cost-increasing(s) = #{s′ ∈ N(s) | g(s′) > g(s)}
Equal-cost(s) = #{s′ ∈ N(s) | g(s′) = g(s)}
Cost-improving(s) = #{s′ ∈ N(s) | g(s′) < g(s)}

Neighbour of s is a position that differs from s on at most one variable assignment.

Landscape positions can be described in terms of flip types. We highlight two landscape positions of interest:

Local minimum: a landscape position where only equal-cost or cost-increasing flips are available.

Strict local minimum: a landscape position where only cost-increasing flips are available.

All methods share a default outer termination condition: the search is terminated whenever a solution is found. Additional outer termination conditions are usually implemented by each method. Inner termination conditions are implemented in methods that perform restarts. A solution is an assignment of truth values to the variables in the formula such that all clauses are satisfied, corresponding to a global minimum of the method's evaluation function.

The generic local search method for SAT is shown in Figure 2.1. As our review of local search methods will reveal, all local search implementations discussed in the thesis can in fact be seen as variants of this generic method.

Figure 2.1: Generic local search for SAT.

2.2 Components

Here we introduce a component-based framework for the functional analysis of local search methods. The adoption of such a modular approach allows us to clearly compare how different methods implement individual components.
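As a concrete illustration of the generic scheme of Figure 2.1, here is a minimal sketch. The function names and the greedy variable selection step are my own simplifications standing in for the method-specific components; the thesis's pseudocode may differ in detail.

```python
import random

# Minimal sketch of the generic local search loop of Figure 2.1.
# Literals are signed integers (1 means a, -1 means "not a").

def unsatisfied(formula, assignment):
    """Clauses with no satisfied literal."""
    return [c for c in formula
            if not any(assignment[abs(l)] == (l > 0) for l in c)]

def local_search(formula, n_vars, max_flips=1000, seed=0):
    rng = random.Random(seed)
    # Initialisation: random truth assignment to all variables.
    assignment = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    for _ in range(max_flips):              # inner termination condition
        if not unsatisfied(formula, assignment):
            return assignment               # outer termination: solution found
        def cost_if_flipped(v):
            # Evaluation: number of unsatisfied clauses after flipping v.
            assignment[v] = not assignment[v]
            cost = len(unsatisfied(formula, assignment))
            assignment[v] = not assignment[v]
            return cost
        # Variable selection: greedily minimise the evaluation function.
        best = min(range(1, n_vars + 1), key=cost_if_flipped)
        assignment[best] = not assignment[best]
    return None                             # gave up within max_flips

F = [[1, 2], [-1, 2], [1, -2]]              # satisfied only by a = b = true
print(local_search(F, n_vars=2))            # -> {1: True, 2: True}
```

Note that this sketch omits restarts (tries) and any local-minimum escape strategy; those are exactly the refinements the methods reviewed below add.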

In this section we look at preprocessing, initialisation and variable selection. A fourth component, weight manipulation, is used exclusively in clause weighting local search and will be introduced and discussed in Chapter 3.

Preprocessing

All local search methods perform at least one preprocessing task: reading in the target problem instance. Additionally, the preprocessing component can be used to perform formula simplification or to probe the search space to obtain information that can be used during the search to support variable selection.

Formula simplification. This type of preprocessing simplifies problem instances based on their structural properties. Here we describe two such techniques: unit propagation and resolution. Unit propagation is widely used in complete SAT solvers, and it is based on clauses containing a single literal (unit clauses). A single pass of unit propagation works as follows: for each unit clause in the formula under consideration, delete (a) all other clauses containing the same literal, and (b) all occurrences of the complementary literal. A complete pass iteratively applies the unit propagation procedure to resulting unit clauses to ensure that no such clauses remain in the formula. The resulting simplified formula is guaranteed to be equivalent to the original one. This form of preprocessing, which can be done in linear time in the number of clauses [Zhang and Stickel, 1996], is used by the clause weighting local search solvers DLM and GLSSAT. In other work, the same formula simplification procedure used by the well-known complete SAT solver Satz [Li and Anbulagan, 1997] is used to simplify formulae before they are processed by various local search methods [Anbulagan et al., 2005]. This procedure, termed resolution, is based on adding resolvents of size 3 to the formula. For example, if a formula F

has two clauses (a ∨ b ∨ c) and (¬a ∨ b ∨ d), a resolvent clause (b ∨ c ∨ d) is created and added to the formula in place of the two original ones. This process is repeated until saturation.

Probing. This type of preprocessing is used to obtain information that can be used as guidance for variable selection. For example, [Zhang et al., 2003, Ferreira Jr, 2006] used probing to obtain backbone frequency estimations. The backbone of a SAT problem consists of those variables whose logical values are the same in all solutions [Monasson et al., 1999]. The goal of a backbone frequency estimation procedure is to determine the probability of a literal appearing in the backbone, and to subsequently use that information in variable selection.

Clearly, there will always be a trade-off between the potential benefit of preprocessing versus the additional computational cost it brings about. Consequently, a preprocessing step that takes time t is efficient only if its use results in a reduction in search time greater than t.

Initialisation

The initialisation component is part of all local search methods. Its implementation is always method dependent, but it is invariably used for common tasks such as populating data structures and instantiating an initial candidate solution. For the latter, the dominant approach has been to randomly assign truth values to all variables in the problem.

Variable Selection

Variable selection is used to decide which variable to flip at every iteration, and therefore plays a crucial role in a method's performance. Most local search

methods vary considerably in the implementation of this component.

2.3 GSAT

GSAT [Selman et al., 1992] is the prototypical local search method. It was introduced more than a decade ago as a viable local search alternative to complete search methods for SAT [1].

[1] The work of [Gu, 1992] was contemporary with GSAT and also included promising results that helped to cement local search as a viable alternative to complete satisfiability methods. Nevertheless, GSAT has had a greater impact on the development of the current state-of-the-art local search methods for SAT [Hoos and Stützle, 2005].

A GSAT search does not perform any additional preprocessing and uses a random assignment of truth values to all variables in the problem to obtain the initial candidate solution. Then, GSAT's evaluation function

g(F, s) = ∑_{c ∈ UC(F, s)} cw(c)    (2.1)

is used to guide variable selection, where UC is the set of clauses in the formula F unsatisfied under candidate solution s. The function cw(c) always returns one. In other words, the evaluation function simply counts the number of clauses that would be unsatisfied under a candidate solution s. At each iteration, GSAT selects the variable whose flip would minimise the evaluation function. Should there be more than one candidate flip under this evaluation, one is picked uniformly at random. Therefore, a solution assignment is characterised by g(F, s) = 0, i.e., a global minimum of the evaluation function.

The termination condition is controlled by two parameters: MAX-FLIPS and MAX-TRIES. The former determines the maximum number of flips to be performed before the search is restarted from a randomly generated initial assignment (this is the inner termination condition in Figure 2.1). The latter determines the maximum number of tries allowed before the search is

terminated (this is the outer termination condition in Figure 2.1), where a try refers to the number of flips allowed before the search is restarted.

Standard GSAT has some limited ability to escape from local minima of its evaluation function because it takes equal-cost flips. However, it has no means of dynamically (i.e., without restarting) escaping from strict local minima because it does not take cost-increasing flips, resorting instead to restarting the search every MAX-FLIPS flips. In subsequent work [Selman and Kautz, 1993], three strategies for improving GSAT's performance were introduced: averaging-in, clause weighting, and random walk; the last two of which are useful to enable GSAT to escape from local minima.

Averaging-in considers, for each try, the assignment at the beginning of the try, s_init, the best assignment found during the try, s_best, and the next initial assignment, s_next. If a variable V_i has the same value v_i in both s_init and s_best, then V_i's value in s_next will be v_i, otherwise V_i is assigned either true or false uniformly at random. The s_next assignment is then used as the starting point for the search in the subsequent try. As after many tries s_init and s_best tend to become identical, a random assignment for s_init is used every RESET-TRIES tries, an instance dependent parameter with empirically observed good settings varying between 10 and 50 [Selman and Kautz, 1993].

The clause weighting strategy initialises the weights of all clauses in the formula to one. Clause weights are then incremented at the end of each try by adding a positive integer (usually one) to the weight of the clauses that are unsatisfied at the end of the try. A clause's weight is then used in subsequent tries to determine how many times that clause should be counted by GSAT's evaluation function. Note that under this strategy the clause weights are not modified during a try, only at the end of every try. Also, clause weights are never decremented.
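The end-of-try weighting scheme just described can be sketched as follows. The data layout (signed-integer literals, a parallel list of weights) is my own illustration, not the original implementation.

```python
# Sketch of the end-of-try clause weighting strategy added to GSAT
# [Selman and Kautz, 1993]. Literals are signed integers
# (1 means a, -1 means "not a").

def weighted_cost(formula, weights, assignment):
    """Evaluation function 2.1: sum of cw(c) over clauses unsatisfied under s."""
    return sum(weights[i]
               for i, clause in enumerate(formula)
               if not any(assignment[abs(l)] == (l > 0) for l in clause))

def update_weights(formula, weights, assignment):
    """Run once at the end of every try, never during one: add one to the
    weight of each clause unsatisfied at the end of the try."""
    for i, clause in enumerate(formula):
        if not any(assignment[abs(l)] == (l > 0) for l in clause):
            weights[i] += 1

F = [[1], [-1]]                # unsatisfiable: a and not-a
w = [1] * len(F)               # all clause weights initialised to one
s = {1: True}
print(weighted_cost(F, w, s))  # -> 1: only the clause [-1] is unsatisfied
update_weights(F, w, s)
print(w)                       # -> [1, 2]: weights are never decremented
```

After the update, the persistently unsatisfied clause counts double in the evaluation function, which is exactly how the strategy steers later tries towards satisfying it.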
In the case of the clause weighting strategy, cw(c) in evaluation function 2.1 returns the weight of a clause instead of one.

In GSAT with random walk (GWSAT), the next variable to be flipped is selected according to the following heuristic: with probability p, pick a variable occurring in some unsatisfied clause and flip its truth assignment; otherwise (i.e., with probability (1 − p)), follow the standard GSAT scheme. The parameter p is called the noise setting. The random walk strategy leads to greater search efficiency as it allows GSAT to take cost-increasing flips to escape from strict local minima instead of having to restart the search.

Although no longer competitive, GSAT is of historical importance because at the time of its introduction it was able to solve some hard random instances an order of magnitude larger than those that could be handled by complete solvers. Its ability to tackle such problems has provided the initial impetus for the large body of work into SAT solvers, and local search methods in particular, over the last decade.

2.4 WalkSAT

WalkSAT further explores the idea of randomly walking around the search space introduced by GWSAT. Here we describe the original WalkSAT method [Selman et al., 1994], also known as WalkSAT/SKC [McAllester et al., 1997]. In WalkSAT, like GSAT, initialisation is performed by the random assignment of truth values to all variables in the problem.

Although both GWSAT and WalkSAT employ random walk, there are at least two important differences between the two methods. Firstly, WalkSAT's variable selection is guided by the number of clauses that would become unsatisfied should a variable be flipped. Secondly, WalkSAT's variable selection is divided into two steps. In the first step, an unsatisfied clause c is selected

uniformly at random from the set of all currently unsatisfied clauses. In the second step, a variable V ∈ c is flipped according to the following heuristic: if there exists a variable flip that will not cause any clauses to become unsatisfied (i.e., clause break = 0) then flip it, breaking ties uniformly at random (the so-called freebie pick). Otherwise, with probability p (the noise parameter), flip a variable uniformly at random (noise pick), and with probability (1 − p) flip the variable with the smallest clause break, breaking ties uniformly at random (greedy pick). Clause break is the number of clauses that would become unsatisfied should a variable be flipped. The pseudocode for WalkSAT's variable selection is shown in Figure 6.1. Consequently, unlike GWSAT, WalkSAT probabilistically favours variables appearing in many unsatisfied clauses.

Note that there is a subtle difference between random walk in GWSAT and the WalkSAT method. In WalkSAT, randomness is only used if there are no freebie flips available. It can therefore be seen as a slightly more restricted form of random walk than that used in GWSAT. Search termination is effected in the same way as in GSAT, using the same termination condition and parameters. WalkSAT is typically faster in terms of run-time than GSAT and its three variants [Hoos and Stützle, 2005].

2.5 Novelty

Novelty and R-Novelty [McAllester et al., 1997] can be seen as alternative variable selection heuristics for the WalkSAT architecture, although they are similar to GSAT in the sense that they use the same evaluation function. Initialisation is performed by randomly assigning truth values to all variables in the problem. Search termination is controlled in the same way as in GSAT and WalkSAT. Novelty methods inherit the two-step variable selection mechanism

and the p parameter used in WalkSAT, although the methods differ in the way variables are actually selected.

Figure 2.2: Novelty's variable selection component. See text for discussion. See [Tompkins, 2004] for implementation details.

Novelty's variable selection (see Figure 2.2) takes into account a variable's age, expressed as the number of flips performed since the variable was last flipped, and is carried out as follows: in a first step that is identical to WalkSAT, an unsatisfied clause is chosen and its variables sorted from highest to lowest scoring. A variable's score is given by an evaluation function that is identical to GSAT's. Any ties in this sorting are broken in favour of the variable with the greatest age. For the cases where the selected clause has several variables with identical score, the sorting ordering is implementation dependent. In step two, Novelty only takes into account the best and second best variables under this sorting. If the best variable is not the one with minimal age within the clause,

then it is selected. Otherwise, if the best variable has minimal age then with probability p the second best variable is selected, and with probability (1 − p) the best variable is selected.

R-Novelty is similar to Novelty, with an additional feature being used when the best variable is the most recently flipped one (i.e., it has minimal age). In this case, the method calculates the difference n between the scores of the second best and best variables. The value n is then used to guide the decision between picking the best or the second best variable based on a set of fixed parameters (i.e., not requiring instance dependent tuning) in such a way as to avoid taking large cost-increasing flips. This additional feature results in increased determinism, and in order to counterbalance the greater likelihood of search stagnation, R-Novelty selects and flips a variable from the selected clause uniformly at random every 100 flips. R-Novelty's heuristic is complex and somewhat ad hoc, having been discovered after extensively experimenting with various WalkSAT variants [McAllester et al., 1997]. The pseudocode for R-Novelty's variable selection component is shown in Figure 2.3.

The Novelty+ and R-Novelty+ variants were introduced as an attempt to correct the original methods' tendency to search stagnation due to their relatively high degree of determinism [Hoos, 1999]. These variants are almost identical to the original methods, except for the introduction of a new parameter, wp, used to extend upon their random walk capability [2]. Note that wp means additional randomness, as p remains in use. After an unsatisfied clause c is selected, the methods flip, with probability wp, a variable in c uniformly at random. Otherwise, with probability (1 − wp), they follow the respective schemes of their original counterparts.
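The second step of Novelty+ selection, as just described, can be sketched as follows. The scoring convention (higher score is better) and the function signature are my own assumptions; see [Tompkins, 2004] for faithful implementation details.

```python
import random

# Sketch of Novelty+'s second selection step, applied to the variables
# of a randomly chosen unsatisfied clause.

def novelty_plus_pick(clause_vars, score, age, p, wp, rng):
    """score(v): GSAT-style score, higher is better.
    age(v): flips performed since v was last flipped."""
    if rng.random() < wp:                 # the '+' part: pure random walk step
        return rng.choice(clause_vars)
    # Sort best-first; score ties are broken in favour of the greater age.
    ranked = sorted(clause_vars, key=lambda v: (score(v), age(v)), reverse=True)
    best, second = ranked[0], ranked[1]
    most_recent = min(clause_vars, key=age)   # minimal age = last flipped
    if best != most_recent:
        return best                           # best is not the freshest flip
    return second if rng.random() < p else best

rng = random.Random(0)
score = {1: 5, 2: 3, 3: 1}.get
age = {1: 10, 2: 2, 3: 7}.get
print(novelty_plus_pick([1, 2, 3], score, age, p=0.5, wp=0.0, rng=rng))  # -> 1
```

In the example, variable 1 has the best score and is not the most recently flipped variable, so it is chosen deterministically; only when the best variable is also the freshest flip does the p-controlled choice between best and second best come into play.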
A low setting for the wp parameter (e.g., 0.01) is typically sufficient to give good performance across problem domains.

² In R-Novelty+, this random walk mechanism replaces R-Novelty's naive random selection at every 100 flips.

Figure 2.3: R-Novelty's variable selection component. See text for discussion. See [Tompkins, 2004] for implementation details.

Note how wp is conceptually closer to the random walk parameter used by GWSAT. Novelty+ and R-Novelty+ give better performance than Novelty and R-Novelty on the problem instances where the latter methods suffer from stagnation. For other instances, performance differences are negligible [Hoos and Stützle, 2005]. This result attests to the significance of random walk mechanisms in overcoming a method's tendency to become stuck in areas of local minima and, consequently, in providing better performance.

Optimal performance in WalkSAT-style methods such as Novelty is often closely related to the setting of p, the noise parameter, and finding the optimal instance dependent settings typically requires extensive experimentation [Hoos and Stützle, 2000]. A study of an adaptive mechanism to automatically tune the noise parameter of WalkSAT-style methods was presented in [Hoos, 2002]. This study found that AdaptNovelty+, a variant that automatically adjusts p during the search, typically achieves the same performance as Novelty+ with approximately optimal p settings, with the added benefit of not requiring manual tuning. Consequently, AdaptNovelty+ is one of the best performing and most robust local search methods for SAT currently available [Hoos and Stützle, 2005], although its performance is typically no match for state-of-the-art manually-tuned clause weighting local search methods [Anbulagan et al., 2005].

2.6 Summary

We presented the first part of a chronological review of local search methods for propositional satisfiability. GSAT's significance stems from the fact that, at the time of its introduction, it could solve instances that were too hard for complete solvers. Improvements to GSAT were developed to address its main weakness: a tendency to become stuck in areas of local minima. One such improvement, GWSAT, used the idea of randomly walking around the search

space sporadically according to a noise parameter. WalkSAT methods continued this line of work by using a more restricted form of random walk coupled with a slightly more sophisticated variable selection mechanism and software implementation. As a result, WalkSAT was shown to be typically faster than GSAT and its variants. Novelty methods extended upon WalkSAT by inheriting its proven features, namely the two-step variable selection and the random walk mechanism, while incorporating a history-based variable selection heuristic. The Novelty+ and R-Novelty+ variants introduced additional randomness to the search through an extra random walk parameter, and typically perform better than Novelty and R-Novelty. Finally, AdaptNovelty+ uses a mechanism for automatically tuning the noise parameter during the search, achieving the same performance levels as Novelty+ but with the added benefit of not requiring instance dependent parameter tuning. It can therefore be considered the best and most robust non-weighting local search method for SAT currently available.

One underlying element in most of these methods is their reliance on some degree of randomness in order to avoid or escape areas of local minima. As methods have evolved, random walk has been retained as a central feature. In the next chapter, we continue our chronological review but focus on clause weighting local search methods. In addition to randomness, they also rely on the manipulation of clause weights to obtain search guidance, and are amongst the very best alternatives for SAT solving.
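The random walk mechanism that this summary identifies as the common thread can be sketched in a single GWSAT-style step. This is an illustrative sketch only: the clause representation (tuples of signed integer literals), the `score` callback, and the function name are our assumptions, not details of the original solvers.

```python
import random

def gwsat_step(assignment, clauses, wp, score, rng=random):
    """One GWSAT-style step: a random walk move with probability wp,
    otherwise a greedy GSAT move over all variables.

    assignment: dict var -> bool truth value
    clauses:    tuples of signed integer literals (negative = negated)
    score:      assumed callback giving the GSAT score of flipping a var
    """
    # Collect clauses unsatisfied under the current assignment.
    unsat = [c for c in clauses
             if not any(assignment[abs(l)] == (l > 0) for l in c)]
    if not unsat:
        return None  # all clauses satisfied: a model has been found

    if rng.random() < wp:
        # Random walk: flip a variable from a random unsatisfied clause.
        var = abs(rng.choice(rng.choice(unsat)))
    else:
        # Greedy GSAT move: flip the best-scoring variable overall.
        var = max(assignment, key=score)

    assignment[var] = not assignment[var]
    return var
```

The same skeleton underlies WalkSAT and the Novelty family; what changes is how the greedy branch restricts and ranks its candidate variables.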

Chapter 3 Clause Weighting

The previous chapter presented the first part of our chronological review of local search methods for propositional satisfiability and highlighted the comparative superiority of methods that use random walk in order to avoid or escape unpromising areas of the search space. This chapter presents the second part of the review, this time focusing on clause weighting methods. The common thread in all these methods is that, in addition to some degree of randomness, they also rely on clause weights and their adequate manipulation.

3.1 Introduction

Clause weighting local search methods associate positive weight values with all clauses in an instance. Unlike the local search methods reviewed thus far, with the exception of GSAT with the clause weighting strategy, they measure the quality of a solution in terms of the weighted number of unsatisfied clauses. An added challenge is that these methods need to effect clause weight decrements in order to keep the clause weight distribution relevant to the current search context. This problem is non-trivial, and it can be seen as trying to maintain an optimal balance

between long- and short-term weight memory [Tompkins and Hoos, 2004]. It is now widely accepted that the better a method can handle this problem, the more efficiently it can solve hard instances.

Figure 3.1: Generic clause weighting local search for SAT.

3.2 Weight Manipulation

Recall that in Section 2.1 we defined a typical local search method as consisting of preprocessing, initialisation, and variable selection components. Clause weighting local search methods are characterised by the presence of a fourth component, weight manipulation, used to increment and decrement clause weights as the search progresses. The pseudo-code for a typical clause weighting local search method for SAT is shown in Figure 3.1. Note that it differs from the typical local search method only in the use of the weight manipulation module (lines 7-9), where a weight manipulation condition is used (line 7) to determine when to perform weight manipulation (line 8). Successful clause weighting local search methods need efficient ways to

adjust clause weights so they can keep the clause weight distribution relevant to the context in which they are searching. To this end, most methods can be divided into those that adjust weights multiplicatively, and those that do so additively [Thornton et al., 2004]. Multiplicative methods use floating point clause weights and increase/decrease multipliers, which combined give the weight distribution a much finer granularity. Additive methods, on the other hand, assign integer values to clause weights and increment/decrement amounts, which results in a coarser clause weight distribution.

3.3 Breakout

The Breakout method [Morris, 1993] is the purest form of clause weighting local search. Initialisation is used to assign random truth values to all variables and to assign an initial weight of one to each clause. The evaluation function

    g(F, s) = Σ_{c ∈ UC(F,s)} cw(c)

is used to guide variable selection at each iteration, where UC(F, s) is the set of clauses of F unsatisfied under the candidate solution s and cw(c) is the weight of clause c. At each iteration the method selects the variable whose flip would minimise the summed weight of unsatisfied clauses. Note that this evaluation function is identical to GSAT's except that all GSAT clauses return a weight of one, whereas clause weighting local search methods can manipulate the values of clause weights during the search. As in GSAT, any ties in the variable selection procedure are broken uniformly at random. Breakout's weight manipulation is limited to incrementing the weights of unsatisfied clauses by one every time it reaches a local minimum. Reaching a local minimum is the method's weight manipulation condition. It does

not discriminate between local minima and strict local minima. Therefore, it only performs cost-improving flips and resorts to weight increments whenever such flips are not available, never performing weight decrements. Note that the way weights are incremented in Breakout differs from the way in which they are incremented in GSAT's clause weighting extension [Selman and Kautz, 1993], where weights are added to unsatisfied clauses at the end of each try instead of at every local minimum. Breakout's weight manipulation condition turned out to be the dominant approach in contemporary clause weighting local search methods for SAT.

3.4 Discrete Lagrangian Method (DLM)

DLM adapted the use of Lagrangian multipliers, originally utilised for constrained optimisation problems in continuous spaces, to the discrete domain of propositional satisfiability, thereby formalising much of the clause weighting work existing at the time of its introduction. In addition to clause weights, DLM variants also use Tabu lists [Glover, 1989] to guide their variable selection. Tabu lists keep track of points recently visited in the search space by recording the last l flips performed. This information is then used to constrain the search by ensuring that the same solution is not revisited for as long as that information remains in the list. Two practical issues arise with the implementation of a Tabu list: (a) its size l, and (b) the way in which it is updated. DLM's Tabu list is controlled by a tabu len parameter [Wu and Wah, 1999], and it is updated in a first-in-first-out (FIFO) fashion. Empirical evidence has shown that these two issues have a significant impact on the performance of Tabu-based methods.
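A FIFO Tabu list of this kind can be sketched as follows. This is a minimal illustration of the mechanism, not DLM's actual implementation: the names, the dict of pre-computed flip costs, and the fallback when every candidate is tabu are our assumptions.

```python
from collections import deque

def select_non_tabu_flip(candidates, weighted_cost_after_flip, tabu):
    """Pick the non-tabu variable whose flip minimises the weighted
    cost of unsatisfied clauses, then record the flip in the Tabu list.

    candidates:               variables that may be flipped
    weighted_cost_after_flip: dict var -> summed weight of clauses
                              left unsatisfied if var is flipped
    tabu:                     deque(maxlen=l); appending beyond maxlen
                              evicts the oldest flip (FIFO update)
    """
    allowed = [v for v in candidates if v not in tabu]
    if not allowed:          # everything tabu: fall back to all candidates
        allowed = candidates
    best = min(allowed, key=lambda v: weighted_cost_after_flip[v])
    tabu.append(best)
    return best
```

Using `deque(maxlen=l)` makes both practical issues explicit: the size l is fixed at construction, and the FIFO update is automatic on `append`.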

When DLM was first described in [Wah and Shang, 1997], it did not use the Tabu list feature. Later, the basic DLM method was reintroduced as DLM-98-BASIC-SAT [Wu and Wah, 1999], incorporating the Tabu list feature described here. We refer to DLM-98-BASIC-SAT simply as basic DLM. There are three DLM variants in total, all sharing the same preprocessing and initialisation components. Preprocessing is used to perform a complete pass of unit propagation. Initialisation serves to assign random truth values to all variables and to initialise all clause weights to zero. All three variants share the same termination condition, which ensures the search is stopped whenever a maximum amount of time has been reached. At each iteration, the basic DLM method selects the variable that (a) is not in the method's Tabu list, and (b) gives the maximum decrease in the evaluation function

    g(F, s) = Σ_{c ∈ UC(F,s)} (1 + cw(c))

Sequences of flips are performed in this fashion until a number θ₁ of equal-cost and cost-increasing flips has been reached, at which point the method carries out weight manipulation. Therefore, the parameter θ₁ is part of DLM's weight manipulation condition. During weight manipulation, the weights of all unsatisfied clauses are incremented by one. Unfortunately, purely incremental clause weight manipulation schemes do not work well on difficult instances because the clauses develop large weight differences over time as the search progresses. This in turn results in an inability to rapidly adapt the clause weight distribution to new regions of the search space. Therefore, a distinctive feature of DLM variants is the existence of weight decrement steps, effected by reducing the weights of all clauses by one after a number θ₂ of weight increments has been performed.
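The θ₁/θ₂ increment/decrement schedule just described can be sketched as follows. This is an illustrative sketch under our own assumptions: the counter names, the dict-based weight representation, and the clamping of decremented weights at zero are not taken from the original DLM code.

```python
def dlm_weight_update(weights, unsat, state, theta1, theta2):
    """Apply basic DLM's weight manipulation condition after one flip.

    weights: dict clause -> integer weight
    unsat:   clauses unsatisfied by the current candidate solution
    state:   dict with 'flat_flips' (equal-cost/cost-increasing flips
             taken since the last weight update) and 'increments'
    """
    if state['flat_flips'] < theta1:
        return  # condition not met: no weight manipulation this step
    state['flat_flips'] = 0

    # Increment step: add one to every unsatisfied clause's weight.
    for c in unsat:
        weights[c] += 1
    state['increments'] += 1

    # Decrement step: after theta2 increments, reduce all weights by
    # one (clamping at zero is our assumption, since weights start at 0).
    if state['increments'] >= theta2:
        state['increments'] = 0
        for c in weights:
            weights[c] = max(0, weights[c] - 1)
```

The decrement pass is what keeps long-satisfied clauses from retaining stale weight, addressing the adaptation problem described above.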

It is important to note, however, that the idea of periodically decrementing weights was first studied in [Frank, 1996] on a version of GSAT with clause weighting that performs weight updates after every flip. The study indicated the usefulness of weight decrements for the method's performance, especially on random instances.

The DLM-99-SAT variant [Wu and Wah, 1999] introduces an extension to basic DLM's weight manipulation component. This extension is of particular relevance to our study on longer-term memory presented in Chapter 4 because it attempts to add a longer-term memory capability to DLM. DLM-99-SAT's SPECIAL-INCREASE feature picks, after every weight increment step, a set C of clauses and computes the ratio r between the maximum clause weight in C and the mean weight of all clauses in C. Membership of C is determined by an instance dependent parameter and is either all unsatisfied clauses or all clauses. If the ratio r is greater than an instance dependent parameter θ₃, then the weight of the clause with the maximum weight in C is incremented by one. Note that the special increase is performed at the end of every standard weight increment, and so it can be seen as adding an extra penalty to that single most heavily weighted clause. This is true for all instances where C consists only of unsatisfied clauses. The resulting method showed substantially better performance over basic DLM, particularly on large and structured SAT instances. This result indicates that there is a potential benefit to be obtained from using heuristics to exploit the longer-term memory information contained in clause weights.

DLM-2000-SAT [Wu and Wah, 2000] introduces a different extension, this time to basic DLM's variable selection. The idea is to stop the search from visiting the same set of uninteresting candidate solutions visited in the past. To achieve

this, DLM's evaluation function is extended into

    g(F, s) = Σ_{c ∈ UC(F,s)} (1 + cw(c)) − d

to accommodate d, a distance penalty given by

    d = Σ_{i=1}^{q} min{θ_t, hd(s, s_i)}

where hd(s, s_i) is the Hamming distance between the candidate solution s and a previously visited candidate solution s_i. The Hamming distance is the number of different variable assignments between any two candidate solutions. The list containing the previous candidate solutions used in the computation of d is maintained as follows: after every w_s flips, the current candidate solution is added to a fixed-length FIFO queue of size q. Another parameter, θ_t, is used to ensure d does not become a dominant factor in the evaluation function. Without this parameter, DLM-2000-SAT could potentially prefer flips that are far away from the current candidate solution instead of those giving a better improvement on the weighted cost of the unsatisfied clauses.

The DLM-2000-SAT variant typically offers better performance than DLM-99-SAT. At the time of its introduction, it was responsible for elevating DLM to the position of best performing clause weighting local search method for SAT [Hoos and Stützle, 2005]. However, a significant drawback of all three DLM variants is the relatively high number of instance dependent parameters requiring tuning in order to obtain the levels of performance reported. This characteristic undermines the method's usefulness in practice, given that obtaining optimal or even near-optimal parameter settings is a very time consuming exercise.
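The distance penalty and its FIFO queue of stored solutions can be computed as in the following sketch. The function names and the representation of candidate solutions as bit tuples are our assumptions for illustration only.

```python
from collections import deque

def hamming(s, t):
    """Number of differing variable assignments between two solutions."""
    return sum(a != b for a, b in zip(s, t))

def distance_penalty(s, visited, theta_t):
    """d = sum over stored solutions s_i of min(theta_t, hd(s, s_i)).

    visited: deque(maxlen=q) of previously stored candidate solutions;
             appending beyond q evicts the oldest entry (FIFO).
    theta_t caps each term so d cannot dominate the evaluation function.
    """
    return sum(min(theta_t, hamming(s, si)) for si in visited)

# Maintaining the queue: every w_s flips the current candidate solution
# is appended, and the oldest of the q stored solutions is dropped.
visited = deque(maxlen=3)          # q = 3
visited.append((0, 0, 0, 0))
visited.append((1, 1, 1, 1))
```

Because each term is capped at θ_t, a candidate cannot buy an arbitrarily low g(F, s) purely by moving far from the stored solutions, which is exactly the dominance problem the parameter is said to prevent.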


More information

Set 5: Constraint Satisfaction Problems

Set 5: Constraint Satisfaction Problems Set 5: Constraint Satisfaction Problems ICS 271 Fall 2014 Kalev Kask ICS-271:Notes 5: 1 The constraint network model Outline Variables, domains, constraints, constraint graph, solutions Examples: graph-coloring,

More information

An Analysis and Comparison of Satisfiability Solving Techniques

An Analysis and Comparison of Satisfiability Solving Techniques An Analysis and Comparison of Satisfiability Solving Techniques Ankur Jain, Harsha V. Madhyastha, Craig M. Prince Department of Computer Science and Engineering University of Washington Seattle, WA 98195

More information

underlying iterative best improvement procedure based on tabu attributes. Heuristic Optimization

underlying iterative best improvement procedure based on tabu attributes. Heuristic Optimization Tabu Search Key idea: Use aspects of search history (memory) to escape from local minima. Simple Tabu Search: I Associate tabu attributes with candidate solutions or solution components. I Forbid steps

More information

CS-E3220 Declarative Programming

CS-E3220 Declarative Programming CS-E3220 Declarative Programming Lecture 5: Premises for Modern SAT Solving Aalto University School of Science Department of Computer Science Spring 2018 Motivation The Davis-Putnam-Logemann-Loveland (DPLL)

More information

Kalev Kask and Rina Dechter

Kalev Kask and Rina Dechter From: AAAI-96 Proceedings. Copyright 1996, AAAI (www.aaai.org). All rights reserved. A Graph-Based Method for Improving GSAT Kalev Kask and Rina Dechter Department of Information and Computer Science University

More information

Outline of the module

Outline of the module Evolutionary and Heuristic Optimisation (ITNPD8) Lecture 2: Heuristics and Metaheuristics Gabriela Ochoa http://www.cs.stir.ac.uk/~goc/ Computing Science and Mathematics, School of Natural Sciences University

More information

Integrating Probabilistic Reasoning with Constraint Satisfaction

Integrating Probabilistic Reasoning with Constraint Satisfaction Integrating Probabilistic Reasoning with Constraint Satisfaction IJCAI Tutorial #7 Instructor: Eric I. Hsu July 17, 2011 http://www.cs.toronto.edu/~eihsu/tutorial7 Getting Started Discursive Remarks. Organizational

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems Tuomas Sandholm Carnegie Mellon University Computer Science Department [Read Chapter 6 of Russell & Norvig] Constraint satisfaction problems (CSPs) Standard search problem:

More information

System Description: iprover An Instantiation-Based Theorem Prover for First-Order Logic

System Description: iprover An Instantiation-Based Theorem Prover for First-Order Logic System Description: iprover An Instantiation-Based Theorem Prover for First-Order Logic Konstantin Korovin The University of Manchester School of Computer Science korovin@cs.man.ac.uk Abstract. iprover

More information

Configuration landscape analysis and backbone guided local search. Part I: Satisfiability and maximum satisfiability

Configuration landscape analysis and backbone guided local search. Part I: Satisfiability and maximum satisfiability Artificial Intelligence 158 (2004) 1 26 www.elsevier.com/locate/artint Configuration landscape analysis and backbone guided local search. Part I: Satisfiability and maximum satisfiability Weixiong Zhang

More information

Lecture: Iterative Search Methods

Lecture: Iterative Search Methods Lecture: Iterative Search Methods Overview Constructive Search is exponential. State-Space Search exhibits better performance on some problems. Research in understanding heuristic and iterative search

More information

CS 3EA3: Sheet 9 Optional Assignment - The Importance of Algebraic Properties

CS 3EA3: Sheet 9 Optional Assignment - The Importance of Algebraic Properties CS 3EA3: Sheet 9 Optional Assignment - The Importance of Algebraic Properties James Zhu 001317457 21 April 2017 1 Abstract Algebraic properties (such as associativity and commutativity) may be defined

More information

1. Introduction. 2. Motivation and Problem Definition. Volume 8 Issue 2, February Susmita Mohapatra

1. Introduction. 2. Motivation and Problem Definition. Volume 8 Issue 2, February Susmita Mohapatra Pattern Recall Analysis of the Hopfield Neural Network with a Genetic Algorithm Susmita Mohapatra Department of Computer Science, Utkal University, India Abstract: This paper is focused on the implementation

More information

Boolean Functions (Formulas) and Propositional Logic

Boolean Functions (Formulas) and Propositional Logic EECS 219C: Computer-Aided Verification Boolean Satisfiability Solving Part I: Basics Sanjit A. Seshia EECS, UC Berkeley Boolean Functions (Formulas) and Propositional Logic Variables: x 1, x 2, x 3,, x

More information

a local optimum is encountered in such a way that further improvement steps become possible.

a local optimum is encountered in such a way that further improvement steps become possible. Dynamic Local Search I Key Idea: Modify the evaluation function whenever a local optimum is encountered in such a way that further improvement steps become possible. I Associate penalty weights (penalties)

More information

The Distributed Breakout Algorithms

The Distributed Breakout Algorithms The Distributed Breakout Algorithms Katsutoshi Hirayama a, Makoto Yokoo b a Faculty of Maritime Sciences, Kobe University, 5-- Fukaeminami-machi, Higashinada-ku, Kobe 658-00, JAPAN b Faculty of Information

More information

Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms

Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms Frank Hutter 1, Youssef Hamadi 2, Holger Hoos 1, and Kevin Leyton-Brown 1 1 University of British Columbia, Vancouver,

More information

Introduction: Combinatorial Problems and Search

Introduction: Combinatorial Problems and Search STOCHASTIC LOCAL SEARCH FOUNDATIONS AND APPLICATIONS Introduction: Combinatorial Problems and Search Holger H. Hoos & Thomas Stützle Outline 1. Combinatorial Problems 2. Two Prototypical Combinatorial

More information

CMPUT 366 Intelligent Systems

CMPUT 366 Intelligent Systems CMPUT 366 Intelligent Systems Assignment 1 Fall 2004 Department of Computing Science University of Alberta Due: Thursday, September 30 at 23:59:59 local time Worth: 10% of final grade (5 questions worth

More information

ABHELSINKI UNIVERSITY OF TECHNOLOGY

ABHELSINKI UNIVERSITY OF TECHNOLOGY Local Search Algorithms for Random Satisfiability Pekka Orponen (joint work with Sakari Seitz and Mikko Alava) Helsinki University of Technology Local Search Algorithms for Random Satisfiability 1/30 Outline

More information

Multi Domain Logic and its Applications to SAT

Multi Domain Logic and its Applications to SAT Multi Domain Logic and its Applications to SAT Tudor Jebelean RISC Linz, Austria Tudor.Jebelean@risc.uni-linz.ac.at Gábor Kusper Eszterházy Károly College gkusper@aries.ektf.hu Abstract We describe a new

More information

Captain Jack: New Variable Selection Heuristics in Local Search for SAT

Captain Jack: New Variable Selection Heuristics in Local Search for SAT Captain Jack: New Variable Selection Heuristics in Local Search for SAT Dave Tompkins, Adrian Balint, Holger Hoos SAT 2011 :: Ann Arbor, Michigan http://www.cs.ubc.ca/research/captain-jack Key Contribution:

More information

Normal Forms for Boolean Expressions

Normal Forms for Boolean Expressions Normal Forms for Boolean Expressions A NORMAL FORM defines a class expressions s.t. a. Satisfy certain structural properties b. Are usually universal: able to express every boolean function 1. Disjunctive

More information

Towards More Effective Unsatisfiability-Based Maximum Satisfiability Algorithms

Towards More Effective Unsatisfiability-Based Maximum Satisfiability Algorithms Towards More Effective Unsatisfiability-Based Maximum Satisfiability Algorithms Joao Marques-Silva and Vasco Manquinho School of Electronics and Computer Science, University of Southampton, UK IST/INESC-ID,

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2013 Soleymani Course material: Artificial Intelligence: A Modern Approach, 3 rd Edition,

More information

QUANTIZER DESIGN FOR EXPLOITING COMMON INFORMATION IN LAYERED CODING. Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose

QUANTIZER DESIGN FOR EXPLOITING COMMON INFORMATION IN LAYERED CODING. Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose QUANTIZER DESIGN FOR EXPLOITING COMMON INFORMATION IN LAYERED CODING Mehdi Salehifar, Tejaswi Nanjundaswamy, and Kenneth Rose Department of Electrical and Computer Engineering University of California,

More information

Solving 3-SAT. Radboud University Nijmegen. Bachelor Thesis. Supervisors: Henk Barendregt Alexandra Silva. Author: Peter Maandag s

Solving 3-SAT. Radboud University Nijmegen. Bachelor Thesis. Supervisors: Henk Barendregt Alexandra Silva. Author: Peter Maandag s Solving 3-SAT Radboud University Nijmegen Bachelor Thesis Author: Peter Maandag s3047121 Supervisors: Henk Barendregt Alexandra Silva July 2, 2012 Contents 1 Introduction 2 1.1 Problem context............................

More information

Set 5: Constraint Satisfaction Problems

Set 5: Constraint Satisfaction Problems Set 5: Constraint Satisfaction Problems ICS 271 Fall 2013 Kalev Kask ICS-271:Notes 5: 1 The constraint network model Outline Variables, domains, constraints, constraint graph, solutions Examples: graph-coloring,

More information

CSC2542 SAT-Based Planning. Sheila McIlraith Department of Computer Science University of Toronto Summer 2014

CSC2542 SAT-Based Planning. Sheila McIlraith Department of Computer Science University of Toronto Summer 2014 CSC2542 SAT-Based Planning Sheila McIlraith Department of Computer Science University of Toronto Summer 2014 1 Acknowledgements Some of the slides used in this course are modifications of Dana Nau s lecture

More information

Set 5: Constraint Satisfaction Problems

Set 5: Constraint Satisfaction Problems Set 5: Constraint Satisfaction Problems ICS 271 Fall 2012 Rina Dechter ICS-271:Notes 5: 1 Outline The constraint network model Variables, domains, constraints, constraint graph, solutions Examples: graph-coloring,

More information

EFFICIENT ATTACKS ON HOMOPHONIC SUBSTITUTION CIPHERS

EFFICIENT ATTACKS ON HOMOPHONIC SUBSTITUTION CIPHERS EFFICIENT ATTACKS ON HOMOPHONIC SUBSTITUTION CIPHERS A Project Report Presented to The faculty of the Department of Computer Science San Jose State University In Partial Fulfillment of the Requirements

More information

4.1 Review - the DPLL procedure

4.1 Review - the DPLL procedure Applied Logic Lecture 4: Efficient SAT solving CS 4860 Spring 2009 Thursday, January 29, 2009 The main purpose of these notes is to help me organize the material that I used to teach today s lecture. They

More information

Complete Local Search for Propositional Satisfiability

Complete Local Search for Propositional Satisfiability Complete Local Search for Propositional Satisfiability Hai Fang Department of Computer Science Yale University New Haven, CT 06520-8285 hai.fang@yale.edu Wheeler Ruml Palo Alto Research Center 3333 Coyote

More information

Planning as Search. Progression. Partial-Order causal link: UCPOP. Node. World State. Partial Plans World States. Regress Action.

Planning as Search. Progression. Partial-Order causal link: UCPOP. Node. World State. Partial Plans World States. Regress Action. Planning as Search State Space Plan Space Algorihtm Progression Regression Partial-Order causal link: UCPOP Node World State Set of Partial Plans World States Edge Apply Action If prec satisfied, Add adds,

More information

Multilevel Stochastic Local Search for SAT

Multilevel Stochastic Local Search for SAT Multilevel Stochastic Local Search for SAT Camilo Rostoker University of British Columbia Department of Computer Science rostokec@cs.ubc.ca Chris Dabrowski University of British Columbia Department of

More information

Iterated Robust Tabu Search for MAX-SAT

Iterated Robust Tabu Search for MAX-SAT Iterated Robust Tabu Search for MAX-SAT Kevin Smyth 1, Holger H. Hoos 1, and Thomas Stützle 2 1 Department of Computer Science, University of British Columbia, Vancouver, B.C., V6T 1Z4, Canada {hoos,ksmyth}@cs.ubc.ca

More information

1 Introduction RHIT UNDERGRAD. MATH. J., VOL. 17, NO. 1 PAGE 159

1 Introduction RHIT UNDERGRAD. MATH. J., VOL. 17, NO. 1 PAGE 159 RHIT UNDERGRAD. MATH. J., VOL. 17, NO. 1 PAGE 159 1 Introduction Kidney transplantation is widely accepted as the preferred treatment for the majority of patients with end stage renal disease [11]. Patients

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems Greedy Local Search Bernhard Nebel, Julien Hué, and Stefan Wölfl Albert-Ludwigs-Universität Freiburg June 19, 2007 Nebel, Hué and Wölfl (Universität Freiburg) Constraint

More information

Reducing Graphic Conflict In Scale Reduced Maps Using A Genetic Algorithm

Reducing Graphic Conflict In Scale Reduced Maps Using A Genetic Algorithm Reducing Graphic Conflict In Scale Reduced Maps Using A Genetic Algorithm Dr. Ian D. Wilson School of Technology, University of Glamorgan, Pontypridd CF37 1DL, UK Dr. J. Mark Ware School of Computing,

More information

A New Algorithm for Singleton Arc Consistency

A New Algorithm for Singleton Arc Consistency A New Algorithm for Singleton Arc Consistency Roman Barták, Radek Erben Charles University, Institute for Theoretical Computer Science Malostranské nám. 2/25, 118 Praha 1, Czech Republic bartak@kti.mff.cuni.cz,

More information

CSP- and SAT-based Inference Techniques Applied to Gnomine

CSP- and SAT-based Inference Techniques Applied to Gnomine CSP- and SAT-based Inference Techniques Applied to Gnomine Bachelor Thesis Faculty of Science, University of Basel Department of Computer Science Artificial Intelligence ai.cs.unibas.ch Examiner: Prof.

More information

Discrete Lagrangian-Based Search for Solving MAX-SAT Problems. Benjamin W. Wah and Yi Shang West Main Street. Urbana, IL 61801, USA

Discrete Lagrangian-Based Search for Solving MAX-SAT Problems. Benjamin W. Wah and Yi Shang West Main Street. Urbana, IL 61801, USA To appear: 15th International Joint Conference on Articial Intelligence, 1997 Discrete Lagrangian-Based Search for Solving MAX-SAT Problems Abstract Weighted maximum satisability problems (MAX-SAT) are

More information

CS 188: Artificial Intelligence Spring Today

CS 188: Artificial Intelligence Spring Today CS 188: Artificial Intelligence Spring 2006 Lecture 7: CSPs II 2/7/2006 Dan Klein UC Berkeley Many slides from either Stuart Russell or Andrew Moore Today More CSPs Applications Tree Algorithms Cutset

More information

Random Subset Optimization

Random Subset Optimization Random Subset Optimization Boi Faltings and Quang-Huy Nguyen Artificial Intelligence Laboratory (LIA), Swiss Federal Institute of Technology (EPFL), IN-Ecublens, CH-1015 Ecublens, Switzerland, boi.faltings

More information

ESE535: Electronic Design Automation CNF. Today CNF. 3-SAT Universal. Problem (A+B+/C)*(/B+D)*(C+/A+/E)

ESE535: Electronic Design Automation CNF. Today CNF. 3-SAT Universal. Problem (A+B+/C)*(/B+D)*(C+/A+/E) ESE535: Electronic Design Automation CNF Day 21: April 21, 2008 Modern SAT Solvers ({z}chaff, GRASP,miniSAT) Conjunctive Normal Form Logical AND of a set of clauses Product of sums Clauses: logical OR

More information

Solving Constraint Satisfaction Problems by Artificial Bee Colony with Greedy Scouts

Solving Constraint Satisfaction Problems by Artificial Bee Colony with Greedy Scouts , 23-25 October, 2013, San Francisco, USA Solving Constraint Satisfaction Problems by Artificial Bee Colony with Greedy Scouts Yuko Aratsu, Kazunori Mizuno, Hitoshi Sasaki, Seiichi Nishihara Abstract In

More information