Improving Diversification in Local Search for Propositional Satisfiability


Improving Diversification in Local Search for Propositional Satisfiability

by

Thach-Thao Nguyen Duong
Bachelor of Information Technology, University of Science, Vietnam (2006)
Master of Computer Science, University of Science, Vietnam (2009)

Institute for Integrated and Intelligent Systems
Science, Environment, Engineering and Technology
Griffith University

A thesis submitted in fulfillment of the requirements of the degree of Doctor of Philosophy

February 2014


Abstract

In recent years, Propositional Satisfiability (SAT) has become a standard for encoding complex, real-world constrained problems. SAT has significant impacts on various research fields in Artificial Intelligence (AI) and Constraint Programming (CP). SAT algorithms have also been successfully used in solving many practical and industrial applications that include electronic design automation, default reasoning, diagnosis, planning, scheduling, image interpretation, circuit design, and hardware and software verification.

The most common representation of a SAT formula is the Conjunctive Normal Form (CNF). A CNF formula is a conjunction of clauses where each clause is a disjunction of Boolean literals. A SAT formula is satisfiable if there is a truth assignment for each variable such that all clauses in the formula are satisfied. Solving a SAT problem means determining a truth assignment that satisfies a CNF formula. SAT was the first problem proved to be NP-complete [20].

There are many algorithmic methodologies to solve SAT. The most obvious one is systematic search; however, another popular and successful approach is stochastic local search (SLS). Systematic search is usually referred to as complete search or backtrack-style search. In contrast, SLS explores the search space by randomisation and perturbation operations. Although SLS is an incomplete search method, it is often able to find solutions effectively using limited time and resources. Moreover, some SLS solvers can solve hard SAT problems in a few minutes, while these problems can be beyond the capacity of systematic search solvers. Due to the widespread demand for efficient SLS for SAT (SLS-SAT), and because many large-scale problems have been successfully encoded into SAT, methods to boost the performance of SLS-SAT solvers are highly desirable.

Although SLS-SAT has been rigorously investigated in recent decades, various open areas remain that should be addressed to further boost its performance. These include improving the diversification phases to explore the search space more widely, improving the intensification phases to find solutions faster, escaping local

minima intelligently, and controlling the diversification and intensification during the search. All of these issues require handling the trade-off between intensification and diversification. In this thesis, we proposed and developed novel strategies to enhance SLS performance in both the intensification and diversification phases. We aimed to verify empirically the efficiency and robustness of our proposed strategies on structured and random instances. Since local search and systematic search have different effects and impacts on solving structured and random instances, optimal settings of the enhancement strategies were learnt separately on different instance sets. The outcomes showed that optimal parameter settings depend on the type of problem instance. Our four strategies to improve the performance of SLS for SAT are listed below:

- Learning strategies on variables that are involved in stagnation situations, with a view to efficiently preventing local search from falling into traps. [Part of this work was published as regular papers at the AAAI-2012, ECAI-2012 and AI-2012 conferences.]
- Exploiting clause weights in clause selection at stagnation to help SLS escape local minima. [Part of this work was published as a regular paper at the IJCAI-2013 conference.]
- Decreasing greediness in the scoring function in the diversification phases to help SLS reach solutions faster, especially on structured instances. [Part of this work was published as a regular paper and was awarded the Best Paper Award at the AI-2013 conference.]
- Backtracking from local minima to identify and revert conflicts as an alternative way to escape local minima. [Part of this work was published as a regular paper at the SCAI-2013 conference.]

All of these enhancement strategies were tested on verification benchmarks and on the SAT 2011 and 2012 competition datasets. Our experimental results showed the robustness and efficiency of the proposed strategies on a wide range of instance sets. The results demonstrate that, by following our proposed enhancement strategies, significant improvements can be achieved on structured and random instances.

Publications

The results presented in this thesis have already been published in a number of conferences that are highly ranked by the Excellence in Research for Australia (ERA) ranking system. The articles are listed below in chronological order of publication.

1. Duc-Nghia Pham, Thach-Thao Duong, Abdul Sattar, Trap Avoidance in Local Search Using Pseudo-Conflict Learning, in Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI), pp , July 2012, Toronto, Canada. [85]

2. Thach-Thao Duong, Duc Nghia Pham, Abdul Sattar, A Study of Local Minimum Avoidance Heuristics for SAT, in Proceedings of the 20th European Conference on Artificial Intelligence (ECAI), pp , August 2012, Montpellier, France. [27]

3. Thach-Thao Duong, Duc Nghia Pham, Abdul Sattar, A Method to Avoid Duplicative Flipping in Local Search for SAT, in Proceedings of the 25th Australasian Joint Conference on Artificial Intelligence (AI), pp , 4-7 December 2012, Sydney, Australia. [26]

4. Thach-Thao Duong, Duc Nghia Pham, M.A.Hakim Newton, Abdul Sattar, Weight-Enhanced Diversification in Stochastic Local Search for Satisfiability, in Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI), pp , 3-9 August 2013, Beijing, China. [29]

5. Huu-Phuoc Duong, Thach-Thao Duong, Duc-Nghia Pham, Abdul Sattar, Anh-Duc Duong, Trap Escape for Local Search by Backtracking and Conflict Reverse, in Proceedings of the 12th Scandinavian Conference on Artificial Intelligence (SCAI), pp. 85-94, November 2013, Aalborg, Denmark. [25]

6. Thach-Thao Duong, Duc-Nghia Pham, Abdul Sattar, Diversify Intensification Phases for Local Search with a New Probability Distribution, in Proceedings of the 26th Australasian

Joint Conference on Artificial Intelligence (AI), pp , 1-6 December 2013, Dunedin, New Zealand (Best Paper Award and Best Student Paper Award). [28]

Statement of Originality

This work has not previously been submitted for a degree or diploma to any university. To the best of my knowledge and belief, the thesis contains no material previously published or written by another person except where due reference is made in the thesis itself.

Thach-Thao Nguyen Duong
February 2014

Contents

Abstract; Publications; Statement of Originality; Contents; List of Figures; List of Tables; List of Algorithms; Abbreviations and Notations; Acknowledgements

1 Introduction
  CSP; SAT; Constraint-based Search Strategies; Search Strategies for SAT; Research Questions; Research Contributions; Thesis Outline

2 Stochastic Local Search for SAT
  Constraint-based Stochastic Local Search (Evaluation Function and Objective Function; Global Optimum, Local Optimum and Plateau; Iterative Improvement; Randomised Iterative Improvement; Probabilistic Iterative Improvement; Dynamic Local Search; Simulated Annealing; Tabu Search; Iterated Local Search); Existing Stochastic Local Search for SAT (Preliminary Notations; GSAT; HSAT; GSAT-random-walk; TSAT - Tabu GSAT; GSAT Hill-climbing; WalkSAT; WalkSAT/SKC; Novelty; Adaptive WalkSAT and AdaptNovelty); Clause-weighting Scheme (GSAT-clause-weight and Breakout; WGSAT and WGSAT-Decay; DLM; GLSSAT; SDF; ESG; SAPS; PAWS); Some Contemporary and Hybrid Algorithms (G2WSAT (2005) and adaptG2WSAT+; VW1 and VW2; gNovelty+; TNM; Sparrow; EagleUP; sattime; CCASat); The Categorisation and Milestones in Developing SLS-SAT

3 Pseudo-conflict Learning
  Introduction; Trap Avoidance in SLS-SAT (gNovelty+; Exploration of Poor Stagnation Performance); Trap Prevention Strategy (Stagnation Paths and Stagnation Weights; Learning and Prevention from Stagnation; Pseudo-conflict Learning; NoveltyPCL: Integration of PCL on Novelty; gNovelty+PCL: Integration of PCL on gNovelty+); Evaluation of gNovelty+PCL (Ternary Chains with Pseudo-Conflict Learning; Verification Benchmarks; SAT 2011 Benchmarks); Search Enhancements for PCL (Forgetting Outdated Conflicts; A Study of Uncertainty; PCL for Uncertainty and Forgetting; Variants of PCL); Evaluation of Search Enhancements (Ternary Chains; Verification Benchmarks); Conclusion and Future Work

4 Weight-Enhanced Diversification
  Introduction; Clause and Variable Weighting (gNovelty+); Weight-Enhanced Diversification (Clause-Weighting Enhancement; Variable-Weighting Enhancement; gNovelty+GC); Experimental Results (Experiments on Ternary Chains; Experiments on Verification Problems; Experiments on SAT 2011 Benchmarks); Discussions; Conclusion and Future Work

5 Probability-based Evaluation Function
  Introduction; Preliminaries (Basic Scoring Function; Dynamic Scoring Function; Clause-weighting; VW; Sparrow); Probability-based Dynamic Scoring Function (Motivation; Defining a Probability Distribution; Probability Distribution on Greediness; Probability Distribution on Diversification; Diversification Parameter α); The PCF Algorithm; Experiments (Experiment Setup; Experiment Analysis; Analysis of Parameter Configurations); Conclusion and Future Work

6 Pseudo-Conflict Reversion
  Introduction; Preliminaries (gNovelty+PCL; PCL Heuristics at Stagnation Phases); Backtracking Retrieval to Reverse Pseudo-Conflicts (Backtracking Retrieval; Variable Weights for Tie-breaking; NoveltyE for Trap Escape; The NovEsc Algorithm); Experiments (Experiment Setup; Result Analysis; Discussion about Parameter Configurations); Conclusion and Future Works

7 Conclusions
  Summary of Contributions; Future Directions

List of Figures

- State space landscape of local search
- Dynamic local search fills local minima
- Flow chart illustrating Novelty's mechanism of variable selection
- Performances of the 7 canonical SLS-SATs on Ternary chains
- gNovelty+PCL (vs) other SLS-SATs on Ternary chain size [-0]
- gNovelty+PCL (vs) SLS-SATs on Ternary chain size [0-00]
- gNovelty+PCL (vs) other SLS-SATs on Verification benchmarks
- gNovelty+PCL (vs) other SLS-SAT solvers on the SAT 2011 competition dataset
- Runtime distribution of gNovelty+PCL on Ternary chain size [0,00]
- Runtime comparison of PCL: single (vs) duplication (vs) all selection
- Runtime comparison of PCL: Time-window (vs) No Time Window
- Runtime comparison of PCL: Smooth (vs) No Smooth
- Performance of gNovelty+PCL on Verification benchmarks
- Performance of gNovelty+PCL on SAT 2011 competition instances
- gNovelty+GC (vs) SLS-SATs on Ternary chains size [-0]
- gNovelty+GC (vs) SLS-SATs on Ternary chains size [0-00]
- Runtime correlation between gNovelty+GC and SLS-SATs
- Run-time correlation of PCF (vs) SLS-SATs
- Runtime correlation of NovEsc (vs) SLS-SATs

List of Tables

- Algorithmic comparison of SLS-SATs on trap escape
- Performance of gNovelty+PCL on Verification benchmarks
- Performance of gNovelty+PCL on the 2011 SAT competition datasets
- Average duplication ratios of VW2, gNovelty+ and Sparrow
- Variants of Pseudo-Conflict Learning
- Performance of optimised PCL variants on Verification benchmarks
- Performance of optimised PCL variants on SAT 2011 benchmarks
- Performance of gNovelty+GC (vs) SLS-SATs on cbmc, swv, and sss-sat
- Performance of gNovelty+GC (vs) SLS-SATs on SAT 2011 structured instances
- Performance of gNovelty+GC (vs) SLS-SATs on SAT 2011 random instances
- Optimal parameters of gNovelty+GC
- Performance of PCF on cbmc and SAT 2012 Crafted instances
- Performance of PCF on SAT 2011 Competition medium-size random instances
- Optimal parameters of PCF
- Performance of NovEsc (vs) SLS-SATs on cbmc and sss-sat
- Optimal parameters of NovEsc

List of Algorithms

1 Iterative Improvement
2 Randomised Iterative Improvement
3 Probabilistic Iterative Improvement
4 Dynamic Local Search
5 Simulated Annealing
6 Tabu Iterative Improvement
7 Iterated Local Search
8 Random-Assignment(Θ)
9 GSAT(Θ, maxTries, maxSteps)
10 HSAT(Θ, maxTries, maxSteps)
11 LeastRecentVariable(VarSet)
12 RandomWalk(Set)
13 RandomWalks(Θ, Ω)
14 GSAT-random-walk(Θ, maxTries, maxSteps, wp)
15 GSAT-Tabu(Θ, maxTries, maxSteps, k)
16 GSAT Hill-climbing(Θ, maxTries, maxSteps)
17 WalkSAT-SKC(Θ)
18 NoveltySelection(f, c, p)
19 MostRecentVariable(VarSet)
20 Novelty(Θ, p)
21 Novelty+(Θ, p)
22 InitNoveltyNoise(θ)
23 AdaptNoveltyNoise(p, φ)
24 AdaptNovelty+(Θ)
25 GSAT-clause-weight(Θ)
26 Breakout(Θ)
27 WGSAT-Decay(Θ, ρ)
28 DLM-98-BASIC-SAT(Θ, θ_1, θ_2, δ_o, δ_d)
29 GLSSAT(Θ, smax, pmax, pdecay)
30 SDF(Θ, η, δ)
31 ESG(Θ, α_sat, α_unsat, η)
32 SAPS(p_smooth, wp, ρ, SAPS_thresh)
33 PAWS(Θ, Max_inc)
34 Novelty++(f, p, dp)
35 NoveltySelection(f, c, p)
36 G2WSAT
37 adaptG2WSAT+(Θ, p, wp)
38 VW1(Θ, p)
39 VW2(Θ, p, s, b)
40 gNovelty+(Θ, sp)
41 adaptG2WSAT Selection(p2, wp)
42 TNM
43 Sparrow2011(Θ, a_1, a_2, a_3)
44 EagleUP(Θ)
45 NoveltySelection(f(), c, p, satTime)
46 sattime(Θ, wp, p)
47 sattime(Θ, wp, p)
48 CCA heuristic at Greedy phases(Θ)
49 Swcca(Θ, γ, ρ)
50 CCApscore(Θ)
51 SLSSAT(Θ, MaxSteps)
52 gNovelty+(Θ, MaxSteps, sp)
53 Pseudo-Conflict Learning PCL(k, H)
54 NoveltyPCL(p)
55 gNovelty+PCL(k, sT, dp, sp)
56 Pseudo-Conflict Learning PCL(k, H, s, T, tp)
57 gNovelty+(Θ, sp)
58 WeightedNovelty(p)
59 gNovelty+GC(β, p)
60 gNovelty+GC(Θ, β, sp)
61 PCF(Θ, sp)
62 gNovelty+PCL(k, sp)
63 Pseudo-conflict learning strategy PCL(k, H)
64 NoveltyE(P, γ, p)
65 NovEsc(γ, k, sp)

Abbreviations and Notations

SAT: Boolean Propositional Satisfiability Problem (Satisfiability for short)
SLS: Stochastic Local Search
SLS-SAT: Stochastic Local Search for Satisfiability Problems
SLS-SATs: Stochastic Local Search Algorithms for Satisfiability Problems
GLS: Guided Local Search
GLS-SAT: Guided Local Search for Satisfiability Problems
DLS: Dynamic Local Search
DLS-SAT: Dynamic Local Search for Satisfiability Problems
SAT: Satisfiability Problem
MAX-SAT: Maximum Satisfiability Problem
CNF: Conjunctive Normal Form
DNF: Disjunctive Normal Form
DLM: Discrete Lagrangian Multipliers
PAWS: Pure Additive Weighting Scheme
SAPS: Scaling and Probabilistic Smoothing
GLSSAT: Guided Local Search for Satisfiability Problems
Eq.: Equation
Alg.: Algorithm
Fig.: Figure

Θ: SAT formula
n: Number of variables of Θ
m: Number of clauses of Θ
Ω: Solution candidate
Ω_v: Solution candidate Ω after variable v is flipped
C: The set of clauses of Θ
V: The set of variables in Θ
V[c]: The set of variables in clause c
L[c]: The set of literals in clause c
c: A specific clause
c_u: A specific unsatisfied clause
c_i: The i-th clause (i = 1..m)
v: A specific variable
v_i: The i-th variable (i = 1..n)
F_Ω: Number of unsatisfied clauses of Θ under the assignment Ω
F_Ω(v): Decrease in the number of unsatisfied clauses when variable v of Ω is flipped
W_Ω: Total weight of unsatisfied clauses of Θ under the assignment Ω
W_Ω(v): Decrease in the total weight of unsatisfied clauses when variable v of Ω is flipped
ft[v]: The last step at which variable v was flipped (ft stands for flip time)
cw[c]: The weight of clause c
N[v]: The set of neighbour variables of v
NC[v]: The set of clauses containing variable v
x : max(f(x)): The x maximising the function f(x)
x : min(f(x)): The x minimising the function f(x)

Acknowledgements

I am deeply grateful to my supervisors, Prof. Abdul Sattar and Dr. Duc Nghia Pham, for their constructive discussions and brilliant suggestions. With their dedication, they always kept me in the right direction, while at the same time giving me the freedom to explore my own ideas. I gratefully acknowledge their encouragement and tireless support, especially during the hard times of my PhD candidature. I would also like to gratefully acknowledge my colleagues at National ICT Australia (NICTA) Ltd. (Dr. M.A.Hakim Newton and Dr. Charles Gretton) and my friends in the Institute for Integrated and Intelligent Systems at Griffith University for their kindness and helpful comments on my research. I would like to acknowledge the generous financial and facility assistance from Griffith University and the Queensland Research Laboratory of National ICT Australia (NICTA) Ltd.; without their support this work would not have been possible. Special thanks also to the Research Computing Services of Griffith University for providing the high performance computing infrastructure and IT support during my PhD. And finally, I would like to express my gratitude to my parents for their unconditional support and love.


Chapter 1
Introduction

In this chapter, we provide brief background information on two popular and closely related research problems: the constraint satisfaction problem (CSP) and the propositional satisfiability (SAT) problem. We then explore the search strategies used in solving these problems; our focus, however, is mostly on stochastic local search approaches. Then, we present the research questions investigated in this study and the contributions made thereby. Lastly, we outline the organisation of the rest of the thesis.

1.1 CSP

A constraint satisfaction problem (CSP) is defined as the problem of finding an assignment of valid properties or values to objects or variables so that the assignment satisfies all the specified constraints on those objects. If no such assignment exists, the task is then to prove that none exists. Many real-world problems have constraints on objects. For example, planning and scheduling problems have temporal constraints, and a budget management problem has cost constraints on assets. These problems can be described as CSPs and then solved efficiently using constraint solvers. While the exponential number of possible variable assignments makes CSPs computationally difficult, the idea of a constraint can also be used to effectively prune the search space and thus solve many hard problems. In fact, the constraint-based approach has been successfully applied to real-world problems in domains such as operations research (e.g. scheduling, timetabling), bioinformatics (DNA sequencing), and electrical engineering (circuit layout) [4].

Definition. A CSP is formally defined as a tuple (V, D, C) where

- V = {v_1, v_2, ..., v_n} is a finite set of variables whose values are to be determined;
- D = {d_1, d_2, ..., d_n} is a set of finite domains, where each domain d_i contains a finite set of values that can be assigned to the variable v_i;
- C = {c_1, c_2, ..., c_m} is a finite set of constraints that restrict the assignment of values to the variables in V.

Similar to a CSP, a constraint optimisation problem (COP) can be defined as the problem of finding an assignment of valid properties to objects so that a pre-defined objective function is optimised (maximised or minimised) under that assignment. However, the solution to a COP may or may not be expected to satisfy all the specified constraints on the given objects. Not surprisingly, a CSP can be viewed and solved as a COP where the objective function is to minimise the number of unsatisfied constraints. In this case, the CSP is satisfiable if there exists an assignment under which the objective function equals zero.

1.2 SAT

The SAT problem [39] is a sub-class of the constraint satisfaction problem [8]. Boolean SAT is a special type of SAT where the domain of each variable is the set of binary values {false, true} and each constraint is a logical disjunction. If the formula is limited to only logical operations (i.e. and (∧), or (∨) and not (¬)), then the formula is said to be a propositional Boolean formula. There are two normal forms to express a SAT formula: Conjunctive Normal Form (CNF) and Disjunctive Normal Form (DNF). A CNF formula is a conjunction over disjunctions of literals, where literals are negated or non-negated variables. In contrast, a DNF formula is a disjunction over conjunctions of literals. A k-CNF is a CNF formula having exactly k literals in every clause, whereas a k-DNF is a DNF formula having exactly k literals in every clause. An example of a 2-CNF is (v_1 ∨ v_2) ∧ (¬v_2 ∨ v_3), where each clause has 2 literals. Similarly, an example of a 2-DNF is (v_1 ∧ v_2) ∨ (¬v_2 ∧ v_3), where each clause also has 2 literals. SAT instances in this dissertation are represented as propositional formulas in conjunctive normal form, that is, each instance is a conjunction of disjunctions of literals. The problem format is restricted to CNF-encoded formulas since this is widely accepted as the standard input format for SAT solvers [2]. Any propositional formula can be encoded into CNF with linear overhead by adding additional variables.
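In practice, CNF instances are usually exchanged in the DIMACS CNF file format: a header line `p cnf <variables> <clauses>` followed by clauses written as zero-terminated lists of signed integer literals. As a minimal illustration, the sketch below parses the 2-CNF example above from that format into a list of clauses; the helper name `parse_dimacs` is simply an illustrative choice.

```python
def parse_dimacs(text):
    """Parse DIMACS CNF text into (num_vars, clauses), where each clause is a
    list of signed integers: k stands for v_k and -k for its negation."""
    num_vars, clauses = 0, []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("c"):       # skip blank lines and comments
            continue
        if line.startswith("p cnf"):
            num_vars = int(line.split()[2])
            continue
        literals = [int(tok) for tok in line.split()]
        clauses.append(literals[:-1])              # drop the trailing 0 terminator
    return num_vars, clauses

# The 2-CNF example (v1 or v2) and (not v2 or v3) in DIMACS form.
example = """p cnf 3 2
1 2 0
-2 3 0
"""
print(parse_dimacs(example))   # (3, [[1, 2], [-2, 3]])
```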

Definition. A Boolean Propositional Satisfiability problem is formally defined under the CNF form in the following way:

- A set of n Boolean variables V = {v_1, v_2, ..., v_n}. Each variable takes a value of true or false.
- A set of m clauses C = {c_1, c_2, ..., c_m}. Each clause, e.g. v_1 ∨ v_2 ∨ v_3, is a disjunction of literals, where each literal is a variable or its negation. The notation l_mn refers to the literal in the m-th clause at the n-th position. Each literal can be either true or false.
- A SAT formula is a conjunction of all given clauses: F = c_1 ∧ c_2 ∧ ... ∧ c_m.

As an example, consider the propositional formula v_1 ∧ (¬v_2 ∨ v_3), which has two clauses. The problem now is how to assign a T or F (i.e. true or false) value to the variables v_1, v_2, v_3 so that the formula becomes true. The table below shows all the possible assignments and the corresponding Boolean value of the formula. The formula is satisfiable because there exists at least one (in fact three) assignment that satisfies it; the rows whose final column is T correspond to the satisfying assignments.

  v_1   v_2   v_3   v_1 ∧ (¬v_2 ∨ v_3)
  F     *     *     F
  T     T     T     T
  T     F     T     T
  T     T     F     F
  T     F     F     T

Under the CNF format, the target of the SAT problem is to determine a binary assignment to the variables such that all clauses are satisfied. If no assignment satisfies all clauses, the SAT problem is unsatisfiable. Solving the propositional satisfiability problem was proved to be NP-complete [20], and SAT has since become a prototypical NP-hard problem [35].

1.3 Constraint-based Search Strategies

Like other finite-domain artificial intelligence (AI) problems, CSP and SAT problems are usually solved by using a search algorithm. In general, there are two main types of search strategy for solving CSPs and SATs: backtracking and stochastic local search (SLS).
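As a quick cross-check of the example above, the short sketch below enumerates all 2^3 assignments of v_1, v_2, v_3 for the formula v_1 ∧ (¬v_2 ∨ v_3), in the exhaustive spirit of complete search, and confirms the three satisfying assignments listed in the table; it is only an illustrative sketch.

```python
from itertools import product

# Formula v1 and (not v2 or v3), written as a list of clauses over signed literals.
clauses = [[1], [-2, 3]]

def satisfies(assign, clauses):
    """True if every clause has at least one literal that is true under `assign`."""
    return all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in clauses)

# Exhaustive enumeration over all 2^3 assignments (what a complete method guarantees).
models = [
    dict(zip((1, 2, 3), values))
    for values in product([False, True], repeat=3)
    if satisfies(dict(zip((1, 2, 3), values)), clauses)
]
print(len(models))   # 3 satisfying assignments, matching the table above
```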

Typically, a backtracking (or systematic) algorithm starts with a partial assignment (i.e. only a subset of variables is assigned values and the rest of the variables remain free) and attempts to extend this assignment to a complete one while ensuring that all the constraints are satisfied. In other words, it divides the search space hierarchically into a tree structure, where each node represents an assigned variable and each branch coming out of a node represents a value that can be assigned to that node. Whenever the search cannot find a domain value to assign to a free variable without violating a constraint, it backtracks to the previous node and attempts to find a new value for the variable at that node. If there is a valid value, the search continues down the search tree, backtracking as necessary, until a solution is found. As a result, backtracking algorithms are able to prove whether a problem is satisfiable or not. In the optimisation case, backtracking algorithms are guaranteed to find the optimal solution (due to their ability to generate all solutions to a problem). Therefore, backtracking algorithms are complete. Despite many improvements obtained by using various powerful propagation techniques and different variable/value ordering heuristics, backtracking algorithms unfortunately still do not perform well on many large and complex real-world problems [4].

In contrast, a stochastic local search algorithm starts with a full assignment to all variables, regardless of whether all constraints are satisfied or not. It then iteratively tries to adjust the current assignment by assigning new values to a subset of variables. Normally, the selection of the next move (i.e. the combination of variables and to-be-assigned values) is heuristically guided towards a solution. In a CSP, the objective function is generally to minimise the number of violated constraints. In a COP, the objective function also includes the optimisation criterion, to move closer to the optimal solution. Although SLS algorithms are incomplete (i.e. they cannot prove that a CSP is unsatisfiable and cannot prove the optimality of their solutions for a COP), they tend to produce better results within a reasonable time-frame than backtracking algorithms when solving large and complex problems [34].

1.4 Search Strategies for SAT

SAT has been emerging as a popular constraint-solving technology for industrial and application problems. However, solving large-scale SAT-encoded application problems is still a big challenge for researchers in the field. Experts in other fields such as verification, planning, circuit design and bioinformatics have made attempts to encode their problems in SAT and used SAT algorithms

to solve their corresponding hard SAT instances. The SAT competition [2] is a biennial contest to evaluate modern SAT solvers. It allows SAT instances constructed from real-world problems and industrial applications to be submitted for solving, and the latest SAT solvers are encouraged to participate in the contest. Through these events, hard real-world SAT instances have benefited from the state-of-the-art SAT solvers.

Analogous to CSPs, there are two approaches to building a SAT solver: complete (or systematic) and incomplete (or non-systematic) search algorithms. Nowadays, both complete and incomplete algorithms are used to solve SAT problems. Since systematic algorithms explore the search space completely, they are able to find a solution or prove that the problem is unsatisfiable. In contrast, incomplete search does not systematically explore the whole search space, so it cannot determine whether a SAT problem is unsatisfiable or guarantee that a solution will be found. In the SAT competition series, systematic search algorithms showed their advantages over stochastic local search in solving structured problems. Therefore, improving the efficiency of stochastic local search algorithms in solving structured problems is very critical. On random problems, however, stochastic local search algorithms were recognised as the winning solvers.

Most complete-search SAT algorithms are based on the Davis-Putnam procedure [23, 22]. In the category of SLS, most algorithms are developed based on the GSAT framework [97]. The early version of GSAT used the number of satisfied clauses as its objective function, together with local search strategies based on randomness. The WalkSAT algorithm was further developed by incorporating a random walk strategy to select a variable in an unsatisfied clause [96]. Afterwards, dynamic local search algorithms were developed, mostly based on clause-weighting schemes [95]. Nowadays, most state-of-the-art stochastic local search algorithms are hybrids and mixtures of these approaches.
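The ingredients just described, GSAT's greedy objective, WalkSAT's random walk on a variable from an unsatisfied clause, and a clause-weighting scheme, can be combined in a toy loop. The sketch below is a generic, hedged illustration of how such a hybrid fits together; it is not a faithful implementation of GSAT, WalkSAT, or any of the dynamic local search solvers cited above, and all names and parameter values are illustrative.

```python
import random

def unsat_clauses(clauses, assign):
    """Indices of clauses falsified by the current complete assignment."""
    return [i for i, c in enumerate(clauses)
            if not any(assign[abs(l)] == (l > 0) for l in c)]

def weighted_gain(clauses, weights, assign, v):
    """Reduction in total weight of unsatisfied clauses if variable v were flipped."""
    before = sum(weights[i] for i in unsat_clauses(clauses, assign))
    assign[v] = not assign[v]
    after = sum(weights[i] for i in unsat_clauses(clauses, assign))
    assign[v] = not assign[v]                      # undo the trial flip
    return before - after

def toy_weighted_sls(clauses, num_vars, max_steps=20_000, wp=0.1, seed=0):
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}
    weights = [1] * len(clauses)                   # one weight per clause
    for _ in range(max_steps):
        falsified = unsat_clauses(clauses, assign)
        if not falsified:
            return assign                          # all clauses satisfied
        if rng.random() < wp:                      # WalkSAT-style diversification step
            v = abs(rng.choice(clauses[rng.choice(falsified)]))
        else:                                      # GSAT-style greedy step on weighted score
            v = max(range(1, num_vars + 1),
                    key=lambda u: weighted_gain(clauses, weights, assign, u))
            if weighted_gain(clauses, weights, assign, v) <= 0:
                for i in falsified:                # local minimum: bump clause weights
                    weights[i] += 1
        assign[v] = not assign[v]
    return None

print(toy_weighted_sls([[1, 2], [-2, 3], [-1, -3]], num_vars=3))
```

Bumping the weights of currently falsified clauses changes the landscape seen by the greedy score, which is the basic mechanism by which clause-weighting schemes fill local minima.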

1.5 Research Questions

The SAT problem is important in solving many practical problems in mathematical logic, constraint satisfaction, VLSI engineering, computing theory, and industrial verification [46]. SAT is conceptually a combinatorial problem that has a prominent role in complexity theory and artificial intelligence because it is a prototypical NP-complete problem. The advantage of systematic search compared to local search is that it is more effective in solving highly-constrained problems; however, systematic search is not very efficient at solving large-scale problems because of the big search space. In contrast, SLS heuristics, based on randomisation and perturbation, are very efficient and fast at finding solutions even with limited time and resources. Additionally, SLS is very effective for large-scale problems because of the compromise between complexity and goal-orientation via randomisation mechanisms that help explore the search space widely. Through the series of SAT competitions, the efficiency and robustness of SLS in solving large-scale combinatorial problems have greatly improved [2]. Because of these advantages of SLS-SATs, we focus our work on local search and on improving the performance of local search for SAT.

Stochastic local search is very quick at solving random instances but unfortunately not so efficient on highly-constrained structured problems. In general, this is due to three main limitations of stochastic local search algorithms: incompleteness, re-visitation, and stagnation [48].

Incompleteness: Since SLS does not explore the search space completely, it does not guarantee to find all solutions for CSPs or the optimal solutions for COPs. In order to broaden the visited areas and to explore unvisited areas, some SLS algorithms attempt to improve their diversification and trap escaping strategies. The simplest techniques to improve diversification are random walk [96] and restart [97].

Re-visitation: SLS heuristics neither explicitly store the history of the visited places nor possess a backtracking mechanism as in systematic search. Therefore, the search easily visits previous places several times. There are memory-based techniques that have been used to prevent SLS from re-visiting previous places. These methods save a partial history of the search progress and avoid the most recently visited places. Tabu search [76], when applied to SAT, acts as a short-term memory and restricts re-visitation of recently flipped variables within a specific duration of time. There are also SLS-SAT algorithms that employ long-term memory, such as variable selection time [37], variable selection frequency [90] and clause unsatisfiability frequency [95].

Stagnation: The local search trajectory is easily attracted to local minima because SLS is not a backtracking search; it is rather based on a goal-oriented heuristic. Plateaus and local minima are two difficult stagnation problems for SLS. A plateau is one of the common problems related to stagnation in SLS. It is defined as a search area where all neighbours have the

same value of the objective function. When the search procedure falls into a wide plateau, the algorithm gets stuck in a flat landscape. In that situation, moving to any neighbour does not improve the objective function. In the case of large plateaus, if the SLS performs flat (sideways) moves to its neighbours, it still remains in the large plateau. Moreover, a local minimum is a search point where all neighbours have a worse cost. If SLS falls into a deep local minimum, it will eventually fall back to the deep valley even if the algorithm performs many worse-cost moves. In short, it is time-consuming for the algorithm to escape a trap if it falls into large plateaus or deep local minima. The cure for this trapped circumstance is to restart at another position in the search space or to jump to another search area. There are sophisticated methods that help the search escape local minima and improve diversification without resorting to any restart mechanism. Such methods include flat moves [114], dynamic adjustment of the objective function [95, 80], and probability-based random walk [96].

In reality, SLS encounters more stagnation situations on structured instances than on random instances. There are two reasons for this phenomenon. Firstly, structured instances come from real-world problems; hence they are highly constrained and they additionally contain hidden constraints. These structured constraints and hidden constraints create local minima for SLS. Once local search falls into a trap, it is hard to escape because of the large number of violated constraints. SLS needs to resolve the conflicts in the violated constraints to escape the traps. Secondly, most SLSs prefer to explore the search space greedily. Thus, it is extremely hard for SLS to escape from a deep local minimum, which happens often for greedy heuristics. It needs a restart mechanism or a strong perturbation to escape the local minimum, because a small number of minor perturbations is not sufficient for escaping local minima.

In order to improve the performance of SLS-SAT, it is necessary to target these three issues by balancing the intensification and diversification capabilities of the search algorithm. Intensification and diversification are two important aspects of SLS-SAT [48]. Intensification aims to greedily improve the solution quality by exploiting the evaluation function. In contrast, diversification aims to prevent search stagnation by keeping the search process away from traps in confined regions. For this reason, we targeted our research at the enhancement of diversification strategies for stochastic local search to intelligently prevent and escape stagnation. From this point of view, our research aim is to improve the diversification of SLS-SAT by focusing on the following research

problems:

- Objective function: We aim to integrate a diversification capability into the objective function in order to decrease the greediness in exploring the search space.
- Long-term learning mechanism: We aim to improve the capability of SLS-SATs to learn the reasons for stagnation during the search trail. For example, we aim to learn the conflicts between the assignment and the constraints between variables and clauses.
- Stagnation escape: Our purpose is to construct a trap escaping mechanism to escape from the current stagnation and to predict future stagnation situations. This mechanism needs to exploit the information learned over the long term, especially about local minima, in order to effectively prevent future stagnation.

1.6 Research Contributions

The main contributions of this dissertation are listed below:

Stagnation prevention strategy by exploiting pseudo-conflict learning: Firstly, we proposed the pseudo-conflict as a new variable property for stagnation prevention and escape. In this work, we introduced a long-term learning strategy focusing on local minima. Our preliminary investigation revealed that developing efficient enhancement techniques for stagnation escape and prevention in local search is an important research issue. It also has significant practical implications. This heuristic aims to exploit long-term memory from stagnation weights in order to assist the search process to avoid and escape local minima intelligently. The approach firstly learns which variables are vulnerable to stagnation and are likely to lead the search to stagnation. A variable weighting strategy was used to compute the frequency of a variable being involved in stagnation. The approach then utilises this information to treat such variables differently in the variable selection phases in order to select variables more cleverly. These new trap prevention strategies were published in AAAI-2012, ECAI-2012 and AI-2012 [85, 27, 26].

Weighting scheme in selecting clauses and variables: Weighting schemes in SLS have recently been recognised as a state-of-the-art strategy to solve structured problems. In a clause weighting scheme, each clause is associated with a weight and the scoring function

is computed based on these weights. There are clause-weighting and variable-weighting schemes, which provide SLS with information learnt from the search trail. In this study, we exploit the conventional clause weighting for a new approach to trap escaping. More specifically, we proposed a new method to select clauses according to a clause-weight mechanism in order to fully exploit the learning information accumulated in the clause weights during the search progress. Additionally, we combine variable weights with the new approach in order to boost the advantage of the weighting strategies. This new weighting strategy for stagnation phases obtains better performance than some current SLS-SAT solvers. This work was published in IJCAI-2013 [29].

The scoring formula based on a new probability distribution: One of the reasons for stagnation in local search is its greedy goal-orientation in terms of its objective function. From this point of view, we attempted to decrease the greediness of the scoring function by a new probability distribution formulation, which is a combination of greediness and diversification instead of the maximum greediness of the conventional score. The score grants a portion of the probability distribution to diversification criteria in variable selection in order to balance greediness and diversification. The significance of the results obtained from this enhancement strategy was recognised at AI-2013 [28], where the related paper won the best paper award.

Backtracking style in selecting variables to reverse the pseudo-conflicts: In the conventional method of escaping local minima, the search selects an unsatisfied clause randomly and selects variables in that clause by mechanisms such as Random Walk, Novelty+ and R-Novelty. In this thesis, we developed a different approach that reverses the recent pseudo-conflicts by retrieving a variable's current stagnation path. This trap escaping strategy achieved promising results against state-of-the-art SAT solvers. This piece of work was published in SCAI-2013 [25].

1.7 Thesis Outline

The remainder of this dissertation is organised as described below:

In Chapter 2, we define the main concepts of propositional satisfiability and present a general literature review of SLS solvers for SAT. The algorithm of each SLS is reviewed and its core

techniques and contributions are analysed. For better readability of the literature, we have exhaustively provided almost all of the algorithms in pseudo-code and have explained the procedures and objective functions in a systematic way.

In Chapter 3, we present our new pseudo-conflict learning strategy integrated with gNovelty+. We named the new algorithm gNovelty+PCL. This is a new contribution to heuristics based on long-term memory. The chapter introduces the new definitions of pseudo-conflict variables, stagnation paths, and stagnation weights. Additionally, a new stagnation escaping strategy based on Novelty heuristics and pseudo-conflict variables is presented.

In Chapter 4, we describe the clause and variable weighting enhancement strategies that are applied at stagnation phases. Firstly, a new variable weighting scheme is integrated in order to improve diversification. Secondly, we apply a new probability-based switching method to navigate the search between the new clause selection strategy and the conventional random walk in clause selection.

In Chapter 5, we present our new dynamic objective function based on a probability distribution. The new scoring function is a combination of greediness and diversification, aiming to improve diversification in the greedy phases of SLS-SAT.

In Chapter 6, we introduce a new stagnation escaping strategy that takes advantage of the pseudo-conflict learning mechanism. The new algorithm, NovEsc, is developed on top of gNovelty+PCL. However, in order to escape local minima efficiently, the new heuristic prefers to select the most pseudo-conflicted variable in the latest stagnation paths. Moreover, the algorithm integrates variable weights as an additional tie-breaking factor.

Finally, we provide the conclusion, list the research contributions of the thesis, and discuss our future work in Chapter 7.


Chapter 2
Stochastic Local Search for SAT

This chapter describes and discusses the key concepts and techniques of existing constraint-based local search strategies. We then explore general constraint-based local search and the state-of-the-art local-search-based SAT algorithms (SLS-SAT). We also discuss the categorisation, performance and milestones of these SLS-SAT techniques.

2.1 Constraint-based Stochastic Local Search

In recent decades, stochastic local search has been an emerging approach for searching large search spaces. It is among the most successful techniques for solving computationally hard problems from computing science, operations research and various application areas (e.g. the travelling salesman problem, routing and scheduling problems, genome sequence assembly, and winner determination in combinatorial auctions) [44]. The key concepts of SLS are randomisation and perturbation. To start the search, the search procedure selects a randomised position in the search space. After this initialisation step, the algorithm iteratively improves the current candidate solution with small modifications until the termination criteria are met. In general, SLS is able to find a good solution but cannot guarantee the optimal solution. Despite the fact that it is incomplete, it has the advantages of using less memory and of being able to find acceptable solutions in a reasonable time for large and complex problems. Given a problem instance π, an SLS algorithm generally has the following components [44]:

- A search space, that is, the finite set S(π) of all candidate solutions (or search positions).

- A neighbourhood relation N(π) ⊆ S(π) × S(π) that determines the criteria for generating the neighbouring candidate solutions of a given solution.
- A finite set of memory states M(π) that specifies the knowledge stored during the search beyond the candidate solutions (e.g. the tabu tenure in Tabu search [40, 41, 42] or the temperature in simulated annealing [65]), which can be used later as additional search guidance when selecting a candidate solution from the neighbours.
- An initialisation function init(π) : ∅ → D(S(π) × M(π)) that specifies an initial search position and memory state.
- A step function step(π) : S(π) × M(π) → D(S(π) × M(π)) that maps each search position and memory state onto a probability distribution over its neighbouring positions and memory states.
- A termination criterion terminate(π) : S(π) × M(π) → D({true, false}) that determines whether to terminate the search based on the current search position and memory state.

Among the above components, the neighbourhood relation and the step function are particularly important. A well-designed neighbourhood relation can restrict the local space surrounding the current solution, and a smart step function can speed up the search. There are various ways to define neighbourhood relations, such as the 1-exchange neighbourhood, the k-exchange neighbourhood or population-based neighbourhoods. Every neighbourhood relation induces a neighbourhood graph, whose vertices correspond to the given search space and in which each pair of neighbouring search positions is connected by an edge. Therefore, under a local search strategy, the search space can be considered as a graph of search positions. The search is typically guided by an evaluation function that is used to heuristically assess or rank candidate solutions. Furthermore, some SLS methods use more than one evaluation function and some can even dynamically adjust the evaluation function while searching.

Most problems in the real world can be converted into a search problem, and all candidate solutions create a search space. The targets of the search are formulated based on several constraints and an objective function. There are three issues that need to be defined to formulate a problem for local search:
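As a way of tying the component list above together, the following hypothetical skeleton shows a generic SLS procedure parameterised by an initialisation function, a step function over (position, memory) pairs, an evaluation function and a termination criterion; the names are illustrative assumptions rather than the notation used in this thesis.

```python
# Hypothetical skeleton of a generic SLS procedure built from the components above:
# an initialisation function, a step function over (position, memory), an evaluation
# function, and a termination criterion. All names are illustrative assumptions.
def stochastic_local_search(init, step, terminate, evaluate, max_iterations=100_000):
    position, memory = init()                       # initial search position and memory state
    best = position
    for _ in range(max_iterations):
        if terminate(position, memory):
            break
        position, memory = step(position, memory)   # move to a neighbouring candidate
        if evaluate(position) < evaluate(best):     # keep the best candidate seen so far
            best = position
    return best
```

A concrete solver would instantiate these components with, for example, a random-assignment initialiser, a variable-flip step function, and the number of unsatisfied clauses as the evaluation function.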


More information

Applying Local Search to Temporal Reasoning

Applying Local Search to Temporal Reasoning Applying Local Search to Temporal Reasoning J. Thornton, M. Beaumont and A. Sattar School of Information Technology, Griffith University Gold Coast, Southport, Qld, Australia 4215 {j.thornton, m.beaumont,

More information

Integrating Probabilistic Reasoning with Constraint Satisfaction

Integrating Probabilistic Reasoning with Constraint Satisfaction Integrating Probabilistic Reasoning with Constraint Satisfaction IJCAI Tutorial #7 Instructor: Eric I. Hsu July 17, 2011 http://www.cs.toronto.edu/~eihsu/tutorial7 Getting Started Discursive Remarks. Organizational

More information

Kalev Kask and Rina Dechter. Department of Information and Computer Science. University of California, Irvine, CA

Kalev Kask and Rina Dechter. Department of Information and Computer Science. University of California, Irvine, CA GSAT and Local Consistency 3 Kalev Kask and Rina Dechter Department of Information and Computer Science University of California, Irvine, CA 92717-3425 fkkask,dechterg@ics.uci.edu Abstract It has been

More information

Captain Jack: New Variable Selection Heuristics in Local Search for SAT

Captain Jack: New Variable Selection Heuristics in Local Search for SAT Captain Jack: New Variable Selection Heuristics in Local Search for SAT Dave Tompkins, Adrian Balint, Holger Hoos SAT 2011 :: Ann Arbor, Michigan http://www.cs.ubc.ca/research/captain-jack Key Contribution:

More information

Heuristic Optimisation

Heuristic Optimisation Heuristic Optimisation Part 2: Basic concepts Sándor Zoltán Németh http://web.mat.bham.ac.uk/s.z.nemeth s.nemeth@bham.ac.uk University of Birmingham S Z Németh (s.nemeth@bham.ac.uk) Heuristic Optimisation

More information

Random backtracking in backtrack search algorithms for satisfiability

Random backtracking in backtrack search algorithms for satisfiability Discrete Applied Mathematics 155 (2007) 1604 1612 www.elsevier.com/locate/dam Random backtracking in backtrack search algorithms for satisfiability I. Lynce, J. Marques-Silva Technical University of Lisbon,

More information

The Automatic Design of Batch Processing Systems

The Automatic Design of Batch Processing Systems The Automatic Design of Batch Processing Systems by Barry Dwyer, M.A., D.A.E., Grad.Dip. A thesis submitted for the degree of Doctor of Philosophy in the Department of Computer Science University of Adelaide

More information

On the Run-time Behaviour of Stochastic Local Search Algorithms for SAT

On the Run-time Behaviour of Stochastic Local Search Algorithms for SAT From: AAAI-99 Proceedings. Copyright 1999, AAAI (www.aaai.org). All rights reserved. On the Run-time Behaviour of Stochastic Local Search Algorithms for SAT Holger H. Hoos University of British Columbia

More information

Lecture: Iterative Search Methods

Lecture: Iterative Search Methods Lecture: Iterative Search Methods Overview Constructive Search is exponential. State-Space Search exhibits better performance on some problems. Research in understanding heuristic and iterative search

More information

On Computing Minimum Size Prime Implicants

On Computing Minimum Size Prime Implicants On Computing Minimum Size Prime Implicants João P. Marques Silva Cadence European Laboratories / IST-INESC Lisbon, Portugal jpms@inesc.pt Abstract In this paper we describe a new model and algorithm for

More information

An Experimental Evaluation of Conflict Diagnosis and Recursive Learning in Boolean Satisfiability

An Experimental Evaluation of Conflict Diagnosis and Recursive Learning in Boolean Satisfiability An Experimental Evaluation of Conflict Diagnosis and Recursive Learning in Boolean Satisfiability Fadi A. Aloul and Karem A. Sakallah Department of Electrical Engineering and Computer Science University

More information

Set 5: Constraint Satisfaction Problems

Set 5: Constraint Satisfaction Problems Set 5: Constraint Satisfaction Problems ICS 271 Fall 2012 Rina Dechter ICS-271:Notes 5: 1 Outline The constraint network model Variables, domains, constraints, constraint graph, solutions Examples: graph-coloring,

More information

Dynamic Variable Filtering for Hard Random 3-SAT Problems

Dynamic Variable Filtering for Hard Random 3-SAT Problems Dynamic Variable Filtering for Hard Random 3-SAT Problems Anbulagan, John Thornton, and Abdul Sattar School of Information Technology Gold Coast Campus, Griffith University PMB 50 Gold Coast Mail Centre,

More information

Satisfiability-Based Algorithms for 0-1 Integer Programming

Satisfiability-Based Algorithms for 0-1 Integer Programming Satisfiability-Based Algorithms for 0-1 Integer Programming Vasco M. Manquinho, João P. Marques Silva, Arlindo L. Oliveira and Karem A. Sakallah Cadence European Laboratories / INESC Instituto Superior

More information

Deductive Methods, Bounded Model Checking

Deductive Methods, Bounded Model Checking Deductive Methods, Bounded Model Checking http://d3s.mff.cuni.cz Pavel Parízek CHARLES UNIVERSITY IN PRAGUE faculty of mathematics and physics Deductive methods Pavel Parízek Deductive Methods, Bounded

More information

SLS Algorithms. 2.1 Iterative Improvement (revisited)

SLS Algorithms. 2.1 Iterative Improvement (revisited) SLS Algorithms Stochastic local search (SLS) has become a widely accepted approach to solving hard combinatorial optimisation problems. An important characteristic of many recently developed SLS methods

More information

A CSP Search Algorithm with Reduced Branching Factor

A CSP Search Algorithm with Reduced Branching Factor A CSP Search Algorithm with Reduced Branching Factor Igor Razgon and Amnon Meisels Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, 84-105, Israel {irazgon,am}@cs.bgu.ac.il

More information

Pre-requisite Material for Course Heuristics and Approximation Algorithms

Pre-requisite Material for Course Heuristics and Approximation Algorithms Pre-requisite Material for Course Heuristics and Approximation Algorithms This document contains an overview of the basic concepts that are needed in preparation to participate in the course. In addition,

More information

Boolean Satisfiability Solving Part II: DLL-based Solvers. Announcements

Boolean Satisfiability Solving Part II: DLL-based Solvers. Announcements EECS 219C: Computer-Aided Verification Boolean Satisfiability Solving Part II: DLL-based Solvers Sanjit A. Seshia EECS, UC Berkeley With thanks to Lintao Zhang (MSR) Announcements Paper readings will be

More information

a local optimum is encountered in such a way that further improvement steps become possible.

a local optimum is encountered in such a way that further improvement steps become possible. Dynamic Local Search I Key Idea: Modify the evaluation function whenever a local optimum is encountered in such a way that further improvement steps become possible. I Associate penalty weights (penalties)

More information

GRASP. Greedy Randomized Adaptive. Search Procedure

GRASP. Greedy Randomized Adaptive. Search Procedure GRASP Greedy Randomized Adaptive Search Procedure Type of problems Combinatorial optimization problem: Finite ensemble E = {1,2,... n } Subset of feasible solutions F 2 Objective function f : 2 Minimisation

More information

Set 5: Constraint Satisfaction Problems

Set 5: Constraint Satisfaction Problems Set 5: Constraint Satisfaction Problems ICS 271 Fall 2013 Kalev Kask ICS-271:Notes 5: 1 The constraint network model Outline Variables, domains, constraints, constraint graph, solutions Examples: graph-coloring,

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems Frank C. Langbein F.C.Langbein@cs.cf.ac.uk Department of Computer Science Cardiff University 13th February 2001 Constraint Satisfaction Problems (CSPs) A CSP is a high

More information

WalkSAT: Solving Boolean Satisfiability via Stochastic Search

WalkSAT: Solving Boolean Satisfiability via Stochastic Search WalkSAT: Solving Boolean Satisfiability via Stochastic Search Connor Adsit cda8519@rit.edu Kevin Bradley kmb3398@rit.edu December 10, 2014 Christian Heinrich cah2792@rit.edu Contents 1 Overview 1 2 Research

More information

A Stochastic Non-CNF SAT Solver

A Stochastic Non-CNF SAT Solver A Stochastic Non-CNF SAT Solver Rafiq Muhammad and Peter J. Stuckey NICTA Victoria Laboratory, Department of Computer Science and Software Engineering, The University of Melbourne, Victoria 3010, Australia

More information

Computational Intelligence Meets the NetFlix Prize

Computational Intelligence Meets the NetFlix Prize Computational Intelligence Meets the NetFlix Prize Ryan J. Meuth, Paul Robinette, Donald C. Wunsch II Abstract The NetFlix Prize is a research contest that will award $1 Million to the first group to improve

More information

Stochastic Local Search Methods for Dynamic SAT an Initial Investigation

Stochastic Local Search Methods for Dynamic SAT an Initial Investigation Stochastic Local Search Methods for Dynamic SAT an Initial Investigation Holger H. Hoos and Kevin O Neill Abstract. We introduce the dynamic SAT problem, a generalisation of the satisfiability problem

More information

An Analysis and Comparison of Satisfiability Solving Techniques

An Analysis and Comparison of Satisfiability Solving Techniques An Analysis and Comparison of Satisfiability Solving Techniques Ankur Jain, Harsha V. Madhyastha, Craig M. Prince Department of Computer Science and Engineering University of Washington Seattle, WA 98195

More information

Recap Hill Climbing Randomized Algorithms SLS for CSPs. Local Search. CPSC 322 Lecture 12. January 30, 2006 Textbook 3.8

Recap Hill Climbing Randomized Algorithms SLS for CSPs. Local Search. CPSC 322 Lecture 12. January 30, 2006 Textbook 3.8 Local Search CPSC 322 Lecture 12 January 30, 2006 Textbook 3.8 Local Search CPSC 322 Lecture 12, Slide 1 Lecture Overview Recap Hill Climbing Randomized Algorithms SLS for CSPs Local Search CPSC 322 Lecture

More information

of m clauses, each containing the disjunction of boolean variables from a nite set V = fv 1 ; : : : ; vng of size n [8]. Each variable occurrence with

of m clauses, each containing the disjunction of boolean variables from a nite set V = fv 1 ; : : : ; vng of size n [8]. Each variable occurrence with A Hybridised 3-SAT Algorithm Andrew Slater Automated Reasoning Project, Computer Sciences Laboratory, RSISE, Australian National University, 0200, Canberra Andrew.Slater@anu.edu.au April 9, 1999 1 Introduction

More information

Lower Bounds and Upper Bounds for MaxSAT

Lower Bounds and Upper Bounds for MaxSAT Lower Bounds and Upper Bounds for MaxSAT Federico Heras, Antonio Morgado, and Joao Marques-Silva CASL, University College Dublin, Ireland Abstract. This paper presents several ways to compute lower and

More information

Using Cost Distributions to Guide Weight Decay in Local Search for SAT

Using Cost Distributions to Guide Weight Decay in Local Search for SAT Using Cost Distributions to Guide Weight Decay in Local Search for SAT John Thornton and Duc Nghia Pham SAFE Program, Queensland Research Lab, NICTA and Institute for Integrated and Intelligent Systems,

More information

ADAPTIVE VIDEO STREAMING FOR BANDWIDTH VARIATION WITH OPTIMUM QUALITY

ADAPTIVE VIDEO STREAMING FOR BANDWIDTH VARIATION WITH OPTIMUM QUALITY ADAPTIVE VIDEO STREAMING FOR BANDWIDTH VARIATION WITH OPTIMUM QUALITY Joseph Michael Wijayantha Medagama (08/8015) Thesis Submitted in Partial Fulfillment of the Requirements for the Degree Master of Science

More information

Towards More Effective Unsatisfiability-Based Maximum Satisfiability Algorithms

Towards More Effective Unsatisfiability-Based Maximum Satisfiability Algorithms Towards More Effective Unsatisfiability-Based Maximum Satisfiability Algorithms Joao Marques-Silva and Vasco Manquinho School of Electronics and Computer Science, University of Southampton, UK IST/INESC-ID,

More information

Combinational Equivalence Checking

Combinational Equivalence Checking Combinational Equivalence Checking Virendra Singh Associate Professor Computer Architecture and Dependable Systems Lab. Dept. of Electrical Engineering Indian Institute of Technology Bombay viren@ee.iitb.ac.in

More information

Solving 3-SAT. Radboud University Nijmegen. Bachelor Thesis. Supervisors: Henk Barendregt Alexandra Silva. Author: Peter Maandag s

Solving 3-SAT. Radboud University Nijmegen. Bachelor Thesis. Supervisors: Henk Barendregt Alexandra Silva. Author: Peter Maandag s Solving 3-SAT Radboud University Nijmegen Bachelor Thesis Author: Peter Maandag s3047121 Supervisors: Henk Barendregt Alexandra Silva July 2, 2012 Contents 1 Introduction 2 1.1 Problem context............................

More information

CSP- and SAT-based Inference Techniques Applied to Gnomine

CSP- and SAT-based Inference Techniques Applied to Gnomine CSP- and SAT-based Inference Techniques Applied to Gnomine Bachelor Thesis Faculty of Science, University of Basel Department of Computer Science Artificial Intelligence ai.cs.unibas.ch Examiner: Prof.

More information

Satisfiability (SAT) Applications. Extensions/Related Problems. An Aside: Example Proof by Machine. Annual Competitions 12/3/2008

Satisfiability (SAT) Applications. Extensions/Related Problems. An Aside: Example Proof by Machine. Annual Competitions 12/3/2008 15 53:Algorithms in the Real World Satisfiability Solvers (Lectures 1 & 2) 1 Satisfiability (SAT) The original NP Complete Problem. Input: Variables V = {x 1, x 2,, x n }, Boolean Formula Φ (typically

More information

Solving the Boolean Satisfiability Problem Using Multilevel Techniques

Solving the Boolean Satisfiability Problem Using Multilevel Techniques Solving the Boolean Satisfiability Problem Using Multilevel Techniques Sirar Salih Yujie Song Supervisor Associate Professor Noureddine Bouhmala This Master s Thesis is carried out as a part of the education

More information

CS 188: Artificial Intelligence Spring Today

CS 188: Artificial Intelligence Spring Today CS 188: Artificial Intelligence Spring 2006 Lecture 7: CSPs II 2/7/2006 Dan Klein UC Berkeley Many slides from either Stuart Russell or Andrew Moore Today More CSPs Applications Tree Algorithms Cutset

More information

Machine Learning for Software Engineering

Machine Learning for Software Engineering Machine Learning for Software Engineering Single-State Meta-Heuristics Prof. Dr.-Ing. Norbert Siegmund Intelligent Software Systems 1 2 Recap: Goal is to Find the Optimum Challenges of general optimization

More information

Modelling and Solving Temporal Reasoning as Propositional Satisfiability

Modelling and Solving Temporal Reasoning as Propositional Satisfiability Modelling and Solving Temporal Reasoning as Propositional Satisfiability Duc Nghia Pham a,b,, John Thornton a,b and Abdul Sattar a,b a SAFE Program NICTA Ltd., Queensland, Australia b Institute for Integrated

More information

Using Learning Automata to Enhance Local-Search Based SAT Solvers

Using Learning Automata to Enhance Local-Search Based SAT Solvers Using Learning Automata to Enhance Local-Search Based SAT Solvers with Learning Capability 63 5 Using Learning Automata to Enhance Local-Search Based SAT Solvers with Learning Capability Ole-Christoffer

More information

Normal Forms for Boolean Expressions

Normal Forms for Boolean Expressions Normal Forms for Boolean Expressions A NORMAL FORM defines a class expressions s.t. a. Satisfy certain structural properties b. Are usually universal: able to express every boolean function 1. Disjunctive

More information

A Hyper-heuristic based on Random Gradient, Greedy and Dominance

A Hyper-heuristic based on Random Gradient, Greedy and Dominance A Hyper-heuristic based on Random Gradient, Greedy and Dominance Ender Özcan and Ahmed Kheiri University of Nottingham, School of Computer Science Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB, UK

More information

Week 8: Constraint Satisfaction Problems

Week 8: Constraint Satisfaction Problems COMP3411/ 9414/ 9814: Artificial Intelligence Week 8: Constraint Satisfaction Problems [Russell & Norvig: 6.1,6.2,6.3,6.4,4.1] COMP3411/9414/9814 18s1 Constraint Satisfaction Problems 1 Outline Constraint

More information

Solving the Maximum Satisfiability Problem Using an Evolutionary Local Search Algorithm

Solving the Maximum Satisfiability Problem Using an Evolutionary Local Search Algorithm 154 The International Arab Journal of Information Technology, Vol. 2, No. 2, April 2005 Solving the Maximum Satisfiability Problem Using an Evolutionary Local Search Algorithm Mohamed El Bachir Menai 1

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Lesson 4 Local Search Local improvement, no paths Look around at states in the local neighborhood and choose the one with the best value Pros: Quick (usually linear) Sometimes enough

More information

Overview of Tabu Search

Overview of Tabu Search Overview of Tabu Search The word tabu (or taboo) comes from Tongan, a language of Polynesia, where it was used by the aborigines of Tonga island to indicate things that cannot be touched because they are

More information

TABU search and Iterated Local Search classical OR methods

TABU search and Iterated Local Search classical OR methods TABU search and Iterated Local Search classical OR methods tks@imm.dtu.dk Informatics and Mathematical Modeling Technical University of Denmark 1 Outline TSP optimization problem Tabu Search (TS) (most

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2013 Soleymani Course material: Artificial Intelligence: A Modern Approach, 3 rd Edition,

More information

Journal of Global Optimization, 10, 1{40 (1997) A Discrete Lagrangian-Based Global-Search. Method for Solving Satisability Problems *

Journal of Global Optimization, 10, 1{40 (1997) A Discrete Lagrangian-Based Global-Search. Method for Solving Satisability Problems * Journal of Global Optimization, 10, 1{40 (1997) c 1997 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. A Discrete Lagrangian-Based Global-Search Method for Solving Satisability Problems

More information

New Worst-Case Upper Bound for #2-SAT and #3-SAT with the Number of Clauses as the Parameter

New Worst-Case Upper Bound for #2-SAT and #3-SAT with the Number of Clauses as the Parameter Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10) New Worst-Case Upper Bound for #2-SAT and #3-SAT with the Number of Clauses as the Parameter Junping Zhou 1,2, Minghao

More information

Outline. TABU search and Iterated Local Search classical OR methods. Traveling Salesman Problem (TSP) 2-opt

Outline. TABU search and Iterated Local Search classical OR methods. Traveling Salesman Problem (TSP) 2-opt TABU search and Iterated Local Search classical OR methods Outline TSP optimization problem Tabu Search (TS) (most important) Iterated Local Search (ILS) tks@imm.dtu.dk Informatics and Mathematical Modeling

More information