CS227: Assignment 1 Report


Lei Huang and Lawson Wong

April 20

1 Introduction

Propositional satisfiability (SAT) problems have been of great historical and practical significance in AI. Despite being fundamental problems, SAT problems are NP-hard and are difficult to solve using complete search, such as by the Davis-Putnam procedure [1]. Selman, Levesque, and Mitchell [2] proposed a greedy local search procedure, GSAT, that was able to quickly solve many hard problems that would take very long to solve using complete methods. Many variants of GSAT have since appeared. We examine the relative performance of GSAT and three of these variants: HSAT [3], GSAT with random walk [4], and WalkSAT [4].

2 Algorithms

The simple GSAT procedure is reproduced in Algorithm 1.

Algorithm 1 GSAT [2]
1:  while run-time < MAX-TIME (or num-tries < MAX-TRIES) do
2:    T := random initialization of variables in Σ, the set of clauses in CNF
3:    for i := 1 to MAX-FLIPS do
4:      if T satisfies Σ then
5:        return T
6:      else
7:        for each variable v do
8:          score[v] := change in # of satisfied clauses in Σ with v's value flipped in T
9:        end for
10:       poss-flips := {v | score[v] = max_w score[w]}
11:       v := choose a variable at random from poss-flips
12:       T := T with v's value flipped
13:     end if
14:   end for
15: end while
16: return "No satisfying assignment found"
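As a concrete illustration, Algorithm 1 translates almost line for line into Python. This is a minimal, unoptimized sketch under our own assumptions: the function name, a DIMACS-style clause encoding (literal v means variable v, -v its negation), and naive O(KLN) rescoring on every flip rather than the cached update discussed later.

```python
import random

def gsat(clauses, n_vars, max_tries, max_flips):
    """Greedy local search for SAT (a sketch of Algorithm 1).

    `clauses` is a list of clauses, each a list of non-zero ints:
    literal v means variable v appears positively, -v negated
    (DIMACS-style, variables numbered 1..n_vars).
    """
    def num_satisfied(T):
        # A clause is satisfied if any of its literals is true under T.
        return sum(any(T[abs(l)] == (l > 0) for l in c) for c in clauses)

    for _ in range(max_tries):
        # T: random truth assignment; index 0 is unused padding.
        T = [False] + [random.random() < 0.5 for _ in range(n_vars)]
        for _ in range(max_flips):
            before = num_satisfied(T)
            if before == len(clauses):
                return T
            # score[v] = change in # satisfied clauses if v is flipped
            scores = {}
            for v in range(1, n_vars + 1):
                T[v] = not T[v]
                scores[v] = num_satisfied(T) - before
                T[v] = not T[v]
            best = max(scores.values())
            poss_flips = [v for v in scores if scores[v] == best]
            v = random.choice(poss_flips)  # break ties at random
            T[v] = not T[v]
    return None  # no satisfying assignment found
```

On a tiny formula such as (x1 ∨ x2) ∧ (¬x1 ∨ x2) ∧ (x1 ∨ ¬x2), the hill-climb reaches the unique model (x1 = x2 = true) within a few flips from any starting assignment.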

GSAT randomly chooses a truth assignment for the variables, then flips individual variables' truth values until a satisfying assignment is found or until timeout (i.e., no solution found). The choice of which variable to flip is made by a hill-climb step: each variable is given a score, and from the set of variables with maximum score, one is selected at random. The score of a variable is defined as the change in the number of satisfied clauses if the variable's truth value were flipped. Hence the more positive a variable's score, the greater the number of clauses satisfied after flipping its truth value. This hill-climb heuristic is therefore greedy, as it tries to maximize the next step's total number of satisfied clauses.

To study the hill-climbing procedure more closely, Gent and Walsh [3] proposed HSAT, which differs from GSAT only in the way that the next variable to flip is chosen from poss-flips. Instead of choosing this variable randomly, the variable that was flipped longest ago (if ever) is chosen. In detail, we keep an extra array when, indexed by variables, where when[v] is initialized to 0 for all variables v, and on step (flip) i, if v is chosen, then when[v] := i. Hence this array stores the last iteration on which each variable was flipped, or 0 if it has never been flipped. The variable in poss-flips with the minimum value in when is chosen (ties broken arbitrarily). HSAT therefore adds memory to the procedure.

One major problem of applying greedy local search methods such as GSAT and HSAT to a non-convex problem space is that searches easily become stuck at local optima, which for SAT means an assignment satisfying all clauses is never found. To address this problem, Selman, Kautz, and Cohen [4] proposed the use of random walk to add randomness to the search and allow escape from local optima.
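The when-array bookkeeping above can be sketched as a small helper. The function name is our own, and breaking ties in when at random is just one way to realize the "ties broken arbitrarily" rule:

```python
import random

def hsat_pick(poss_flips, when, step):
    """HSAT tie-breaking (a sketch): from the max-score set poss_flips,
    pick the variable flipped longest ago; when[v] == 0 means v has
    never been flipped. Ties in `when` are broken at random."""
    oldest = min(when[v] for v in poss_flips)
    candidates = [v for v in poss_flips if when[v] == oldest]
    v = random.choice(candidates)
    when[v] = step  # record that v is flipped on this step
    return v
```

For example, with when = {1: 5, 2: 0, 3: 2}, variable 2 is the unique pick (never flipped), after which when[2] records the current step.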
GSAT with random walk does this by simply choosing, with probability p = 0.5, not to use hill-climbing; instead, it picks an unsatisfied clause and sets poss-flips to contain all variables occurring in this clause. Otherwise (with probability 1 − p), standard GSAT hill-climbing is used to find poss-flips.

The authors also proposed a more extreme version, WalkSAT, which differs from GSAT more than the previous variants do. In WalkSAT, an unsatisfied clause is always picked; in this way, random walk is always used. With probability p = 0.5, greediness is added by setting poss-flips to be the set of variables in the clause with the minimum break count, where the break count is defined as the change in the number of unsatisfied clauses if the variable's truth value were flipped. If a clause changes from satisfied to unsatisfied, the # of unsatisfied clauses increases by 1 and the # of satisfied clauses decreases by 1, and vice versa for unsatisfied to satisfied; hence, (change in # of unsatisfied clauses) = −(change in # of satisfied clauses). Since we want to minimize the change in the number of unsatisfied clauses, we can equivalently maximize the change in the number of satisfied clauses; the greedy procedure of minimizing the break count is thus equivalent to the original procedure of maximizing the score. Hence the score computation shown in Algorithm 1 is still used in WalkSAT. With probability p, WalkSAT therefore sets poss-flips to contain the variables in the random unsatisfied clause that have maximum score; otherwise, no greediness is used and all of the clause's variables are added to poss-flips.

2.1 Optimizations

In each iteration of the inner loop, there are three steps that take more than O(1) time: checking whether T is a satisfying assignment, computing scores for each variable, and determining poss-flips. Checking T is an O(KL) task, where K = maximum number of literals per clause and L = number of clauses. To compute the score of a variable, each clause

must be checked to see whether it is satisfied, which can take up to O(K) time (e.g., if all literals are false). Since this must be done for each clause and each variable, score computation takes O(KLN) time, where N = number of variables. Finally, determining the poss-flips set takes O(N) time, as each variable's score must be compared.

Clearly, the most expensive step is score computation. Initially, when scores are unknown, each literal of each clause must be considered for each variable flip, hence an O(KLN) computation is necessary. However, between iterations much of this is wasted work, as consecutive iterations differ by only one variable flip. This is the idea behind score caching: given the current scores and the number of true literals in each clause before and after a variable flip, we can use specific rules to determine the new scores after the flip. The number of true literals in a clause indicates whether the clause is satisfied (> 0) or not (= 0). If the number of true literals in clause c, upon flipping variable v's truth value, changes from:

0 → 1: All literals were false, so c gave each clause variable a score of +1. Now, c is satisfied because of v; if v is flipped again, c becomes unsatisfied, so v's score from c is now −1. Flipping any other literal still leaves c satisfied, so all other clause variables now have score 0. Hence the change in score is −2 for v and −1 for the other clause variables.

1 → 0: Flipping v caused c to become unsatisfied, so v had a score of −1. Since c was satisfied, flipping the other clause variables had no effect, so their score was 0. Now, flipping any clause variable makes c true, so each has score +1. Hence the change in score is +2 for v and +1 for the other clause variables.

1 → 2: Flipping v increased the number of true literals, so v's literal must have been false. Then there is another variable w whose literal in c was true. Since this was the only true literal, its score was −1, as flipping it would have made c unsatisfied. Since c was satisfied, flipping clause variables other than w had no effect, so their score was 0. Now, with 2 true literals, flipping either of the two still leaves c satisfied by the other true literal, so their scores are 0; flipping the other clause variables has no effect, so their score is also 0. Hence the change in score is +1 for w, the original sole true literal.

2 → 1: Flipping v decreased the number of true literals, so v's literal must have been true. Then there is another variable w whose literal was true. Since there were 2 true literals, falsifying either does not take away c's support, so c remained satisfied and both scores were 0. Since c was true, the scores of all other clause variables were also 0. Now, w becomes the sole support of c, so its score is now −1. Since c is still true, all other clause variables have score 0. Hence the change in score is −1 for w, the new sole true literal.

Otherwise: Either the number of true literals did not change, or it went from ≥ 2 to ≥ 2 true literals. In the former case, v does not occur in c, so c's contribution to the scores remains the same. In the latter case, we saw above that when a clause has at least 2 true literals it remains satisfied on any flip, as it does not lose its support, so the clause's contribution to its clause variables' scores is 0 both before and after; hence there is no change in score.

Given the original scores (for each variable) and numbers of true literals (for each clause), we can use the above rules to more efficiently compute the new scores and numbers of true

literals. First, use O(KL) time to compute the new number of true literals in each clause. Then, for each clause, determine the correct rule from the old and new numbers of true literals, and apply the rule to each of its literals to get each clause variable's new score from its old score. This also uses O(KL) time, so overall, score caching and updating uses only O(KL) time. Note that the full O(KLN) score computation must still be performed on the initial truth assignment, but on inner-loop iterations only O(KL) updating is necessary (and it should be performed at the end of the loop to avoid initial redundancy).

In the random walk variants of GSAT, the current set of unsatisfied clauses is sampled from regularly. Now that score caching computes the new number of true literals per clause in each iteration, this set can be easily maintained: if a clause's number of true literals changes from 1 → 0, add the clause to the set, and if from 0 → 1, remove it. The initial unsatisfied set can be deduced while computing the initial numbers of true literals (a clause is in the set if and only if it has 0 true literals). There is an additional bonus from doing this: instead of using O(KL) time to check whether T is a satisfying assignment, which is the first non-O(1) step noted at the beginning of this section, this check can now be done in O(1) time by simply testing whether the unsatisfied set is empty. The final non-O(1) step, determining poss-flips, is rather fast already (O(N), which is in practice significantly smaller than the other two steps) and was not optimized.

3 Experimental Results

3.1 Setting Max-Flips

Finding the correct value of Max-flips is important, as it controls how quickly to give up on a search path. If Max-flips is too low, it is likely that the search has yet to converge to a solution, resulting in thrashing behavior where many repeated restarts occur.
In contrast, if Max-flips is too high, more time will be wasted on unfruitful paths, such as those starting from poor initial assignments or those stuck in local optima. Gent and Walsh [3] also analyzed this problem and confirmed the above observation: there indeed exists an optimal value of Max-flips that minimizes the average total number of flips (a measure of the amount of work done). They also empirically found that the optimal Max-flips varies with N in an approximately O(N²) fashion. However, they only found the optimal Max-flips values for N ≤ 100; as we evaluate much larger problems, we need a better way to determine a good Max-flips value for a given N. One note from [3] was that the Max-flips optimum is not very sharp, so we crudely tested only integral multiples of N as candidate Max-flips values. It was also suggested that Max-flips does not depend significantly on the algorithm, so we used only HSAT and WalkSAT to determine the optimal Max-flips. Hard random 3-SAT problems (see Section 3.2 below) with N = 50, 100, 150, 200, 250, 300 were chosen and evaluated with Max-flips = c·N for 1 ≤ c ≤ 10. The resulting trend was remarkably simple: HSAT was optimal for N = 50 when c = 1, for N = 100 when c = 2, etc., in a generally linear manner. WalkSAT showed a similar trend, except with double the c values; i.e., N = 100 was optimal when c = 4. For a given N, we therefore have c = N/50 for GSAT and HSAT, and c = N/25 for GSAT with random walk and WalkSAT, and set Max-flips = c·N. Hence for GSAT and HSAT, Max-flips = N²/50, and double that for the random walk variants. Max-flips therefore does appear to grow with N².
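The fitted rule above can be written as a one-line helper; this is a sketch, and the function name is our own:

```python
def suggested_max_flips(n_vars, random_walk=False):
    """Max-flips heuristic fitted above: N^2/50 for GSAT and HSAT,
    double that for the random walk variants (GSAT-W, WalkSAT)."""
    base = n_vars * n_vars // 50
    return 2 * base if random_walk else base
```

For example, N = 100 gives Max-flips = 200 for HSAT and 400 for WalkSAT, matching the measured optima of c = 2 and c = 4.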

3.2 Random SAT

Table 1: Random 3-SAT Results, N ≤ 300 (100 trials per entry)
Vars Clauses Algorithm Max-Flips %-Solved Time Flips Tries
GSAT % 0.006s
HSAT % 0.003s
GSAT-W % 0.008s
WalkSAT % 0.006s
GSAT % 0.08s
HSAT % 0.02s
GSAT-W % 0.06s
WalkSAT % 0.04s
GSAT % 1.97s
HSAT % 0.25s
GSAT-W % 0.84s
WalkSAT % 0.76s
GSAT % 15.4s
HSAT % 0.6s
GSAT-W % 1.6s
WalkSAT % 1.6s
GSAT % 82.0s
HSAT % 3.3s
GSAT-W % 15.6s
WalkSAT % 8.0s
GSAT % 81.6s
HSAT % 3.8s
GSAT-W % 17.4s
WalkSAT % 7.9s

We extensively evaluated the four algorithms on hard random 3-SAT formulas of different variable sizes. It has been empirically shown that the hardest problems occur when L ≈ 4.26N, hence satisfiable SAT formulas with N = 50, 100, 150, 200, 250, 300, 400, 500, 600, 700, 800 that adhere to this clause-variable ratio were used. To reduce the idiosyncrasies of any specific random formula, 10 instances were generated for each N. For N ≤ 300, 10 trials were conducted per instance, giving a total of 100 trials; for the larger formulas (N ≥ 400), only 2 trials were performed per instance due to time constraints, giving a total of 20 trials. The results are shown in Tables 1 (above) and 2 (below), respectively. All experiments were performed on a Core 2 Quad 2.40GHz computer. In the tables, time is the median running time on one instance (up to 600s, after which the algorithm was forced to terminate), flips is the median number of flips on the successful try, and tries is the median number of times the algorithm is restarted before a satisfying assignment is found. For example, a tries value of 4.8 should be interpreted as the algorithm iterating until Max-flips 4 times (hence completing 4 tries unsuccessfully) and, on the 5th try, requiring 0.8·Max-flips flips before a satisfying assignment was found. Medians were used to obtain more robust measures of central tendency.

For N ≤ 300, the algorithms display a stable trend. With the exception of GSAT, whose performance began to deteriorate significantly for N ≥ 250, all problems were successfully solved. A very stable ranking of the algorithms is also seen in the running times, with HSAT always fastest, followed by WalkSAT, GSAT with random walk, and GSAT. Especially for N ≥ 200, GSAT began to take significantly more time. This can be explained by the much higher number of tries GSAT takes compared to the other three algorithms, which suggests either that Max-flips was chosen poorly or that GSAT often becomes stuck on bad search paths. The fact that the number of flips on successful tries was close to, but with a stable margin from, the Max-flips value suggests that Max-flips was generally well chosen. Also, the fact that on successful tries GSAT takes only slightly more flips than HSAT suggests that it is not the case that GSAT chooses particularly bad flips and requires longer paths to reach the satisfying assignment. Rather, it is likely that GSAT often becomes stuck in local optima and is hence unproductive until the next restart. The two random walk variants are able to prevent this by randomly selecting possibly non-locally-optimal variables to flip, which provides opportunities to escape from local optima. The fact that WalkSAT adopts a stronger random walk strategy than GSAT with random walk also explains its lower number of tries and running times. HSAT escapes the local optima problem in a different way, by flipping the variable in poss-flips that was flipped longest ago.
Local optima often cause variables to flip back and forth around the optimum, and the HSAT strategy prevents this from happening. Moreover, as variables that have never been flipped are by definition the ones flipped longest ago, the HSAT strategy also allows exploration of new areas of the search space via these new variables, providing another way to escape local optima. Perhaps because HSAT's escape method is always guided by the greedy hill-climbing heuristic (maximum score), it does more fruitful searching than random walk and hence requires the fewest flips. Both random walk methods require at least double the number of flips of GSAT and HSAT, suggesting that the price of their ability to escape is that more random, unnecessary moves are taken, resulting in slower convergence to satisfying assignments.

As for the larger formulas (N ≥ 400, shown in Table 2), these trends are not as clear. It is still the case that GSAT deteriorates very quickly; it was able to solve only a minimal number of instances for N = 400 and failed for larger N (it is hence omitted from the table in those cases). We also see that the performance of the other three algorithms begins to deteriorate, although they maintain their ranking in terms of % of instances solved. One significant difference is that HSAT no longer appears to dominate the other algorithms stably in running time; rather, all three algorithms do well in some cases and worse in others, giving close running times (although we must note that relatively few trials were evaluated, so especially for larger N, when fewer instances are solved, the error in the median can potentially be large). While the number of flips maintains the same trends, the number of tries is in general higher than for lower N, suggesting that increasing N makes it much more difficult for the algorithms to converge to solutions. All the algorithms performed rather poorly when trying to find satisfying assignments before the Max-time of 10 minutes was exceeded for N =

Table 2: Random 3-SAT Results, N ≥ 400 (20 trials per entry)
Vars Clauses Algorithm Max-Flips %-Solved Time Flips Tries
GSAT % 276s
HSAT % 41s
GSAT-W % 41s
WalkSAT % 20s
HSAT % 72s
GSAT-W % 109s
WalkSAT % 90s
HSAT % 240s
GSAT-W % 130s
WalkSAT % 68s
HSAT % 103s
GSAT-W % 100s
WalkSAT % 238s
HSAT % 102s
GSAT-W % 439s
WalkSAT % 107s

3.3 Other Benchmarks

Hirsch Formulas

This set of benchmarks is known to be difficult to solve despite its low number of variables (N ≤ 300). The reason for this difficulty is that the formulas are constructed backwards from the truth assignments in order to have certain properties that make them difficult to solve; in particular, variables that appear together in one clause are not allowed to appear together in other clauses, which reduces the overlap in the amount of work that can be done by single variable flips and makes the greedy heuristics less effective, as flips have less effect. The results of evaluating the algorithms on 10 instances with 2 trials per instance are shown in Table 6 (at end). Again, as few trials were performed due to time constraints, individual figures are less meaningful; however, the general trend can still be seen. In terms of % solved, HSAT is still clearly dominant, solving at least as many trials per instance as the other algorithms, and for two instances it is the only algorithm that can find a satisfying truth assignment. HSAT also has comparable and often better running times. GSAT again has the worst performance on these two metrics, succeeding on only 5 of 20 trials. The random walk algorithms work occasionally, often with high running times and, in many instances, with a number of flips close to Max-flips, suggesting either that Max-flips is not well chosen (it should be higher) or that convergence is not fast enough due to the difficulty of the search space. The fact that the number of tries is very high for all instances suggests that the search space is indeed very difficult, in that search paths often fail even with the ability to escape local optima, implying that solutions are very well hidden.

Spin-Glass Models

These SAT problems are exceptionally difficult, and no instance was solved by any of the algorithms. Although these are also 3-SAT problems, their difficulty comes from the intricate structure of their clauses. Combined, the clauses essentially exhibit a cyclic structure over the variables and connect the variables in order. The challenge this poses is that once the assignment of a certain variable along the chain is fixed, the assignments of all other variables must also fit exactly into place, which is clearly difficult given the randomness and greediness of the algorithms we evaluate. In these problems it is therefore too difficult for the algorithms to find a converging path to a satisfying assignment.

Quasigroup Completion

Table 3: Quasigroup Completion Benchmarks on GSAT-W (2 trials per entry)
Name Vars Clauses Max-Flips %-Solved Time Flips Tries
qcp % 149s
qcp % 402s
qcp % 0.95s
qcp % 26s
qcp % 326s
qwh % 386s
qwh % 481s
qwh % 157s
qwh % 289s

These problems have properties that make it difficult for greedy heuristics to work. Of the 20 instances evaluated (with 2 trials per instance), only GSAT with random walk was able to find a satisfying assignment in any instance, and it did so in only 9 of the 20. The difficulty of these problems lies in their structure: approximately 90% of the clauses are purely negative 2-variable clauses (i.e., both literals are negated), and the rest of the clauses are purely positive. Since there are so many more negative clauses, flipping a variable's truth value to false has a high score, because it can cause many negative clauses to be satisfied even if the single positive clause containing the variable becomes unsatisfied. This would likely cause many negative truth assignments; however, as there are purely positive clauses, some variables must have a positive assignment. This is difficult to obtain during a greedy hill-climbing procedure, since flipping a truth value from false to true causes many 2-variable negative clauses to rely on the other variable as their support, which prevents future flips from false to true and hence blocks the correct combinations of true variables from occurring. The local optima in this case are therefore very strong; in other words, the true satisfying assignment is very well hidden. An interesting supporting case is the qcp problem, which is considerably smaller than the other problems. All algorithms can successfully solve this instance, but both

GSAT and HSAT take much longer (≈300s, compared to 0.95s for GSAT with random walk), suggesting that it is very difficult to rely on hill-climbing heuristics alone to find solutions. WalkSAT performed only slightly slower on this instance, so it is unclear why only GSAT with random walk, and not WalkSAT, is successful on the other instances. It is likely that WalkSAT is capable of converging to the solution but is not quick enough given the size of the problems (with N ≈ 4000 in many cases). This is supported by the fact that for GSAT with random walk, even though all solved cases occurred on the first try and the number of flips used was not near Max-flips, the running time in many cases was close to Max-time. Hence, because WalkSAT is slightly slower, it may have taken longer than Max-time to converge.

Others

Table 4: Other Problems (5 trials per entry)
Name Vars Clauses Algorithm Max-Flips %-Solved Time Flips Tries
blocksworld.a GSAT % 229s
HSAT % 262s
GSAT-W % 17s
WalkSAT % 4s
logistics.a WalkSAT % 257s
logistics.b WalkSAT % 184s
logistics.c WalkSAT % 261s
logistics.d WalkSAT % 164s
graphcoloring.a GSAT % 5.61s
HSAT % 5.97s
GSAT-W % 0.87s
WalkSAT % 0.80s
graphcoloring.b GSAT % 2.56s
HSAT % 1.27s
GSAT-W % 1.62s
WalkSAT % 0.82s
graphcoloring.c GSAT % 13.5s
HSAT % 6.0s
GSAT-W % 2.8s
WalkSAT % 20.1s
graphcoloring.d GSAT % 2.02s
HSAT % 0.29s
GSAT-W % 5.68s
WalkSAT % 4.15s

The algorithms were evaluated on several other problems, including blocks world planning, logistics, and graph coloring. All of these problems had structure similar to the quasigroup completion problems in the preceding section. A total of 10 instances with 5 trials per instance was evaluated, with results shown in Table 4. The problem blocksworld.b and the non-WalkSAT runs of logistics are omitted, as they were unsolvable. The solvable instances of blocksworld and logistics behaved similarly to the quasigroup completion problems: GSAT and HSAT would take very long, with a very high number of tries indicating that it was very difficult to converge to a satisfying assignment. In contrast, the random walk methods, and especially WalkSAT, performed much better; WalkSAT was the only algorithm that could solve the logistics problems. The low number of tries of GSAT with random walk and WalkSAT on blocksworld.a suggests that random walk strategies, despite having observably slower convergence so far, can often converge successfully to the solution, possibly due to their superior ability to escape local optima. However, it is interesting that in the logistics problems, apart from logistics.d, WalkSAT had a relatively high number of tries, indicating that perhaps Max-flips needs to be tuned more carefully to the specific problem. It is also interesting that in quasigroup completion only GSAT with random walk was successful, whereas here only WalkSAT was successful; on which problems either algorithm is superior remains unclear. One surprising result is that although the graph coloring problems have structure similar to the quasigroup completion problems, all algorithms were able to find satisfying truth assignments on all trials, and with comparatively low running times and numbers of tries. This is most likely due to a slight difference in problem structure: whereas in quasigroup completion the purely positive clauses were relatively long and made up approximately 10% of the clauses, in graph coloring the purely positive clauses are short (5 literals) and make up only 3% of the clauses.
The instances are therefore less problematic in terms of local optima, as can be seen from the ability of GSAT and HSAT to solve the problems at speeds comparable to the random walk algorithms. Still, local optima problems exist and are significant, as can be seen from the large difference in the number of flips between GSAT and HSAT; HSAT's better method of escaping local optima allowed it to significantly exceed GSAT's performance, suggesting that local optima problems are still prevalent.

3.4 Score Caching

To evaluate the effectiveness of the score caching optimization, two versions of HSAT, one with score caching and one without, were compared. Both versions were tested on the random 3-SAT formulas from Section 3.2 for N = 50, 100, 150, 200, 250, 300, again with 10 instances per N and 2 trials per instance. The results are shown in Table 5. Note that since the method of solving SAT has not changed (only a speedup optimization was introduced), the two versions should perform similarly in terms of flips and tries; this was indeed the case, and these fields are therefore omitted. Clearly, score caching improves the performance of HSAT significantly, with a speedup of roughly 50 times in running time. Also, the % of instances that can be solved quickly decreases as N (and the number of tries) increases, as Max-time is often exceeded. For N ≥ 200, this deterioration is significant, and by N = 400, HSAT without score caching cannot find a satisfying assignment for any instance. Score caching is therefore a very effective optimization.
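To make the rules from Section 2.1 concrete, here is a minimal sketch of the incremental update applied after variable v has been flipped. The names (var_clauses, true_count, unsat) are our own bookkeeping, not from the report: var_clauses maps each variable to the indices of clauses containing it, true_count tracks true literals per clause, and unsat is the set of unsatisfied clause indices.

```python
def update_scores(clauses, var_clauses, T, v, score, true_count, unsat):
    """Incremental score update after flipping v (a sketch of the rules
    in Section 2.1). `score`, `true_count`, and `unsat` are updated in
    place; assumes T[v] has already been flipped."""
    for c in var_clauses[v]:
        old = true_count[c]
        new = sum(T[abs(l)] == (l > 0) for l in clauses[c])
        true_count[c] = new
        others = {abs(l) for l in clauses[c]} - {v}
        if old == 0 and new == 1:          # 0 -> 1: c newly satisfied by v
            score[v] -= 2
            for w in others:
                score[w] -= 1
            unsat.discard(c)
        elif old == 1 and new == 0:        # 1 -> 0: c newly unsatisfied
            score[v] += 2
            for w in others:
                score[w] += 1
            unsat.add(c)
        elif old == 1 and new == 2:        # 1 -> 2: old sole support w freed
            w = next(abs(l) for l in clauses[c]
                     if abs(l) != v and T[abs(l)] == (l > 0))
            score[w] += 1
        elif old == 2 and new == 1:        # 2 -> 1: w becomes sole support
            w = next(abs(l) for l in clauses[c]
                     if abs(l) != v and T[abs(l)] == (l > 0))
            score[w] -= 1
        # otherwise (>= 2 true literals before and after): no score change
```

Only the O(K) clauses containing v are touched per flip, and maintaining unsat as a side effect makes the satisfiability check O(1), as described in Section 2.1.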

Table 5: HSAT With and Without Score Caching (20 trials per entry)
Vars Clauses Max-Flips | Cache: %-Solved, Time | No Cache: %-Solved, Time
% 0.003s 100% 0.07s
% 0.02s 100% 1.3s
% 0.25s 100% 33s
% 0.6s 90% 96s
% 3.3s 50% 187s
% 3.8s 35% 181s

4 Conclusions

We evaluated four SAT algorithms, GSAT, HSAT, GSAT with random walk, and WalkSAT, against a variety of SAT problems. For hard random 3-SAT formulas, and especially for N ≤ 300, a stable trend was found, in which HSAT outperformed the other algorithms in running time, number of flips, and number of tries. It was followed by WalkSAT, GSAT with random walk, and GSAT. For greater values of N, HSAT continued to outperform the other algorithms, though not as stably or significantly. For other benchmarks, different trends were found. When the problems had more structure, such as in the quasigroup completion, blocks world planning, logistics, and graph coloring problems, GSAT with random walk and WalkSAT significantly outperformed the former two greedy local search algorithms. The performance of the evaluated algorithms therefore depends greatly on the nature of the SAT problem; however, in all cases, the variants of GSAT outperformed GSAT itself.

References

[1] M. Davis and H. Putnam, "A computing procedure for quantification theory," JACM, vol. 7, no. 3.
[2] B. Selman, H. Levesque, and D. Mitchell, "A new method for solving hard satisfiability problems," in AAAI.
[3] I. Gent and T. Walsh, "Towards an understanding of hill-climbing procedures for SAT," in AAAI.
[4] B. Selman, H. Kautz, and B. Cohen, "Noise strategies for improving local search," in AAAI, 1994.

Table 6: Hirsch Benchmarks (2 trials per entry)
Name Vars Clauses Algorithm Max-Flips %-Solved Time Flips Tries
hgen GSAT %
HSAT % 101s
GSAT-W % 169s
WalkSAT %
hgen GSAT % 256s
HSAT % 45s
GSAT-W % 319s
WalkSAT % 138s
hgen GSAT %
HSAT % 116s
GSAT-W %
WalkSAT % 39s
hgen GSAT %
HSAT % 77s
GSAT-W %
WalkSAT % 116s
hgen GSAT %
HSAT % 269s
GSAT-W % 354s
WalkSAT %
hgen GSAT % 429s
HSAT % 217s
GSAT-W % 249s
WalkSAT % 155s
hgen GSAT %
HSAT % 226s
GSAT-W % 211s
WalkSAT % 301s
hgen GSAT %
HSAT % 543s
GSAT-W %
WalkSAT %
hgen GSAT %
HSAT % 280s
GSAT-W %
WalkSAT %
hgen GSAT % 340s
HSAT % 20s
GSAT-W % 117s
WalkSAT % 317s


More information

Set 5: Constraint Satisfaction Problems

Set 5: Constraint Satisfaction Problems Set 5: Constraint Satisfaction Problems ICS 271 Fall 2014 Kalev Kask ICS-271:Notes 5: 1 The constraint network model Outline Variables, domains, constraints, constraint graph, solutions Examples: graph-coloring,

More information

Simple mechanisms for escaping from local optima:

Simple mechanisms for escaping from local optima: The methods we have seen so far are iterative improvement methods, that is, they get stuck in local optima. Simple mechanisms for escaping from local optima: I Restart: re-initialise search whenever a

More information

Watching Clauses in Quantified Boolean Formulae

Watching Clauses in Quantified Boolean Formulae Watching Clauses in Quantified Boolean Formulae Andrew G D Rowley University of St. Andrews, Fife, Scotland agdr@dcs.st-and.ac.uk Abstract. I present a way to speed up the detection of pure literals and

More information

An Introduction to SAT Solvers

An Introduction to SAT Solvers An Introduction to SAT Solvers Knowles Atchison, Jr. Fall 2012 Johns Hopkins University Computational Complexity Research Paper December 11, 2012 Abstract As the first known example of an NP Complete problem,

More information

EECS 219C: Computer-Aided Verification Boolean Satisfiability Solving. Sanjit A. Seshia EECS, UC Berkeley

EECS 219C: Computer-Aided Verification Boolean Satisfiability Solving. Sanjit A. Seshia EECS, UC Berkeley EECS 219C: Computer-Aided Verification Boolean Satisfiability Solving Sanjit A. Seshia EECS, UC Berkeley Project Proposals Due Friday, February 13 on bcourses Will discuss project topics on Monday Instructions

More information

Example: Map coloring

Example: Map coloring Today s s lecture Local Search Lecture 7: Search - 6 Heuristic Repair CSP and 3-SAT Solving CSPs using Systematic Search. Victor Lesser CMPSCI 683 Fall 2004 The relationship between problem structure and

More information

space. We will apply the idea of enforcing local consistency to GSAT with the hope that its performance can

space. We will apply the idea of enforcing local consistency to GSAT with the hope that its performance can GSAT and Local Consistency 3 Kalev Kask Computer Science Department University of California at Irvine Irvine, CA 92717 USA Rina Dechter Computer Science Department University of California at Irvine Irvine,

More information

Using Learning Automata to Enhance Local-Search Based SAT Solvers

Using Learning Automata to Enhance Local-Search Based SAT Solvers Using Learning Automata to Enhance Local-Search Based SAT Solvers with Learning Capability 63 5 Using Learning Automata to Enhance Local-Search Based SAT Solvers with Learning Capability Ole-Christoffer

More information

6.034 Notes: Section 3.1

6.034 Notes: Section 3.1 6.034 Notes: Section 3.1 Slide 3.1.1 In this presentation, we'll take a look at the class of problems called Constraint Satisfaction Problems (CSPs). CSPs arise in many application areas: they can be used

More information

A Stochastic Non-CNF SAT Solver

A Stochastic Non-CNF SAT Solver A Stochastic Non-CNF SAT Solver Rafiq Muhammad and Peter J. Stuckey NICTA Victoria Laboratory, Department of Computer Science and Software Engineering, The University of Melbourne, Victoria 3010, Australia

More information

EECS 219C: Formal Methods Boolean Satisfiability Solving. Sanjit A. Seshia EECS, UC Berkeley

EECS 219C: Formal Methods Boolean Satisfiability Solving. Sanjit A. Seshia EECS, UC Berkeley EECS 219C: Formal Methods Boolean Satisfiability Solving Sanjit A. Seshia EECS, UC Berkeley The Boolean Satisfiability Problem (SAT) Given: A Boolean formula F(x 1, x 2, x 3,, x n ) Can F evaluate to 1

More information

Evolving Variable-Ordering Heuristics for Constrained Optimisation

Evolving Variable-Ordering Heuristics for Constrained Optimisation Griffith Research Online https://research-repository.griffith.edu.au Evolving Variable-Ordering Heuristics for Constrained Optimisation Author Bain, Stuart, Thornton, John, Sattar, Abdul Published 2005

More information

Stochastic greedy local search Chapter 7

Stochastic greedy local search Chapter 7 Stochastic greedy local search Chapter 7 ICS-275 Winter 2016 Example: 8-queen problem Main elements Choose a full assignment and iteratively improve it towards a solution Requires a cost function: number

More information

Random Walk With Continuously Smoothed Variable Weights

Random Walk With Continuously Smoothed Variable Weights Random Walk With Continuously Smoothed Variable Weights Steven Prestwich Cork Constraint Computation Centre Department of Computer Science University College Cork, Ireland s.prestwich@cs.ucc.ie Abstract.

More information

Lecture: Iterative Search Methods

Lecture: Iterative Search Methods Lecture: Iterative Search Methods Overview Constructive Search is exponential. State-Space Search exhibits better performance on some problems. Research in understanding heuristic and iterative search

More information

Solving Problems with Hard and Soft Constraints Using a Stochastic Algorithm for MAX-SAT

Solving Problems with Hard and Soft Constraints Using a Stochastic Algorithm for MAX-SAT This paper appeared at the st International Joint Workshop on Artificial Intelligence and Operations Research, Timberline, Oregon, 995. Solving Problems with Hard and Soft Constraints Using a Stochastic

More information

CMU-Q Lecture 8: Optimization I: Optimization for CSP Local Search. Teacher: Gianni A. Di Caro

CMU-Q Lecture 8: Optimization I: Optimization for CSP Local Search. Teacher: Gianni A. Di Caro CMU-Q 15-381 Lecture 8: Optimization I: Optimization for CSP Local Search Teacher: Gianni A. Di Caro LOCAL SEARCH FOR CSP Real-life CSPs can be very large and hard to solve Methods so far: construct a

More information

Set 5: Constraint Satisfaction Problems

Set 5: Constraint Satisfaction Problems Set 5: Constraint Satisfaction Problems ICS 271 Fall 2012 Rina Dechter ICS-271:Notes 5: 1 Outline The constraint network model Variables, domains, constraints, constraint graph, solutions Examples: graph-coloring,

More information

Captain Jack: New Variable Selection Heuristics in Local Search for SAT

Captain Jack: New Variable Selection Heuristics in Local Search for SAT Captain Jack: New Variable Selection Heuristics in Local Search for SAT Dave Tompkins, Adrian Balint, Holger Hoos SAT 2011 :: Ann Arbor, Michigan http://www.cs.ubc.ca/research/captain-jack Key Contribution:

More information

Module 4. Constraint satisfaction problems. Version 2 CSE IIT, Kharagpur

Module 4. Constraint satisfaction problems. Version 2 CSE IIT, Kharagpur Module 4 Constraint satisfaction problems Lesson 10 Constraint satisfaction problems - II 4.5 Variable and Value Ordering A search algorithm for constraint satisfaction requires the order in which variables

More information

v b) Λ (a v b) Λ (a v b) AND a v b a _ v b v b

v b) Λ (a v b) Λ (a v b) AND a v b a _ v b v b A NN Algorithm for Boolean Satisability Problems William M. Spears AI Center - Code 5514 Naval Research Laboratory Washington, D.C. 20375-5320 202-767-9006 (W) 202-767-3172 (Fax) spears@aic.nrl.navy.mil

More information

WalkSAT: Solving Boolean Satisfiability via Stochastic Search

WalkSAT: Solving Boolean Satisfiability via Stochastic Search WalkSAT: Solving Boolean Satisfiability via Stochastic Search Connor Adsit cda8519@rit.edu Kevin Bradley kmb3398@rit.edu December 10, 2014 Christian Heinrich cah2792@rit.edu Contents 1 Overview 1 2 Research

More information

Solving the Boolean Satisfiability Problem Using Multilevel Techniques

Solving the Boolean Satisfiability Problem Using Multilevel Techniques Solving the Boolean Satisfiability Problem Using Multilevel Techniques Sirar Salih Yujie Song Supervisor Associate Professor Noureddine Bouhmala This Master s Thesis is carried out as a part of the education

More information

Set 5: Constraint Satisfaction Problems

Set 5: Constraint Satisfaction Problems Set 5: Constraint Satisfaction Problems ICS 271 Fall 2013 Kalev Kask ICS-271:Notes 5: 1 The constraint network model Outline Variables, domains, constraints, constraint graph, solutions Examples: graph-coloring,

More information

The implementation is written in Python, instructions on how to run it are given in the last section.

The implementation is written in Python, instructions on how to run it are given in the last section. Algorithm I chose to code a version of the WALKSAT algorithm. This algorithm is based on the one presented in Russell and Norvig p226, copied here for simplicity: function WALKSAT(clauses max-flips) returns

More information

Vertex Cover Approximations

Vertex Cover Approximations CS124 Lecture 20 Heuristics can be useful in practice, but sometimes we would like to have guarantees. Approximation algorithms give guarantees. It is worth keeping in mind that sometimes approximation

More information

Boolean Functions (Formulas) and Propositional Logic

Boolean Functions (Formulas) and Propositional Logic EECS 219C: Computer-Aided Verification Boolean Satisfiability Solving Part I: Basics Sanjit A. Seshia EECS, UC Berkeley Boolean Functions (Formulas) and Propositional Logic Variables: x 1, x 2, x 3,, x

More information

The Resolution Algorithm

The Resolution Algorithm The Resolution Algorithm Introduction In this lecture we introduce the Resolution algorithm for solving instances of the NP-complete CNF- SAT decision problem. Although the algorithm does not run in polynomial

More information

Random Subset Optimization

Random Subset Optimization Random Subset Optimization Boi Faltings and Quang-Huy Nguyen Artificial Intelligence Laboratory (LIA), Swiss Federal Institute of Technology (EPFL), IN-Ecublens, CH-1015 Ecublens, Switzerland, boi.faltings

More information

Kalev Kask and Rina Dechter

Kalev Kask and Rina Dechter From: AAAI-96 Proceedings. Copyright 1996, AAAI (www.aaai.org). All rights reserved. A Graph-Based Method for Improving GSAT Kalev Kask and Rina Dechter Department of Information and Computer Science University

More information

4.1 Review - the DPLL procedure

4.1 Review - the DPLL procedure Applied Logic Lecture 4: Efficient SAT solving CS 4860 Spring 2009 Thursday, January 29, 2009 The main purpose of these notes is to help me organize the material that I used to teach today s lecture. They

More information

QingTing: A Fast SAT Solver Using Local Search and E cient Unit Propagation

QingTing: A Fast SAT Solver Using Local Search and E cient Unit Propagation QingTing: A Fast SAT Solver Using Local Search and E cient Unit Propagation Xiao Yu Li, Matthias F. Stallmann, and Franc Brglez Dept. of Computer Science, NC State Univ., Raleigh, NC 27695, USA {xyli,mfms,brglez}@unity.ncsu.edu

More information

Full CNF Encoding: The Counting Constraints Case

Full CNF Encoding: The Counting Constraints Case Full CNF Encoding: The Counting Constraints Case Olivier Bailleux 1 and Yacine Boufkhad 2 1 LERSIA, Université de Bourgogne Avenue Alain Savary, BP 47870 21078 Dijon Cedex olivier.bailleux@u-bourgogne.fr

More information

Solving the Maximum Satisfiability Problem Using an Evolutionary Local Search Algorithm

Solving the Maximum Satisfiability Problem Using an Evolutionary Local Search Algorithm 154 The International Arab Journal of Information Technology, Vol. 2, No. 2, April 2005 Solving the Maximum Satisfiability Problem Using an Evolutionary Local Search Algorithm Mohamed El Bachir Menai 1

More information

Pbmodels Software to Compute Stable Models by Pseudoboolean Solvers

Pbmodels Software to Compute Stable Models by Pseudoboolean Solvers Pbmodels Software to Compute Stable Models by Pseudoboolean Solvers Lengning Liu and Mirosław Truszczyński Department of Computer Science, University of Kentucky, Lexington, KY 40506-0046, USA Abstract.

More information

A Multilevel Greedy Algorithm for the Satisfiability Problem

A Multilevel Greedy Algorithm for the Satisfiability Problem A Multilevel Greedy Algorithm for the Satisfiability Problem 3 Noureddine Bouhmala 1 and Xing Cai 2 1 Vestfold University College, 2 Simula Research Laboratory, Norway 1. Introduction The satisfiability

More information

Trends in Constraint Programming. Frédéric BENHAMOU Narendra JUSSIEN Barry O SULLIVAN

Trends in Constraint Programming. Frédéric BENHAMOU Narendra JUSSIEN Barry O SULLIVAN Trends in Constraint Programming Frédéric BENHAMOU Narendra JUSSIEN Barry O SULLIVAN November 24, 2006 2 Contents FIRST PART. LOCAL SEARCH TECHNIQUES IN CONSTRAINT SATISFAC- TION..........................................

More information

An Adaptive Noise Mechanism for WalkSAT

An Adaptive Noise Mechanism for WalkSAT From: AAAI-02 Proceedings. Copyright 2002, AAAI (www.aaai.org). All rights reserved. An Adaptive Noise Mechanism for WalkSAT Holger H. Hoos University of British Columbia Computer Science Department 2366

More information

Extending the Reach of SAT with Many-Valued Logics

Extending the Reach of SAT with Many-Valued Logics Extending the Reach of SAT with Many-Valued Logics Ramón Béjar a, Alba Cabiscol b,césar Fernández b, Felip Manyà b and Carla Gomes a a Dept. of Comp. Science, Cornell University, Ithaca, NY 14853 USA,

More information

Towards an Understanding of Hill-climbing. Procedures for SAT. Ian P. Gent. Abstract

Towards an Understanding of Hill-climbing. Procedures for SAT. Ian P. Gent. Abstract Towards an Understanding of Hill-climbing Procedures for SAT Ian P. Gent Toby Walsh y Draft of January 12, 1993 Abstract Recently several local hill-climbing procedures for propositional satis- ability

More information

CMPUT 366 Intelligent Systems

CMPUT 366 Intelligent Systems CMPUT 366 Intelligent Systems Assignment 1 Fall 2004 Department of Computing Science University of Alberta Due: Thursday, September 30 at 23:59:59 local time Worth: 10% of final grade (5 questions worth

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems Tuomas Sandholm Carnegie Mellon University Computer Science Department [Read Chapter 6 of Russell & Norvig] Constraint satisfaction problems (CSPs) Standard search problem:

More information

Speeding Up the ESG Algorithm

Speeding Up the ESG Algorithm Speeding Up the ESG Algorithm Yousef Kilani 1 and Abdullah. Mohdzin 2 1 Prince Hussein bin Abdullah Information Technology College, Al Al-Bayt University, Jordan 2 Faculty of Information Science and Technology,

More information

Homework 2: Search and Optimization

Homework 2: Search and Optimization Scott Chow ROB 537: Learning Based Control October 16, 2017 Homework 2: Search and Optimization 1 Introduction The Traveling Salesman Problem is a well-explored problem that has been shown to be NP-Complete.

More information

Introduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/18/14

Introduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/18/14 600.363 Introduction to Algorithms / 600.463 Algorithms I Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/18/14 23.1 Introduction We spent last week proving that for certain problems,

More information

Note: In physical process (e.g., annealing of metals), perfect ground states are achieved by very slow lowering of temperature.

Note: In physical process (e.g., annealing of metals), perfect ground states are achieved by very slow lowering of temperature. Simulated Annealing Key idea: Vary temperature parameter, i.e., probability of accepting worsening moves, in Probabilistic Iterative Improvement according to annealing schedule (aka cooling schedule).

More information

Local Search. CS 486/686: Introduction to Artificial Intelligence Winter 2016

Local Search. CS 486/686: Introduction to Artificial Intelligence Winter 2016 Local Search CS 486/686: Introduction to Artificial Intelligence Winter 2016 1 Overview Uninformed Search Very general: assumes no knowledge about the problem BFS, DFS, IDS Informed Search Heuristics A*

More information

Solving the Satisfiability Problem Using Finite Learning Automata

Solving the Satisfiability Problem Using Finite Learning Automata International Journal of Computer Science & Applications Vol. 4 Issue 3, pp 15-29 2007 Technomathematics Research Foundation Solving the Satisfiability Problem Using Finite Learning Automata Ole-Christoffer

More information

Parallelizing SAT Solver With specific application on solving Sudoku Puzzles

Parallelizing SAT Solver With specific application on solving Sudoku Puzzles 6.338 Applied Parallel Computing Final Report Parallelizing SAT Solver With specific application on solving Sudoku Puzzles Hank Huang May 13, 2009 This project was focused on parallelizing a SAT solver

More information

Stochastic Local Search for SMT

Stochastic Local Search for SMT DISI - Via Sommarive, 14-38123 POVO, Trento - Italy http://disi.unitn.it Stochastic Local Search for SMT Silvia Tomasi December 2010 Technical Report # DISI-10-060 Contents 1 Introduction 5 2 Background

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Informed Search and Exploration Chapter 4 (4.3 4.6) Searching: So Far We ve discussed how to build goal-based and utility-based agents that search to solve problems We ve also presented

More information

CMPUT 366 Assignment 1

CMPUT 366 Assignment 1 CMPUT 66 Assignment Instructor: R. Greiner Due Date: Thurs, October 007 at start of class The following exercises are intended to further your understanding of agents, policies, search, constraint satisfaction

More information

Generating Satisfiable Problem Instances

Generating Satisfiable Problem Instances Generating Satisfiable Problem Instances Dimitris Achlioptas Microsoft Research Redmond, WA 9852 optas@microsoft.com Carla Gomes Dept. of Comp. Sci. Cornell Univ. Ithaca, NY 4853 gomes@cs.cornell.edu Henry

More information

Heuristic Backtracking Algorithms for SAT

Heuristic Backtracking Algorithms for SAT Heuristic Backtracking Algorithms for SAT A. Bhalla, I. Lynce, J.T. de Sousa and J. Marques-Silva IST/INESC-ID, Technical University of Lisbon, Portugal fateet,ines,jts,jpmsg@sat.inesc.pt Abstract In recent

More information

Horn Formulae. CS124 Course Notes 8 Spring 2018

Horn Formulae. CS124 Course Notes 8 Spring 2018 CS124 Course Notes 8 Spring 2018 In today s lecture we will be looking a bit more closely at the Greedy approach to designing algorithms. As we will see, sometimes it works, and sometimes even when it

More information

ABHELSINKI UNIVERSITY OF TECHNOLOGY

ABHELSINKI UNIVERSITY OF TECHNOLOGY Local Search Algorithms for Random Satisfiability Pekka Orponen (joint work with Sakari Seitz and Mikko Alava) Helsinki University of Technology Local Search Algorithms for Random Satisfiability 1/30 Outline

More information

Satisfiability (SAT) Applications. Extensions/Related Problems. An Aside: Example Proof by Machine. Annual Competitions 12/3/2008

Satisfiability (SAT) Applications. Extensions/Related Problems. An Aside: Example Proof by Machine. Annual Competitions 12/3/2008 15 53:Algorithms in the Real World Satisfiability Solvers (Lectures 1 & 2) 1 Satisfiability (SAT) The original NP Complete Problem. Input: Variables V = {x 1, x 2,, x n }, Boolean Formula Φ (typically

More information

Handbook of Constraint Programming 245 Edited by F. Rossi, P. van Beek and T. Walsh c 2006 Elsevier All rights reserved

Handbook of Constraint Programming 245 Edited by F. Rossi, P. van Beek and T. Walsh c 2006 Elsevier All rights reserved Handbook of Constraint Programming 245 Edited by F. Rossi, P. van Beek and T. Walsh c 2006 Elsevier All rights reserved Chapter 8 Local Search Methods Holger H. Hoos and Edward Tsang Local search is one

More information

Stochastic Local Search Methods for Dynamic SAT an Initial Investigation

Stochastic Local Search Methods for Dynamic SAT an Initial Investigation Stochastic Local Search Methods for Dynamic SAT an Initial Investigation Holger H. Hoos and Kevin O Neill Abstract. We introduce the dynamic SAT problem, a generalisation of the satisfiability problem

More information

Adaptive Memory-Based Local Search for MAX-SAT

Adaptive Memory-Based Local Search for MAX-SAT Adaptive Memory-Based Local Search for MAX-SAT Zhipeng Lü a,b, Jin-Kao Hao b, Accept to Applied Soft Computing, Feb 2012 a School of Computer Science and Technology, Huazhong University of Science and

More information

4 INFORMED SEARCH AND EXPLORATION. 4.1 Heuristic Search Strategies

4 INFORMED SEARCH AND EXPLORATION. 4.1 Heuristic Search Strategies 55 4 INFORMED SEARCH AND EXPLORATION We now consider informed search that uses problem-specific knowledge beyond the definition of the problem itself This information helps to find solutions more efficiently

More information

Local Search. CS 486/686: Introduction to Artificial Intelligence

Local Search. CS 486/686: Introduction to Artificial Intelligence Local Search CS 486/686: Introduction to Artificial Intelligence 1 Overview Uninformed Search Very general: assumes no knowledge about the problem BFS, DFS, IDS Informed Search Heuristics A* search and

More information

An Experimental Evaluation of Conflict Diagnosis and Recursive Learning in Boolean Satisfiability

An Experimental Evaluation of Conflict Diagnosis and Recursive Learning in Boolean Satisfiability An Experimental Evaluation of Conflict Diagnosis and Recursive Learning in Boolean Satisfiability Fadi A. Aloul and Karem A. Sakallah Department of Electrical Engineering and Computer Science University

More information

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18 601.433/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18 22.1 Introduction We spent the last two lectures proving that for certain problems, we can

More information

5. Computational Geometry, Benchmarks and Algorithms for Rectangular and Irregular Packing. 6. Meta-heuristic Algorithms and Rectangular Packing

5. Computational Geometry, Benchmarks and Algorithms for Rectangular and Irregular Packing. 6. Meta-heuristic Algorithms and Rectangular Packing 1. Introduction 2. Cutting and Packing Problems 3. Optimisation Techniques 4. Automated Packing Techniques 5. Computational Geometry, Benchmarks and Algorithms for Rectangular and Irregular Packing 6.

More information

Exact Max-SAT solvers for over-constrained problems

Exact Max-SAT solvers for over-constrained problems J Heuristics (2006) 12: 375 392 DOI 10.1007/s10732-006-7234-9 Exact Max-SAT solvers for over-constrained problems Josep Argelich Felip Manyà C Science + Business Media, LLC 2006 Abstract We present a new

More information

Configuration landscape analysis and backbone guided local search. Part I: Satisfiability and maximum satisfiability

Configuration landscape analysis and backbone guided local search. Part I: Satisfiability and maximum satisfiability Artificial Intelligence 158 (2004) 1 26 www.elsevier.com/locate/artint Configuration landscape analysis and backbone guided local search. Part I: Satisfiability and maximum satisfiability Weixiong Zhang

More information

Exact Algorithms Lecture 7: FPT Hardness and the ETH

Exact Algorithms Lecture 7: FPT Hardness and the ETH Exact Algorithms Lecture 7: FPT Hardness and the ETH February 12, 2016 Lecturer: Michael Lampis 1 Reminder: FPT algorithms Definition 1. A parameterized problem is a function from (χ, k) {0, 1} N to {0,

More information

Satisfiability Solvers

Satisfiability Solvers Satisfiability Solvers Part 1: Systematic Solvers 600.325/425 Declarative Methods - J. Eisner 1 Vars SAT solving has made some progress 100000 10000 1000 100 10 1 1960 1970 1980 1990 2000 2010 Year slide

More information

Lookahead Saturation with Restriction for SAT

Lookahead Saturation with Restriction for SAT Lookahead Saturation with Restriction for SAT Anbulagan 1 and John Slaney 1,2 1 Logic and Computation Program, National ICT Australia Ltd., Canberra, Australia 2 Computer Sciences Laboratory, Australian

More information

SAT-Solving: Performance Analysis of Survey Propagation and DPLL

SAT-Solving: Performance Analysis of Survey Propagation and DPLL SAT-Solving: Performance Analysis of Survey Propagation and DPLL Christian Steinruecken June 2007 Abstract The Boolean Satisfiability Problem (SAT) belongs to the class of NP-complete problems, meaning

More information

A Simple ¾-Approximation Algorithm for MAX SAT

A Simple ¾-Approximation Algorithm for MAX SAT A Simple ¾-Approximation Algorithm for MAX SAT David P. Williamson Joint work with Matthias Poloczek (Cornell), Georg Schnitger (Frankfurt), and Anke van Zuylen (William & Mary) Maximum Satisfiability

More information

Welfare Navigation Using Genetic Algorithm

Welfare Navigation Using Genetic Algorithm Welfare Navigation Using Genetic Algorithm David Erukhimovich and Yoel Zeldes Hebrew University of Jerusalem AI course final project Abstract Using standard navigation algorithms and applications (such

More information

Ar#ficial)Intelligence!!

Ar#ficial)Intelligence!! Introduc*on! Ar#ficial)Intelligence!! Roman Barták Department of Theoretical Computer Science and Mathematical Logic We know how to use heuristics in search BFS, A*, IDA*, RBFS, SMA* Today: What if the

More information

On Computing Minimum Size Prime Implicants

On Computing Minimum Size Prime Implicants On Computing Minimum Size Prime Implicants João P. Marques Silva Cadence European Laboratories / IST-INESC Lisbon, Portugal jpms@inesc.pt Abstract In this paper we describe a new model and algorithm for

More information

The MAX-SAX Problems

The MAX-SAX Problems STOCHASTIC LOCAL SEARCH FOUNDATION AND APPLICATION MAX-SAT & MAX-CSP Presented by: Wei-Lwun Lu 1 The MAX-SAX Problems MAX-SAT is the optimization variant of SAT. Unweighted MAX-SAT: Finds a variable assignment

More information

Where the hard problems are. Toby Walsh Cork Constraint Computation Centre

Where the hard problems are. Toby Walsh Cork Constraint Computation Centre Where the hard problems are Toby Walsh Cork Constraint Computation Centre http://4c.ucc.ie/~tw Where the hard problems are That s easy Passing your finals Finding a job Where the hard problems are That

More information

Gen := 0. Create Initial Random Population. Termination Criterion Satisfied? Yes. Evaluate fitness of each individual in population.

Gen := 0. Create Initial Random Population. Termination Criterion Satisfied? Yes. Evaluate fitness of each individual in population. An Experimental Comparison of Genetic Programming and Inductive Logic Programming on Learning Recursive List Functions Lappoon R. Tang Mary Elaine Cali Raymond J. Mooney Department of Computer Sciences

More information

An Evolutionary Framework for 3-SAT Problems

An Evolutionary Framework for 3-SAT Problems Journal of Computing and Information Technology - CIT 11, 2003, 3, 185-191 185 An Evolutionary Framework for 3-SAT Problems István Borgulya University of Pécs, Hungary In this paper we present a new evolutionary

More information

Where Can We Draw The Line?

Where Can We Draw The Line? Where Can We Draw The Line? On the Hardness of Satisfiability Problems Complexity 1 Introduction Objectives: To show variants of SAT and check if they are NP-hard Overview: Known results 2SAT Max2SAT Complexity

More information

Administrative. Local Search!

Administrative. Local Search! Administrative Local Search! CS311 David Kauchak Spring 2013 Assignment 2 due Tuesday before class Written problems 2 posted Class participation http://www.youtube.com/watch? v=irhfvdphfzq&list=uucdoqrpqlqkvctckzqa

More information

The Scaling of Search Cost. Ian P. Gent and Ewan MacIntyre and Patrick Prosser and Toby Walsh.

The Scaling of Search Cost. Ian P. Gent and Ewan MacIntyre and Patrick Prosser and Toby Walsh. The Scaling of Search Cost Ian P. Gent and Ewan MacIntyre and Patrick Prosser and Toby Walsh Apes Research Group, Department of Computer Science, University of Strathclyde, Glasgow G XH, Scotland Email:

More information

A Graph-Based Method for Improving GSAT. Kalev Kask and Rina Dechter. fkkask,

A Graph-Based Method for Improving GSAT. Kalev Kask and Rina Dechter. fkkask, A Graph-Based Method for Improving GSAT Kalev Kask and Rina Dechter Department of Information and Computer Science University of California, Irvine, CA 92717 fkkask, dechterg@ics.uci.edu Abstract GSAT

More information

N-Queens problem. Administrative. Local Search

N-Queens problem. Administrative. Local Search Local Search CS151 David Kauchak Fall 2010 http://www.youtube.com/watch?v=4pcl6-mjrnk Some material borrowed from: Sara Owsley Sood and others Administrative N-Queens problem Assign 1 grading Assign 2

More information

REACTIVE SEARCH FOR MAX-SAT: DIVERSIFICATION- BIAS PROPERTIES WITH PROHIBITIONS AND PENALTIES

REACTIVE SEARCH FOR MAX-SAT: DIVERSIFICATION- BIAS PROPERTIES WITH PROHIBITIONS AND PENALTIES DEPARTMENT OF INFORMATION AND COMMUNICATION TECHNOLOGY 38050 Povo Trento (Italy), Via Sommarive 14 http://dit.unitn.it/ REACTIVE SEARCH FOR MAX-SAT: DIVERSIFICATION- BIAS PROPERTIES WITH PROHIBITIONS AND

More information

Capturing Structure with Satisfiability

Capturing Structure with Satisfiability Capturing Structure with Satisfiability Ramón Béjar 1, Alba Cabiscol 2,Cèsar Fernàndez 2, Felip Manyà 2, and Carla Gomes 1 1 Dept. of Comp. Science, Cornell University, Ithaca, NY 14853, USA {bejar,gomes}@cs.cornell.edu

More information

Real-time Reconfigurable Hardware WSAT Variants

Real-time Reconfigurable Hardware WSAT Variants Real-time Reconfigurable Hardware WSAT Variants Roland Yap, Stella Wang, and Martin Henz School of Computing, National University of Singapore Singapore {ryap,wangzhan,henz}@comp.nus.edu.sg Abstract. Local

More information

The COMPSET Algorithm for Subset Selection

The COMPSET Algorithm for Subset Selection The COMPSET Algorithm for Subset Selection Yaniv Hamo and Shaul Markovitch {hamo,shaulm}@cs.technion.ac.il Computer Science Department, Technion, Haifa 32000, Israel Abstract Subset selection problems

More information

Markov Logic: Representation

Markov Logic: Representation Markov Logic: Representation Overview Statistical relational learning Markov logic Basic inference Basic learning Statistical Relational Learning Goals: Combine (subsets of) logic and probability into

More information
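The greedy hill-climb of Algorithm 1 and the random-walk escape mechanism can be combined in a short sketch. The following is a minimal illustration, not the report's implementation: the clause representation (lists of signed literals), the parameter names, and the default walk probability p are illustrative assumptions. With probability p a variable from an unsatisfied clause is flipped (the random-walk step); otherwise the greedy GSAT step is taken.

```python
import random

def gsat_with_walk(clauses, variables, max_flips=1000, max_tries=50, p=0.5):
    """GSAT with random walk (illustrative sketch).

    Each clause is a list of (variable, sign) literals; the literal is
    satisfied when the variable's truth value equals sign.  All
    parameter defaults here are assumptions for illustration.
    """
    def num_satisfied(assign):
        return sum(any(assign[v] == sign for v, sign in c) for c in clauses)

    for _ in range(max_tries):
        # Random initial truth assignment.
        assign = {v: random.random() < 0.5 for v in variables}
        for _ in range(max_flips):
            if num_satisfied(assign) == len(clauses):
                return assign
            if random.random() < p:
                # Random-walk step: flip a variable occurring in some
                # unsatisfied clause, to help escape local optima.
                unsat = [c for c in clauses
                         if not any(assign[v] == sign for v, sign in c)]
                v, _ = random.choice(random.choice(unsat))
            else:
                # Greedy GSAT step: flip a variable whose flip yields the
                # most satisfied clauses, breaking ties at random.
                def score(v):
                    assign[v] = not assign[v]
                    s = num_satisfied(assign)
                    assign[v] = not assign[v]
                    return s
                best = max(score(v) for v in variables)
                v = random.choice([v for v in variables if score(v) == best])
            assign[v] = not assign[v]
    return None  # no satisfying assignment found within the limits
```

Setting p = 0 recovers plain GSAT; the HSAT variant would replace only the random tie-break in the greedy step with the least-recently-flipped rule.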