
A NN Algorithm for Boolean Satisfiability Problems

William M. Spears
AI Center - Code 5514
Naval Research Laboratory
Washington, D.C. 20375-5320
202-767-9006 (W), 202-767-3172 (Fax)
spears@aic.nrl.navy.mil

ABSTRACT

Satisfiability (SAT) refers to the task of finding a truth assignment that makes an arbitrary boolean expression true. This paper compares a neural network algorithm (NNSAT) with GSAT [4], a greedy algorithm for solving satisfiability problems. GSAT can solve problem instances that are difficult for traditional satisfiability algorithms. Results suggest that NNSAT scales better as the number of variables increases, solving at least as many hard SAT problems.

1. Introduction

Satisfiability (SAT) refers to the task of finding a truth assignment that makes an arbitrary boolean expression true. For example, the boolean expression a ∧ b is true if and only if the boolean variables a and b are both true. Satisfiability is of interest to the logic, operations research, and computational complexity communities. Due to the emphasis of the logic community, traditional satisfiability algorithms tend to be sound and complete¹. However, [4] point out that there exist classes of satisfiability problems that are extremely hard for these algorithms, and they have created a greedy algorithm (GSAT) that is sound yet incomplete (i.e., there is no guarantee that GSAT will find a satisfying assignment if one exists). The advantage of GSAT is that it can often solve problems that are difficult for the traditional algorithms. Other recent work has also concentrated on incomplete algorithms for satisfiability ([1], [6], [8], [2]). However, comparisons between the algorithms have been difficult to perform, due to a lack of agreement on what constitutes a reasonable test set of problems. One nice feature of [4] is that a class of hard problems is very precisely defined. This paper compares GSAT with a novel neural network approach on that class of hard problems. The results indicate that the neural network approach (NNSAT) scales better as the number of variables increases, solving at least as many hard problems.

2. GSAT and NNSAT

GSAT assumes that the boolean expressions are in conjunctive normal form (CNF)². After generating a random truth assignment, it tries new assignments by flipping the truth assignment of the variable that leads to the largest increase in the number of true clauses. GSAT is greedy because it always tries to increase the number of true clauses. If it is unable to do this it will make a "sideways" move (i.e., change the truth assignment of a variable although the number of true clauses remains constant). GSAT can make a "backwards" move, but only if other moves are not available. Furthermore, it cannot make two backwards moves in a row, since a backwards move guarantees that it is possible to increase the number of true clauses on the next move.

¹ Soundness means that any solution found must be correct. Completeness means that a solution must be found if one exists.
² CNF refers to boolean expressions that are a conjunction of clauses, each clause (conjunct) being a disjunction of negated or non-negated boolean variables. For example, (¬a ∨ b) ∧ (a ∨ ¬b) ∧ (¬a ∨ ¬b) is in CNF.
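To make GSAT's basic move concrete, here is a minimal sketch of one greedy flip. It assumes a CNF formula represented as a list of clauses, each clause a list of signed integers (v for a variable, -v for its negation); the representation and function names are illustrative assumptions, not taken from [4].

    import random

    def num_true_clauses(clauses, assignment):
        """Count clauses containing at least one true literal.
        assignment[v] holds the truth value of variable v (1-indexed)."""
        return sum(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                   for clause in clauses)

    def gsat_flip(clauses, assignment, n_vars):
        """Flip the variable whose flip yields the most true clauses,
        breaking ties at random. Always taking the best available flip
        also yields GSAT's sideways moves (equal score) and backward
        moves (lower score, when nothing better exists)."""
        best_score, best_vars = -1, []
        for v in range(1, n_vars + 1):
            assignment[v] = not assignment[v]      # tentative flip
            score = num_true_clauses(clauses, assignment)
            assignment[v] = not assignment[v]      # undo
            if score > best_score:
                best_score, best_vars = score, [v]
            elif score == best_score:
                best_vars.append(v)
        chosen = random.choice(best_vars)
        assignment[chosen] = not assignment[chosen]
        return best_score

A real implementation would incrementally update clause counts rather than rescanning every clause for every candidate flip, but the greedy move selection is the same.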

Since SAT is a constraint satisfaction problem, it can be modeled as a Hopfield neural network [3]. This paper describes NNSAT, a Hopfield network algorithm for solving SAT problems. NNSAT is not restricted to CNF and can handle arbitrary boolean expressions. The neural network graph is based on the AND/OR parse tree of the boolean expression. It is not necessarily a tree, however, because each instance of the same boolean variable is represented by only one leaf node, which may have multiple parents. The resulting structure is a rooted, directed, acyclic graph. All edges in the graph are directed toward the root node (i.e., there are no symmetric edges). An edge has weight w_ij = -1 if there exists a NOT constraint from node i to node j; otherwise it has weight 1. Each node i can be true or false, which is represented using an activation a_i of 1 and -1, respectively. See Figure 1 for an example, where NOTs are represented by dashed edges. It is important to note that the "arrows" in Figure 1 are directed from children to parents.

[Fig. 1: The neural network for (¬a ∨ b) ∧ (a ∨ ¬b) ∧ (¬a ∨ ¬b): an AND root node with three OR children, one per conjunct, all sharing the leaf nodes a and b; NOT constraints are shown as dashed edges.]

Let Energy be the global energy state of the network, representing the degree to which the boolean constraints are satisfied. Energy is the combination of local energy contributions at each node: $Energy = \sum_i Energy_i = \sum_i net_i a_i$. If all boolean constraints are satisfied, then each local energy contribution will be maximized, and the total energy of the system will be maximized. In this formulation net_i represents the net input to a node i from its immediate neighbors. If the net input is positive, then a_i must be 1 in order to have a positive energy contribution. If the net input is negative, then a_i must be -1. In NNSAT net_i is normalized to lie between -1 and 1. Because of this, Energy_i = 1 if and only if all constraints are satisfied at node i. If a node i violates all constraints, then Energy_i = -1. The energy of a solution is simply $Energy = \sum_i Energy_i = n$, where n is the number of nodes in the network. All solutions have this energy, and all non-solutions have lower energy.

The computation of net_i for SAT problems is reasonably complex. Informally, the net input net_i is computed using upstream and downstream net inputs to node i. Downstream net inputs flow from the children of a node, while upstream net inputs flow from the parents of a node. If a node is a leaf node, there is no reason to consider the downstream net input. If a node is an interior node and there is no upstream net input, only the downstream net input is considered. If, however, there is upstream net input, it is averaged with the downstream net input (this will be expressed formally at the end of this section). Finally, the root node must always be true, since the goal is to satisfy the boolean expression.

The downstream (upstream) net inputs are based on downstream (upstream) constraints. Downstream constraints flow from the children of a node. An AND (OR) node is constrained to be true (false) if and only if all of its children are true (false). An AND (OR) node is constrained to be false (true) if and only if at least one child is false (true). Upstream constraints flow from the parent of a node. If the parent of node i is an AND and the parent is true, then node i is constrained to be true. However, if the parent is an AND and the parent is false, then node i is constrained to be false if and only if all siblings (of node i) are true. Other situations are possible, but they do not constrain the node. Finally, if the parent of node i is an OR and the parent is false, then node i is constrained to be false. However, if the parent is an OR and the parent is true, then node i is constrained to be true if and only if all siblings (of node i) are false. Again, other situations do not constrain the node.
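The downstream and upstream constraint rules just stated can be written as two small predicates over activations and edge weights. The following sketch anticipates the formal definitions of D_i and U_ij given below; the data layout (a node table, an activation map, and a weight map keyed by edge) is an illustrative assumption, not the paper's implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        kind: str                                      # "AND", "OR", or "VAR" (leaf)
        children: list = field(default_factory=list)   # ids of child nodes

    def child_truth(j, i, act, w):
        """Truth of child j as seen by node i: activation times edge
        weight. A weight w[(j, i)] of -1 encodes a NOT constraint."""
        return act[j] * w[(j, i)] == 1

    def D(i, nodes, act, w):
        """Downstream constraint D_i: an AND node is constrained true iff
        all of its children are true; an OR node iff at least one child
        is true. Dnet_i is then +1 if D_i holds and -1 otherwise."""
        truths = [child_truth(j, i, act, w) for j in nodes[i].children]
        return all(truths) if nodes[i].kind == "AND" else any(truths)

    def U(i, j, nodes, act, w):
        """Upstream constraint U_ij on node i from parent j: a true AND
        parent always constrains i; a false AND parent constrains i only
        if all siblings are true. The OR cases mirror these."""
        sibs = [k for k in nodes[j].children if k != i]
        if nodes[j].kind == "AND":
            return act[j] == 1 or (act[j] == -1 and
                                   all(child_truth(k, j, act, w) for k in sibs))
        return act[j] == -1 or (act[j] == 1 and
                                all(not child_truth(k, j, act, w) for k in sibs))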

These rules can be illustrated with a simple example (Figure 1). Suppose we have the following CNF boolean expression of two variables and three conjuncts: (¬a ∨ b) ∧ (a ∨ ¬b) ∧ (¬a ∨ ¬b). Each node is bound by boolean constraints. Since the root must be true, the conjuncts are constrained to be true. This information is used to probabilistically determine the truth values of each boolean variable. For example, suppose a is true. Then an upstream constraint from the first conjunct constrains b to be true, while an upstream constraint from the third conjunct constrains b to be false. Due to the conflicting constraints, it is not clear whether b should be true or false, so b is true with 50% probability. However, suppose a is false. Then the only constraint is an upstream constraint from the second conjunct, which constrains b to be false. Therefore b is false with high probability.

More formally, the net input net_i is a function of two contributions: Unet_i and Dnet_i. Unet_i represents the upstream net input to node i from all its parents. Since some nodes may have multiple parents, Unet_i will be computed using contributions from each parent. Dnet_i represents the downstream net input to node i from its children. Define AND_i (OR_i) as a predicate that is true if and only if node i is an AND (OR). It is now possible to define the downstream constraint D_i for node i as:

$D_i = (AND_i \land \forall j\,(a_j w_{ji} = 1)) \lor (OR_i \land \exists j\,(a_j w_{ji} = 1))$

D_i is true if and only if node i is an AND node and has all children true, or it is an OR node and has at least one child true. To be precise, the truth of a child j is the product of the activation of the child and the weight from the child to node i (a_j w_ji). If the weight w_ji is -1, a NOT constraint is implied, reversing the truth value of the child³. Given this definition, the rules for Dnet_i are:

$\forall i\,[(Dnet_i = 1) \leftrightarrow D_i]; \quad \forall i\,[(Dnet_i = -1) \leftrightarrow \lnot D_i]$

To see how this affects local energy, consider the case where D_i is true (Dnet_i = 1). This states that node i should be true. If we assume there are no upstream constraints, then net_i = Dnet_i = 1 and Energy_i = net_i a_i = a_i. If i is true (a_i = 1) there is a positive energy contribution; if it is false (a_i = -1) there is a negative energy contribution. As an example, consider Figure 2. If a is true and b is false, a node i representing a ∧ ¬b (note that Dnet_i = 1) will contribute positive energy if and only if that node is true. If D_i is false (Dnet_i = -1) then Energy_i = -a_i and a similar situation exists.

[Fig. 2: The neural network for a ∧ ¬b: an AND node with leaf children a and b, where the edge from b carries a NOT constraint (weight -1).]

In a similar fashion, define the upstream constraint U_ij of a node i from a parent j:

$U_{ij} = (AND_j \land ((a_j = 1) \lor [(a_j = -1) \land \forall k \ne i\,(a_k w_{kj} = 1)])) \lor (OR_j \land ((a_j = -1) \lor [(a_j = 1) \land \forall k \ne i\,(a_k w_{kj} = -1)]))$

The rule for Unet_ij is⁴:

$\forall i\,\forall j\,[(Unet_{ij} = a_j w_{ij}) \leftrightarrow U_{ij}]$

A constraint can occur if the parent j is an AND or an OR. If it is an AND and a_j = 1, then node i should have an activation a_i = a_j w_ij = w_ij. To see how this affects local energy, assume for the sake of illustration that there is no downstream constraint and no other upstream constraints. Then net_i = Unet_ij and Energy_i = net_i a_i = w_ij a_i. Thus there is a positive energy contribution if and only if a_i and w_ij have the same sign. As an example, consider Figure 2 again. If a node j representing a ∧ ¬b is true, then there will be a positive energy contribution at the node i representing b if and only if b is false (because w_ij = -1), and a positive energy contribution at the node i representing a if and only if a is true (because for that edge w_ij = 1). Similar constraints occur when parent j is an AND, a_j = -1 and all siblings are true (a_k w_kj = 1), as well as the case when the parent j is an OR node (see the definition of U_ij above).

³ For example, if a is true and b is false, then both children of the node a ∧ ¬b are true.
⁴ See [6] for motivational details.

It is now possible to combine the separate upstream contributions Unet_ij from each parent j into the total upstream contribution Unet_i to node i. Define c_i as the number of upstream constraints on node i at some particular time ($c_i = |\{j \mid U_{ij}\}|$). Suppose that a node has no upstream constraints (c_i = 0). Then let Unet_i = a_i. For illustrative purposes, suppose there are also no downstream constraints. Then net_i = Unet_i and Energy_i = a_i a_i = 1. If there are no constraints associated with node i, there is no reason to change the activation of that node. In essence, the network is rewarded at node i for having no upstream constraints. If there are upstream constraints, Unet_i is normalized to fall within -1 and 1 (to ensure that net_i is normalized between -1 and 1):

$\forall i\,[(c_i = 0) \rightarrow (Unet_i = a_i)]; \quad \forall i\,[(c_i \ne 0) \rightarrow (Unet_i = (\textstyle\sum_j Unet_{ij})/c_i)]$

It is now possible to formally compute net_i. Define the predicates Leaf_i, Int_i, and Root_i to be true if and only if node i is a leaf, an interior node, or the root node, respectively. Then the net input net_i is:

$\forall i\,[Leaf_i \rightarrow (net_i = Unet_i)]; \quad \forall i\,[Root_i \rightarrow (net_i = 1)]$
$\forall i\,[(Int_i \land (c_i = 0)) \rightarrow (net_i = Dnet_i)]; \quad \forall i\,[(Int_i \land (c_i \ne 0)) \rightarrow (net_i = (Dnet_i + Unet_i)/2)]$

NNSAT is shown in Figure 3. The above discussion has explained how to compute net_i in the innermost loop. Both GSAT and NNSAT also have outer loops to control restarts: if the search stagnates, both algorithms are randomly reinitialized until their respective cutoffs are reached or until a satisfying assignment of the boolean variables is found. NNSAT uses simulated annealing to help it escape local optima, by allowing it to make multiple backwards moves in a row (unlike GSAT). A restart occurs when the temperature T reaches some minimum. The decay rate for the annealing schedule is reduced each time NNSAT is restarted and is inversely proportional to the number of nodes n in the network (decay_rate = 1/(restarts * n)). This allows NNSAT to spend more time between restarts when the problem is large or difficult. Assignments counts the number of boolean assignments tested during a run of the algorithm. An assignment consists of evaluating net_i and a_i (via the logistic function) for all nodes, in random order.

Fig. 3: The NNSAT algorithm

    Input: A boolean expression, Cutoff, Max_temp, and Min_temp;
    Output: A satisfying truth assignment for the boolean expression, if found;
    restarts = 0; assignments = 0;
    loop {
        restarts++; j = 0;
        Randomly initialize the activations a_i for each node i;
        loop {
            if the boolean expression is satisfied, terminate successfully;
            T = Max_temp * e^(-j * decay_rate);
            if (T < Min_temp) then exit loop;
            loop through all the nodes i randomly {
                Compute the net input net_i;
                probability(a_i = 1) = 1 / (1 + e^(-net_i / T));
            }
            j++; assignments++;
            if (Cutoff == assignments) terminate unsuccessfully;
        }
    }
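Putting the pieces of Figure 3 together, the per-node computation can be sketched as follows. The helpers assume Dnet_i and the per-parent Unet_ij values have already been computed from the constraints above; all names are illustrative rather than taken from the paper's code.

    import math
    import random

    def net_input(is_root, is_leaf, a_i, dnet_i, unets):
        """net_i per the rules above. `unets` lists the Unet_ij values for
        the c_i parents that actually constrain node i (possibly empty)."""
        if is_root:
            return 1.0                                   # the root is clamped true
        c_i = len(unets)
        unet_i = a_i if c_i == 0 else sum(unets) / c_i   # normalized upstream input
        if is_leaf:
            return unet_i                                # leaves ignore downstream input
        return dnet_i if c_i == 0 else (dnet_i + unet_i) / 2.0

    def temperature(j, max_temp, decay_rate):
        # T = Max_temp * e^(-j * decay_rate), with decay_rate = 1/(restarts * n)
        return max_temp * math.exp(-j * decay_rate)

    def update_activation(net_i, T):
        """Stochastic update: a_i = 1 with probability 1/(1 + e^(-net_i/T)),
        and -1 otherwise."""
        p = 1.0 / (1.0 + math.exp(-net_i / T))
        return 1 if random.random() < p else -1

At high temperature the logistic update is nearly random, which is what permits sequences of backward moves; as T decays toward Min_temp the update increasingly follows the sign of net_i.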

3. Experiments and Results

Particular classes of problems appear to be difficult for satisfiability algorithms. This paper concentrates on one such class, a fixed clause-length model referred to as Random L-SAT. The Random L-SAT problem generator creates random problems in CNF subject to three parameters: the number of variables V, the number of clauses C, and the number of literals per clause L⁵. Each clause is generated by selecting L of the V variables uniformly randomly, negating each variable with probability 50% (a minimal generator sketch appears at the end of this section). Let R denote the ratio of the number of clauses to the number of variables (C/V). According to [4], hard problems are those where R is roughly 4.25, when L is 3. Although we could not obtain the specific problems used in their experiments, we generated random problems using their random problem generator (with L = 3 and R = 4.25).

Table 1: GSAT and NNSAT on hard problems

                      GSAT          NNSAT
    V      C    Assignments    Assignments   # Solved    Cutoff
   100    425        21,000          3,700     18/30     100,000
   200    850       497,000         36,000     16/30     200,000
   300   1275     1,390,000         24,000     11/30     200,000
   400   1700     3,527,000         71,000      8/30     200,000
   500   2125     9,958,000        178,000      6/30     400,000

Table 1 presents the comparison. For NNSAT, Min_temp = 0.01 and Max_temp = 0.15. "Assignments" indicates the average number of assignments tested before a satisfying assignment was found (for those problems actually solved). Since the problems are not all solvable, Table 1 also presents the number of problems solved (out of 30). "Cutoff" indicates the number of assignments tried before NNSAT decides a problem is unsolvable. The results for GSAT are taken from [4]. The percentage solved by GSAT is not reported; however, Selman (personal communication) states that GSAT solves roughly 50% of the 100 and 200 variable problems, and roughly 20%-33% of the 500 variable problems. Also, the cutoffs for GSAT are not fully described. Although the lack of specific information makes a comparison difficult, NNSAT appears to solve roughly the same percentage of hard problems with far fewer assignments.

One problem with the above comparison is that each assignment in NNSAT is computationally more expensive than it is in GSAT. This occurs for two reasons. First, NNSAT can handle arbitrary boolean expressions, while GSAT is specifically designed for CNF expressions. The increase in generality comes at the price of greater overhead. Second, each assignment in GSAT is equivalent to flipping one boolean variable, while an assignment in NNSAT involves flipping potentially multiple variables (leaf nodes). To address the first issue, a specific version of NNSAT (called NNSAT-CNF) was written for CNF expressions, enormously increasing the efficiency of the algorithm. To address the second issue, focus was centered on flips rather than assignments. As described by [7], a flip in GSAT and NNSAT-CNF have almost identical computational complexity, thus making this an ideal measure for comparison⁶.

Table 2 presents the results for NNSAT-CNF. Due to the increased efficiency of the algorithm it was possible to average the results over more problems than when using NNSAT (100 for each choice of V, instead of 30) and to greatly extend the cutoffs (which are still in terms of "assignments" to facilitate comparison with Table 1). Computational time is also reported (the results for GSAT are from [4]).

Table 2: GSAT and NNSAT-CNF on hard problems

                        GSAT                        NNSAT-CNF
    V      C       Flips      Time         Flips   # Solved      Time      Cutoff
   100    425       21,000   0.1 min       31,000    58/100    0.2 min     200,000
   200    850      497,000   2.8 min      396,000    44/100      3 min     400,000
   300   1275    1,390,000    12 min    1,924,000    48/100     13 min     800,000
   400   1700    3,527,000    34 min    2,269,000    45/100     15 min   1,000,000
   500   2125    9,958,000    96 min    4,438,000    41/100     30 min   1,600,000

The results for NNSAT-CNF are encouraging. In terms of "flips" (and time) NNSAT-CNF appears to scale better than GSAT, since it solves a greater percentage of the larger problems with less computational effort. The reason for this performance difference is not clear. However, [4] report that GSAT performs worse when sideways steps are not allowed. A reasonable hypothesis is that GSAT would perform even better if it could occasionally take sequences of backward steps, as do NNSAT and NNSAT-CNF⁷.

⁵ A literal is a negated or non-negated boolean variable.
⁶ In [7] NNSAT-CNF is called SASAT, because with CNF the network is simple and the emphasis is on Simulated Annealing. None of the details of NNSAT as described in Section 2, however, are in that paper.
⁷ GSAT has recently been modified to make heuristically-driven backwards moves, which do indeed improve performance. These special backwards moves also improve the performance of NNSAT-CNF. See [7] for details.
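As noted above, the Random L-SAT generator is easy to reproduce. The following is a minimal sketch with the parameters used in these experiments; it assumes the L variables of a clause are distinct, which is how we read "selecting L of the V variables".

    import random

    def random_lsat(n_vars, ratio=4.25, L=3, rng=random):
        """Generate a Random L-SAT instance: round(ratio * n_vars) clauses,
        each over L distinct variables, each negated with probability 50%.
        Clauses are lists of signed integers (v or -v)."""
        clauses = []
        for _ in range(round(ratio * n_vars)):
            variables = rng.sample(range(1, n_vars + 1), L)
            clauses.append([v if rng.random() < 0.5 else -v for v in variables])
        return clauses

With n_vars = 100 this yields the 425 clauses of the first row of Table 1.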

4. Conclusion

In this paper we consider the application of neural networks to boolean satisfiability problems and compare the resulting algorithm (NNSAT) with a greedy algorithm (GSAT) on a particular class of hard problems. With the given cutoffs, NNSAT appears to satisfy roughly as many hard SAT problems as GSAT. A specialized version of NNSAT for CNF expressions performs even better, solving at least as many hard SAT problems with less work.

There are a number of potential items for future work. First, NNSAT uses a very simple annealing schedule, which is not optimized; recent work in adaptive annealing schedules holds the potential for improved performance. Second, it should be possible to add domain-dependent operators to NNSAT. For example, it may be possible to add stochastic operators based on the Davis-Putnam satisfiability algorithm. Third, NNSAT is intrinsically parallel and is a good match for parallel architectures. Andrew Sohn of the New Jersey Institute of Technology has ported NNSAT-CNF to an AP1000 multiprocessor [5]. Future work will concentrate on porting the general NNSAT algorithm to a parallel architecture.

Acknowledgements

I thank Diana Gordon, Ken De Jong, David Johnson, Bart Selman, Ian Gent, Toby Walsh, Antje Beringer, Andrew Sohn, Mitch Potter, and John Grefenstette for provocative and insightful comments.

References

[1] K. De Jong and W. Spears, "Using Genetic Algorithms to Solve NP-Complete Problems", International Conference on Genetic Algorithms, pp. 124-132, June 1989.
[2] J. Gu, "Efficient Local Search for Very Large-Scale Satisfiability Problems", SIGART Bulletin, 3(1), January 1992.
[3] J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", Proc. Natl. Acad. Sci., 79, pp. 2554-2558, 1982.
[4] B. Selman, H. Levesque, and D. Mitchell, "A New Method for Solving Hard Satisfiability Problems", Proceedings of the 1992 AAAI Conference, pp. 440-446, 1992.
[5] A. Sohn, "Solving Hard Satisfiability Problem with Synchronous Simulated Annealing on the AP1000 Multiprocessor", Proceedings of the Seventh IEEE Symposium on Parallel and Distributed Processing, San Antonio, Texas, pp. 719-722, October 1995.
[6] W. Spears, "Using Neural Networks and Genetic Algorithms as Heuristics for NP-Complete Problems", Master's Thesis, Department of Computer Science, George Mason University, 1990.
[7] W. Spears, "Simulated Annealing for Hard Satisfiability Problems", to appear in the DIMACS Series on Discrete Mathematics and Theoretical Computer Science, 1996.
[8] R. Young and A. Reel, "A Hybrid Genetic Algorithm for a Logic Problem", Proceedings of the 9th European Conference on Artificial Intelligence, pp. 744-746, 1990.