WalkSAT: Solving Boolean Satisfiability via Stochastic Search


Connor Adsit
Kevin Bradley
Christian Heinrich

December 10, 2014

Contents

1 Overview
2 Research Papers
2.1 Efficient Implementations of SAT Local Search
2.2 Parallelization of Stochastic Algorithm for Boolean Satisfiability on GPGPU Architecture
2.3 Enhancing the Robustness/Efficiency of Local Search Algorithms for SAT
3 Implementation
3.1 Sequential Program
3.2 Parallel Program
3.3 Developer's Guide
3.4 Usage Guide
4 Data
4.1 Strong Scaling
4.2 Weak Scaling
5 Future Work
6 Discussion
7 Work Breakdown

1 Overview

Boolean satisfiability is a classic problem in computer science theory. It is best expressed as the question: given a collection of literals combined via the Boolean operations AND, OR, and NOT into a Boolean expression,

can we assign a truth value to each literal so that the expression evaluates to true? This problem, usually abbreviated SAT, is credited with being the first problem shown to be NP-Complete. No known algorithm solves it in better than exponential time in the worst case: although a candidate assignment can be checked in polynomial time, finding one may require searching an exponentially large space of possible assignments.

In particular, we focus our attention on a subset of SAT problems known as CNF-SAT, where the Boolean expressions are in Conjunctive Normal Form. The expression is organized into clauses, in which literals (a variable or its negation) are ORed together, and the clauses are ANDed together. For a CNF-SAT expression to be satisfiable, every one of its clauses must be satisfied by a single truth assignment.

There has been considerable research into crafting efficient algorithms that find actual solutions when they exist. Our focus lies not with these algorithms but with a stochastic local search algorithm that estimates an answer. Instead of implementing the popular DPLL algorithm, we randomly generate a truth assignment and then apply a small change to it, repeating this process while the changes improve the number of satisfied clauses. When we are done iterating, we have a candidate solution. Since this process only estimates a solution, repeating it with additional random starting assignments increases the likelihood of finding an actual solution.

2 Research Papers

2.1 Efficient Implementations of SAT Local Search

This paper analyzes variable selection and variable flipping techniques used in various local search SAT algorithms, and provides an overview of what occurs at each step of an iteration in such an algorithm.
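The iterate-and-improve process described in the Overview can be sketched as follows. This is a minimal illustration, not any paper's code or our program's source: the representation (a clause is an int[] of signed DIMACS-style literals, a formula an int[][]) and all names are our own, and the flip selection here is plain first-improvement hill climbing rather than WalkSAT's clause-directed choice.

```java
import java.util.Random;

// Minimal sketch of one walk of stochastic local search: start from a random
// truth assignment, then greedily flip single variables while doing so
// strictly increases the number of satisfied clauses.
class LocalSearchSketch {
    static boolean litTrue(int lit, boolean[] assign) {
        return lit > 0 ? assign[lit] : !assign[-lit];
    }

    static int satisfiedClauses(int[][] clauses, boolean[] assign) {
        int count = 0;
        for (int[] clause : clauses) {
            for (int lit : clause) {
                if (litTrue(lit, assign)) { count++; break; }
            }
        }
        return count;
    }

    static boolean[] walk(int[][] clauses, int nVars, long seed) {
        Random rng = new Random(seed);
        boolean[] assign = new boolean[nVars + 1];    // variables indexed from 1
        for (int v = 1; v <= nVars; v++) assign[v] = rng.nextBoolean();
        boolean improved = true;
        while (improved && satisfiedClauses(clauses, assign) < clauses.length) {
            improved = false;
            int base = satisfiedClauses(clauses, assign);
            for (int v = 1; v <= nVars; v++) {
                assign[v] = !assign[v];               // try flipping v
                if (satisfiedClauses(clauses, assign) > base) { improved = true; break; }
                assign[v] = !assign[v];               // undo if no gain
            }
        }
        return assign;
    }
}
```

A full solver repeats this walk from many random starting assignments, keeping any assignment that satisfies every clause.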
In general there are two parts: deciding which variable to flip, and flipping it. The paper discusses the techniques of WalkSAT and GSAT (two local search satisfiability algorithms), along with some additional options for heuristics and data structures to use in an implementation, and provides results and analysis from testing these ideas. Its novel contributions are the details, mathematical analysis, and numerical measurements evaluating and comparing the various techniques used in local search SAT algorithms. Additionally, the authors introduce (and demonstrate the efficiency of) a new variable flipping technique they name watch2. By keeping track of additional information (a second satisfied literal for each clause), they reduce the effective cost of updating clauses, and the associated bookkeeping, in the 2-1 case: the case in which flipping a literal changes the number of satisfied literals in a clause from 2 to 1, which their analysis showed occurs often.
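The bookkeeping involved can be illustrated as follows: if we track how many literals are currently true in each clause, then a variable's break count is the number of clauses in which it supplies the sole true literal, and the 2-1 case is a count dropping from 2 to 1. This is our own sketch of the idea, not the paper's watch2 code, which avoids recomputing these counts on every flip.

```java
// Per-clause bookkeeping sketch: count the true literals in each clause so a
// variable's break count (clauses driven from satisfied to unsatisfied by
// flipping it) can be read off from clauses whose count would drop 1 -> 0.
// Clauses are int[][] of signed DIMACS-style literals; this layout is ours.
class BreakCountSketch {
    static boolean litTrue(int lit, boolean[] assign) {
        return lit > 0 ? assign[lit] : !assign[-lit];
    }

    static int[] trueLiteralCounts(int[][] clauses, boolean[] assign) {
        int[] counts = new int[clauses.length];
        for (int c = 0; c < clauses.length; c++)
            for (int lit : clauses[c])
                if (litTrue(lit, assign)) counts[c]++;
        return counts;
    }

    // Break count of variable v: clauses where v supplies the only true literal.
    static int breakCount(int[][] clauses, boolean[] assign, int v) {
        int[] counts = trueLiteralCounts(clauses, assign);
        int broken = 0;
        for (int c = 0; c < clauses.length; c++)
            for (int lit : clauses[c])
                if (Math.abs(lit) == v && litTrue(lit, assign) && counts[c] == 1)
                    broken++;
        return broken;
    }
}
```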

Having identified this frequent transition, they devised a way to make it cheaper (at the cost of making other transitions more expensive) and then analyzed the resulting performance. This paper gave us a better understanding of the time complexity implications of the different means of variable selection and updating. It helped us see that, for weak scaling, we cannot increase our problem size merely by increasing the number of clauses, the number of literals, or a simple combination of both while achieving a specified sizeup in computational complexity. For our weak scaling we therefore decided to increase the number of hill climbing iterations, creating a fairly ideal computational sizeup. Furthermore, the paper demonstrated the efficiency and effectiveness of different variable selection techniques. It showed that WalkSAT's selection technique (selecting a random clause, then deciding which variable in it to flip) is far more efficient than deciding which variable to flip as GSAT or SAPS do: those algorithms consider all variables in the expression, not just those within a particular clause, and with a large number of literals the cost of selecting the variable to flip can dominate the run time. Due to this noticeable difference we decided to keep WalkSAT's variable selection for its reduced time complexity, rather than use a different selection technique.

2.2 Parallelization of Stochastic Algorithm for Boolean Satisfiability on GPGPU Architecture

The paper "Parallelization of Stochastic Algorithm for Boolean Satisfiability on GPGPU Architecture" presents a new parallel implementation of WalkSAT for GPGPU architectures, named cwsat.
cwsat aims to improve the performance of local search satisfiability algorithms, in this case WalkSAT, in order to maximize their advantage over other potential solutions. The authors implement cwsat in the CUDA environment to take advantage of the massive number of threads available there. The paper summarizes the difference between the two major categories of SAT solving algorithms, DPLL and local search algorithms like WalkSAT, explains the potential of local search, and reviews the common heuristics: Best, Tabu, Novelty, Novelty+, Rnovelty, and Rnovelty+. The standard implementation of WalkSAT, like ours, uses only the Best heuristic: with some probability p the algorithm picks a random literal in an unsatisfied clause to flip; otherwise it selects the literal whose flip causes the fewest currently satisfied clauses to become unsatisfied. cwsat drops several of these heuristics, implementing only Best, Tabu, Novelty+, and Rnovelty+. It divides the threads into sets of four, with each set assigned its own random model of the problem; each thread carried out its own WalkSAT computation

utilizing a specific heuristic based on its thread ID, so each thread in the group performed WalkSAT over the same model but with a different heuristic. Once a thread finds a satisfying model, all threads stop and that model is reported back to the host; however, it is also possible for cwsat to fail to find a satisfying model in the time allotted, much like normal WalkSAT. This structure allows cwsat to scale to the maximum number of threads available, subject to the memory required to store the problem for each thread and the number of cores on the device. The authors tested cwsat against four WalkSAT implementations, each using one of the heuristics cwsat also uses, over two hundred SAT benchmarks from the SAT11 competition. cwsat improved on WalkSAT in both runtime and success rate on satisfiable problems, across all heuristics. The biggest gain was over WalkSAT with the standard Best heuristic, a 98 percent decrease in runtime; against WalkSAT with Novelty+, the best performing heuristic, the decrease was only 33 percent. cwsat also succeeded in finding a satisfying model in all but one problem, putting it above both Best and Tabu, equal to Rnovelty+, and one behind Novelty+, making it a very reasonable way to improve performance over WalkSAT. While we ultimately decided not to implement a GPGPU solution like cwsat, instead opting for a multicore version of standard WalkSAT, this paper provided insight into possible future implementations, particularly in its analysis of the different heuristics we could test in the future.
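The Best heuristic described above (a random walk move with probability p, otherwise a greedy minimum-break-count move within a random unsatisfied clause) can be sketched as below. This is our own illustration, not cwsat's CUDA code or our program's source; in particular, the break count is recomputed from scratch here, where a real implementation would maintain it incrementally.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the Best heuristic: pick a random unsatisfied clause, then with
// probability p flip a random variable from it (random walk move), otherwise
// flip the variable in it with the lowest break count (greedy move).
class BestHeuristicSketch {
    static boolean litTrue(int lit, boolean[] assign) {
        return lit > 0 ? assign[lit] : !assign[-lit];
    }

    static boolean clauseTrue(int[] clause, boolean[] assign) {
        for (int lit : clause) if (litTrue(lit, assign)) return true;
        return false;
    }

    // Number of clauses satisfied now that would be broken by flipping v.
    static int breakCount(int[][] clauses, boolean[] assign, int v) {
        int broken = 0;
        for (int[] clause : clauses) {
            boolean before = clauseTrue(clause, assign);
            assign[v] = !assign[v];              // tentative flip
            boolean after = clauseTrue(clause, assign);
            assign[v] = !assign[v];              // restore
            if (before && !after) broken++;
        }
        return broken;
    }

    // Caller must ensure at least one clause is currently unsatisfied.
    static int pickVariable(int[][] clauses, boolean[] assign, double p, Random rng) {
        List<int[]> brokenClauses = new ArrayList<>();
        for (int[] clause : clauses)
            if (!clauseTrue(clause, assign)) brokenClauses.add(clause);
        int[] clause = brokenClauses.get(rng.nextInt(brokenClauses.size()));
        if (rng.nextDouble() < p)                // random walk move
            return Math.abs(clause[rng.nextInt(clause.length)]);
        int best = Math.abs(clause[0]);          // greedy move
        for (int lit : clause) {
            int v = Math.abs(lit);
            if (breakCount(clauses, assign, v) < breakCount(clauses, assign, best))
                best = v;
        }
        return best;
    }
}
```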
2.3 Enhancing the Robustness/Efficiency of Local Search Algorithms for SAT

This paper discusses several heuristics used to fine-tune the efficiency of WalkSAT and rates their effectiveness on certain types of problems. Of the heuristics discussed, we found two interesting. The first is the traditional approach, which is what our program implements: with a certain probability p we flip a random variable in a random clause, and with probability 1 - p we pick the variable in a random clause whose flip turns the fewest clauses that currently evaluate to true into false ones (the number of such clauses is known as the break count). The other interesting heuristic, Novelty, builds on this notion of minimizing break counts: when more than one variable has the same break count, we pick the one least recently changed. Additionally, the paper suggests that it could be beneficial to keep track of dependencies between literals. Depending on the structure of the overall expression, the values of certain literals may be determined by a single literal; in some cases they may even be forced to take the same value. Knowing these dependencies ahead of time can limit the search space of the WalkSAT algorithm, but we decided

not to incorporate this into our implementation. The best time to discover these dependencies would be while reading the Boolean expression from the input file, and we were concerned that the additional computation would grow the sequential portion of our algorithm enough to limit the efficiency gains of parallelization; such a feature also seemed better suited to a DPLL approach than to WalkSAT. At the end of the paper, the author compares the standard WalkSAT algorithm against WalkSAT variants with different variable choice heuristics on Latin Square problems (a matrix is a Latin square if each row and column is a permutation of the same set of numbers) and on logistics planning problems. Performance was measured not by the work performed but by the accuracy of the solutions, which we valued less than efficiency, so for the most part we disregarded this part of the paper. Additionally, we could not easily find Latin Square or logistics planning problems in .dimacs format, which led us to largely ignore the data presented.

3 Implementation

3.1 Sequential Program

We represent each clause as a list of literals, which are ORed together. We also keep track of the current truth value of each clause under the current truth assignment, as a bit vector indexed by the clause's position in the input file. Truth assignments are bit vectors as well: each variable's truth value (0 for false, 1 for true) is indexed by the variable's integer identifier, starting at 1. We perform N walks in sequence. In each walk we take at most nstep steps, stopping early if we find a solution or cannot progress to a better assignment by flipping the truth assignment of a single literal.
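This representation can be sketched with java.util.BitSet standing in for both bit vectors (our program's actual classes may be organized differently; the clause layout and names here are our own illustration):

```java
import java.util.BitSet;

// Sketch of the representation described above: the truth assignment and the
// per-clause status are both bit vectors. Literals are signed DIMACS-style
// ints; variables are indexed starting at 1.
class BitVectorSketch {
    // assignment.get(v) is the truth value of variable v
    static boolean litTrue(int lit, BitSet assignment) {
        return lit > 0 ? assignment.get(lit) : !assignment.get(-lit);
    }

    // Recompute the clause-status bit vector: bit c is set iff clause c
    // evaluates to true under the current assignment.
    static BitSet clauseStatus(int[][] clauses, BitSet assignment) {
        BitSet status = new BitSet(clauses.length);
        for (int c = 0; c < clauses.length; c++)
            for (int lit : clauses[c])
                if (litTrue(lit, assignment)) { status.set(c); break; }
        return status;
    }

    // The expression is satisfied when every clause bit is set.
    static boolean satisfied(BitSet status, int nClauses) {
        return status.cardinality() == nClauses;
    }
}
```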
For each possible step, with probability 0.2 we pick the literal in a random broken clause whose flip causes the fewest currently true clauses to become false, and with probability 0.8 we pick a random literal in a random broken clause. If flipping the chosen literal yields a better result than the current one, in that more clauses evaluate to true, we take the next step from the new assignment.

3.2 Parallel Program

Our parallel program uses the same data structures and the same walking algorithm as our sequential program. Because each walk starts from a randomly generated truth assignment, there are no sequential dependencies between walks. We take advantage of this by splitting the total number of walks among the threads given to the program. We assumed that we did

not need any load balancing, because we cannot determine in advance how much work each walk performs, since each starts from a randomly generated truth assignment.

3.3 Developer's Guide

All the required source files are in the deliverables.jar file. To compile, first make sure that the pj2 library is in the class path and that you have extracted all files from the supplied jar. Then simply run:

javac *.java

If desired, you may change the probabilities that determine which method our implementation of WalkSAT uses to decide which variable assignment to flip. By default it chooses the assignment with the lowest break count in a random clause with a chance of 20%. We determine the probability using modular arithmetic: we take a value modulo the final variable P_MOD and check whether the result is at most P_REM to decide whether break count minimization is used to pick the variable (initially P_MOD is 5 and P_REM is 0, giving a 1-in-5 chance).

3.4 Usage Guide

To run the sequential version, run:

java pj2 WalkSATSeq <N> <nstep> <seed> <infile>

where <N> is the number of walks to perform, <nstep> is the maximum number of steps to take in a walk, <seed> is a long value that seeds the random number generator, and <infile> is the .dimacs input file used to create the Boolean expression. To run the parallel version, execute:

java pj2 threads=<NT> WalkSATSmp <N> <nstep> <seed> <infile>

where <N>, <nstep>, <seed>, and <infile> are the same as in the sequential version and <NT> is the desired number of threads. The input files are in .dimacs form, automatically generated from Toughsat.appspot.com. These files are assumed to have a particular structure. They begin with a single line of comments, denoted by a c; this line is ignored. The following line states that the problem is in CNF form, followed by the number of variables and the number of clauses.
Each subsequent line is a clause, containing integers that correspond to literals; a negative integer corresponds to a negated literal. Each clause is terminated by a zero followed by a newline character. We have included many files in the zip which you can use to test our program.
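A reader for this format might look like the following sketch. It is illustrative only, not our program's actual reader: it accepts any number of leading comment lines and ignores the declared variable count.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

// Sketch of a .dimacs reader: skip 'c' comment lines, read the
// "p cnf <variables> <clauses>" header, then collect zero-terminated clauses.
class DimacsSketch {
    static int[][] parse(String text) {
        Scanner in = new Scanner(text);
        int nClauses = 0;
        while (in.hasNext()) {
            String tok = in.next();
            if (tok.equals("c")) {
                in.nextLine();                       // skip comment line
            } else if (tok.equals("p")) {
                in.next();                           // the token "cnf"
                in.nextInt();                        // number of variables
                nClauses = in.nextInt();             // number of clauses
                break;
            }
        }
        List<int[]> clauses = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        while (in.hasNextInt() && clauses.size() < nClauses) {
            int lit = in.nextInt();
            if (lit == 0) {                          // a zero ends the clause
                clauses.add(current.stream().mapToInt(Integer::intValue).toArray());
                current.clear();
            } else {
                current.add(lit);
            }
        }
        return clauses.toArray(new int[0][]);
    }
}
```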

[Figure 1: Problem 1; table of K, RT (msec), Speedup, Efficiency]

[Figure 2: Problem 2; table of K, RT (msec), Speedup, Efficiency]

4 Data

4.1 Strong Scaling

For testing our implementation's strong scaling, we randomly generated 5 different satisfiability problems using Toughsat.appspot.com. Each was a 3CNF problem of the Random k-SAT type with a ten-to-one clause-to-variable ratio, as this created the most difficult problems, starting with ten variables. So problem one had ten variables and one hundred clauses, problem two had twenty and two hundred, and so on, until problem five with fifty variables and five hundred clauses. Each problem was run with one hundred iterations and one thousand flips, scaling the number of cores from one to eight.

[Figure 3: Problem 3; table of K, RT (msec), Speedup, Efficiency]

[Figure 4: Problem 4; table of K, RT (msec), Speedup, Efficiency]

[Figure 5: Problem 5; table of K, RT (msec), Speedup, Efficiency]

Our program demonstrated slightly less than ideal strong scaling, with the worst performance on problem one and the best on problem five, the scaling improving slightly with each problem. The program also showed a fairly steep drop in speedup and efficiency between two and eight cores, with around four to five cores being the breaking point, after which the speedup increases far more slowly, resulting in a drop in efficiency. All of this indicates some form of sequential dependency, the likely culprit being the reading of the input, which is not parallelized. While reading the input takes longer as the problem size increases, the computation time increases more, allowing our parallelization to exert more influence over the run time and leading to the increase in speedup and efficiency. This is perhaps best demonstrated by running the fifth problem on two cores, which yields a speedup giving an efficiency slightly greater than one. The issue of poor scaling past four to five cores also lessens as the problem size increases, leading to a higher efficiency on eight cores for problem five than for problem one. By that number of cores for problem one, the program has already run up against the time spent reading the input, leading to only slight decreases in runtime between six, seven, and eight cores, with almost no change between seven and eight. By contrast, problem five actually saw an increase in efficiency between six and seven cores, and a further increase between seven and eight. Similar jumps in performance occurred for problems three and four, though less consistently. This may also be caused by the nature of the algorithm: the variable flipped is chosen either at random or as the one with the lowest break count, and these two choices involve different amounts of computation, random being the cheaper, so there is more variation depending on which method was used more often.

[Figure 6: Strong Scaling: Running Time vs. Number of Cores]

[Figure 7: Strong Scaling: Efficiency vs. Number of Cores]

[Figure 8: Problem 1, 10 literals, 100 clauses; table of K, Iterations, RT (msec), Sizeup, Efficiency]

[Figure 9: Problem 2, 20 literals, 200 clauses; table of K, Iterations, RT (msec), Sizeup, Efficiency]

4.2 Weak Scaling

For our weak scaling we used 5 different randomly generated satisfiability problems. The first was a CNF expression with 10 literals and 100 clauses, the second had 20 literals and 200 clauses, and so on, maintaining a clause-to-literal ratio of 10 across the problems. Each problem was run with the parameter that a set number of variable flips were to be made per iteration (unless a solution was found). Since it is non-trivial in Boolean satisfiability to find an expression that is precisely twice as computationally complex, instead of increasing the problem size for each scaling example we simply increased the number of hill climbing iterations in proportion to the number of cores used. For instance, with 1 core we used 10 iterations, while on 8 cores we used 80 iterations, which increases the problem size as desired.

[Figure 10: Problem 3, 30 literals, 300 clauses; table of K, Iterations, RT (msec), Sizeup, Efficiency]

[Figure 11: Problem 4, 40 literals, 400 clauses; table of K, Iterations, RT (msec), Sizeup, Efficiency]

[Figure 12: Problem 5, 50 literals, 500 clauses; table of K, Iterations, RT (msec), Sizeup, Efficiency]

Our program showed non-ideal weak scaling, but in our problems the weak scaling efficiency never dropped below 0.85, which is fairly good. The sequential dependency of reading the initial input file, which is not parallelized, is a contributing factor to this non-ideal scaling. Additionally, as our problems increased in size (from 10 literals and 100 clauses to 50 literals and 500 clauses), our efficiency increased. This is likely due to the lower relative overhead of reading the file: while reading the file of a larger problem takes a little more time, the computation is so much larger that this sequential time is dominated by our parallel running time, yielding greater efficiency on the larger problems. Another peculiarity is that our efficiency would drop (as expected), increase (unexpectedly), then drop again (as expected) as the number of cores increased. For example, in problem 5 our efficiency on 2 cores was 0.970, on 3 cores 0.975, and on 4 cores 0.953. This abnormality is probably caused by the nondeterminism of our algorithm.

[Figure 13: Weak Scaling: Running Time vs. Number of Cores]

[Figure 14: Weak Scaling: Efficiency vs. Number of Cores]

Deciding which variable to flip in a clause can be done at random (very inexpensive) or by finding the literal with the lowest break count (more expensive), so the runtime of our algorithm can vary. While one run might make several variable flips based on finding the literal with the lowest break count, another might make several flips by randomly selecting a variable within the clause. The second run could have a lower runtime, potentially so much so that the small sequential file reading overhead is completely eclipsed.

5 Future Work

One possible extension of this project would be to develop a cluster program in addition to our multicore program. With a cluster program we could experiment on Tardis and analyze our weak and strong scaling performance there. We could then generate a set of problems and try to determine a threshold at which it becomes advantageous (or not) to use the cluster version. In practice the cluster version would likely be best for a large scale problem where it is important to arrive at the closest possible solution: a large problem with a large number of iterations (hill climbing attempts). In the same spirit, another extension would be to find and experiment with different, more realistic satisfiability problems. We could use common satisfiability problems, or a suite of them, that are used to benchmark other algorithms; by testing our implementation against these we would be able to compare and contrast our performance with that of other implementations on an apples-to-apples level. Another extension would be to make our program work on weighted satisfiability problems, where each clause in the CNF has a particular weight. The change is that if the problem is not satisfiable, the goal becomes finding the literal assignment that results in the maximum achieved clause value.
The achieved clause value would be the sum of the weights of the satisfied clauses under a particular literal assignment. One final change, given more time, would be to evolve our program into a multicore automatic theorem prover. Automatic theorem proving is one of the many applications of Boolean satisfiability. This would require changing our program to accept a different type of input (a representation of the theorem to prove or disprove) and reducing that problem to the underlying Boolean satisfiability problem or problems that need to be solved. We could then apply our implementation of WalkSAT to solve these in order to prove or disprove the given theorem.
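The weighted objective described above can be sketched as follows (our own illustration; the weight representation and names are hypothetical, since our program does not yet support weights):

```java
// Sketch of the weighted-clause objective: each clause has a weight, and the
// achieved clause value of an assignment is the sum of the weights of the
// clauses it satisfies. Clauses are int[][] of signed DIMACS-style literals.
class WeightedValueSketch {
    static boolean litTrue(int lit, boolean[] assign) {
        return lit > 0 ? assign[lit] : !assign[-lit];
    }

    static long achievedClauseValue(int[][] clauses, long[] weights, boolean[] assign) {
        long value = 0;
        for (int c = 0; c < clauses.length; c++)
            for (int lit : clauses[c])
                if (litTrue(lit, assign)) { value += weights[c]; break; }
        return value;
    }
}
```

A weighted WalkSAT would then maximize this value instead of the raw count of satisfied clauses.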

6 Discussion

We learned about Boolean satisfiability, its applications, the WalkSAT algorithm specifically, and the implementation of WalkSAT in parallel. In particular, WalkSAT's parallel performance depends heavily on the size of the problem given to it: if the problem is too small, the performance increase as cores are added is rather inefficient, due to the limitation of reading in the problem, so parallel implementations of WalkSAT are far more effective on larger problems and somewhat inefficient on smaller ones. We also learned about alternative methods and implementations from our research papers, including DPLL, an alternative to WalkSAT covered by another group, as well as GPGPU implementations of WalkSAT, in the form of cwsat.

7 Work Breakdown

Connor did the majority of the development of the sequential and parallel versions of the program. He also covered the paper "Enhancing the Robustness/Efficiency of Local Search Algorithms for SAT". Christian did all the strong scaling testing and covered the paper "Parallelization of Stochastic Algorithm for Boolean Satisfiability on GPGPU Architecture". Kevin did all the weak scaling testing and covered the paper "Efficient Implementations of SAT Local Search". The presentations and the other sections of the paper were worked on in collaboration.

References

[1] A. Fukunaga, "Efficient implementations of SAT local search." [Online].

[2] S. Nimnon, M. Phadoongsidhi, and N. Utamaphethai, "Parallelization of stochastic algorithm for Boolean satisfiability on GPGPU architecture." [Online].

[3] D. Habet, "Enhancing the robustness/efficiency of local search algorithms for SAT." [Online].


More information

CMPUT 366 Assignment 1

CMPUT 366 Assignment 1 CMPUT 66 Assignment Instructor: R. Greiner Due Date: Thurs, October 007 at start of class The following exercises are intended to further your understanding of agents, policies, search, constraint satisfaction

More information

algorithms, i.e., they attempt to construct a solution piece by piece and are not able to offer a complete solution until the end. The FM algorithm, l

algorithms, i.e., they attempt to construct a solution piece by piece and are not able to offer a complete solution until the end. The FM algorithm, l The FMSAT Satisfiability Solver: Hypergraph Partitioning meets Boolean Satisfiability Arathi Ramani, Igor Markov framania, imarkovg@eecs.umich.edu February 6, 2002 Abstract This report is intended to present

More information

NP-Completeness of 3SAT, 1-IN-3SAT and MAX 2SAT

NP-Completeness of 3SAT, 1-IN-3SAT and MAX 2SAT NP-Completeness of 3SAT, 1-IN-3SAT and MAX 2SAT 3SAT The 3SAT problem is the following. INSTANCE : Given a boolean expression E in conjunctive normal form (CNF) that is the conjunction of clauses, each

More information

Chapter 26 Cluster Heuristic Search

Chapter 26 Cluster Heuristic Search Chapter 26 Cluster Heuristic Search Part I. Preliminaries Part II. Tightly Coupled Multicore Part III. Loosely Coupled Cluster Chapter 18. Massively Parallel Chapter 19. Hybrid Parallel Chapter 20. Tuple

More information

Massively Parallel Approximation Algorithms for the Knapsack Problem

Massively Parallel Approximation Algorithms for the Knapsack Problem Massively Parallel Approximation Algorithms for the Knapsack Problem Zhenkuang He Rochester Institute of Technology Department of Computer Science zxh3909@g.rit.edu Committee: Chair: Prof. Alan Kaminsky

More information

Module 4. Constraint satisfaction problems. Version 2 CSE IIT, Kharagpur

Module 4. Constraint satisfaction problems. Version 2 CSE IIT, Kharagpur Module 4 Constraint satisfaction problems Lesson 10 Constraint satisfaction problems - II 4.5 Variable and Value Ordering A search algorithm for constraint satisfaction requires the order in which variables

More information

Evidence for Invariants in Local Search

Evidence for Invariants in Local Search This paper appears in the Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI-97), Providence, RI, 1997. Copyright 1997 American Association for Artificial Intelligence.

More information

Exact Algorithms Lecture 7: FPT Hardness and the ETH

Exact Algorithms Lecture 7: FPT Hardness and the ETH Exact Algorithms Lecture 7: FPT Hardness and the ETH February 12, 2016 Lecturer: Michael Lampis 1 Reminder: FPT algorithms Definition 1. A parameterized problem is a function from (χ, k) {0, 1} N to {0,

More information

Chapter 10 Part 1: Reduction

Chapter 10 Part 1: Reduction //06 Polynomial-Time Reduction Suppose we could solve Y in polynomial-time. What else could we solve in polynomial time? don't confuse with reduces from Chapter 0 Part : Reduction Reduction. Problem X

More information

Introduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/18/14

Introduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/18/14 600.363 Introduction to Algorithms / 600.463 Algorithms I Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/18/14 23.1 Introduction We spent last week proving that for certain problems,

More information

The Satisfiability Problem [HMU06,Chp.10b] Satisfiability (SAT) Problem Cook s Theorem: An NP-Complete Problem Restricted SAT: CSAT, k-sat, 3SAT

The Satisfiability Problem [HMU06,Chp.10b] Satisfiability (SAT) Problem Cook s Theorem: An NP-Complete Problem Restricted SAT: CSAT, k-sat, 3SAT The Satisfiability Problem [HMU06,Chp.10b] Satisfiability (SAT) Problem Cook s Theorem: An NP-Complete Problem Restricted SAT: CSAT, k-sat, 3SAT 1 Satisfiability (SAT) Problem 2 Boolean Expressions Boolean,

More information

An Analysis and Comparison of Satisfiability Solving Techniques

An Analysis and Comparison of Satisfiability Solving Techniques An Analysis and Comparison of Satisfiability Solving Techniques Ankur Jain, Harsha V. Madhyastha, Craig M. Prince Department of Computer Science and Engineering University of Washington Seattle, WA 98195

More information

Hardware-Software Codesign

Hardware-Software Codesign Hardware-Software Codesign 4. System Partitioning Lothar Thiele 4-1 System Design specification system synthesis estimation SW-compilation intellectual prop. code instruction set HW-synthesis intellectual

More information

P Is Not Equal to NP. ScholarlyCommons. University of Pennsylvania. Jon Freeman University of Pennsylvania. October 1989

P Is Not Equal to NP. ScholarlyCommons. University of Pennsylvania. Jon Freeman University of Pennsylvania. October 1989 University of Pennsylvania ScholarlyCommons Technical Reports (CIS) Department of Computer & Information Science October 1989 P Is Not Equal to NP Jon Freeman University of Pennsylvania Follow this and

More information

Chapter 11 Overlapping

Chapter 11 Overlapping Chapter 11 Overlapping Part I. Preliminaries Part II. Tightly Coupled Multicore Chapter 6. Parallel Loops Chapter 7. Parallel Loop Schedules Chapter 8. Parallel Reduction Chapter 9. Reduction Variables

More information

QingTing: A Fast SAT Solver Using Local Search and E cient Unit Propagation

QingTing: A Fast SAT Solver Using Local Search and E cient Unit Propagation QingTing: A Fast SAT Solver Using Local Search and E cient Unit Propagation Xiao Yu Li, Matthias F. Stallmann, and Franc Brglez Dept. of Computer Science, NC State Univ., Raleigh, NC 27695, USA {xyli,mfms,brglez}@unity.ncsu.edu

More information

CMPUT 366 Intelligent Systems

CMPUT 366 Intelligent Systems CMPUT 366 Intelligent Systems Assignment 1 Fall 2004 Department of Computing Science University of Alberta Due: Thursday, September 30 at 23:59:59 local time Worth: 10% of final grade (5 questions worth

More information

Speeding Up the ESG Algorithm

Speeding Up the ESG Algorithm Speeding Up the ESG Algorithm Yousef Kilani 1 and Abdullah. Mohdzin 2 1 Prince Hussein bin Abdullah Information Technology College, Al Al-Bayt University, Jordan 2 Faculty of Information Science and Technology,

More information

Hybrid solvers for the Boolean Satisfiability problem: an exploration

Hybrid solvers for the Boolean Satisfiability problem: an exploration Rowan University Rowan Digital Works Theses and Dissertations 12-12-2012 Hybrid solvers for the Boolean Satisfiability problem: an exploration Nicole Nelson Follow this and additional works at: http://rdw.rowan.edu/etd

More information

The van der Waerden Number W (2, 6) Is 1132

The van der Waerden Number W (2, 6) Is 1132 The van der Waerden Number W (2, 6) Is 1132 Michal Kouril and Jerome L. Paul CONTENTS 1. Introduction 2. Preprocessing: Eliminating Redundancies 3. Finding Unavoidable Patterns 4. Preprocessing Patterns:

More information

Foundation of Parallel Computing- Term project report

Foundation of Parallel Computing- Term project report Foundation of Parallel Computing- Term project report Shobhit Dutia Shreyas Jayanna Anirudh S N (snd7555@rit.edu) (sj7316@rit.edu) (asn5467@rit.edu) 1. Overview: Graphs are a set of connections between

More information

CSCI 5454 Ramdomized Min Cut

CSCI 5454 Ramdomized Min Cut CSCI 5454 Ramdomized Min Cut Sean Wiese, Ramya Nair April 8, 013 1 Randomized Minimum Cut A classic problem in computer science is finding the minimum cut of an undirected graph. If we are presented with

More information

A Virtual Laboratory for Study of Algorithms

A Virtual Laboratory for Study of Algorithms A Virtual Laboratory for Study of Algorithms Thomas E. O'Neil and Scott Kerlin Computer Science Department University of North Dakota Grand Forks, ND 58202-9015 oneil@cs.und.edu Abstract Empirical studies

More information

Samuel Coolidge, Dan Simon, Dennis Shasha, Technical Report NYU/CIMS/TR

Samuel Coolidge, Dan Simon, Dennis Shasha, Technical Report NYU/CIMS/TR Detecting Missing and Spurious Edges in Large, Dense Networks Using Parallel Computing Samuel Coolidge, sam.r.coolidge@gmail.com Dan Simon, des480@nyu.edu Dennis Shasha, shasha@cims.nyu.edu Technical Report

More information

Set 5: Constraint Satisfaction Problems Chapter 6 R&N

Set 5: Constraint Satisfaction Problems Chapter 6 R&N Set 5: Constraint Satisfaction Problems Chapter 6 R&N ICS 271 Fall 2017 Kalev Kask ICS-271:Notes 5: 1 The constraint network model Outline Variables, domains, constraints, constraint graph, solutions Examples:

More information

Set 5: Constraint Satisfaction Problems

Set 5: Constraint Satisfaction Problems Set 5: Constraint Satisfaction Problems ICS 271 Fall 2014 Kalev Kask ICS-271:Notes 5: 1 The constraint network model Outline Variables, domains, constraints, constraint graph, solutions Examples: graph-coloring,

More information

Satisfiability (SAT) Applications. Extensions/Related Problems. An Aside: Example Proof by Machine. Annual Competitions 12/3/2008

Satisfiability (SAT) Applications. Extensions/Related Problems. An Aside: Example Proof by Machine. Annual Competitions 12/3/2008 15 53:Algorithms in the Real World Satisfiability Solvers (Lectures 1 & 2) 1 Satisfiability (SAT) The original NP Complete Problem. Input: Variables V = {x 1, x 2,, x n }, Boolean Formula Φ (typically

More information

Hashing. Hashing Procedures

Hashing. Hashing Procedures Hashing Hashing Procedures Let us denote the set of all possible key values (i.e., the universe of keys) used in a dictionary application by U. Suppose an application requires a dictionary in which elements

More information

Stochastic greedy local search Chapter 7

Stochastic greedy local search Chapter 7 Stochastic greedy local search Chapter 7 ICS-275 Winter 2016 Example: 8-queen problem Main elements Choose a full assignment and iteratively improve it towards a solution Requires a cost function: number

More information

Implementation of Parallel Path Finding in a Shared Memory Architecture

Implementation of Parallel Path Finding in a Shared Memory Architecture Implementation of Parallel Path Finding in a Shared Memory Architecture David Cohen and Matthew Dallas Department of Computer Science Rensselaer Polytechnic Institute Troy, NY 12180 Email: {cohend4, dallam}

More information

Kalev Kask and Rina Dechter. Department of Information and Computer Science. University of California, Irvine, CA

Kalev Kask and Rina Dechter. Department of Information and Computer Science. University of California, Irvine, CA GSAT and Local Consistency 3 Kalev Kask and Rina Dechter Department of Information and Computer Science University of California, Irvine, CA 92717-3425 fkkask,dechterg@ics.uci.edu Abstract It has been

More information

Supplementary Material for The Generalized PatchMatch Correspondence Algorithm

Supplementary Material for The Generalized PatchMatch Correspondence Algorithm Supplementary Material for The Generalized PatchMatch Correspondence Algorithm Connelly Barnes 1, Eli Shechtman 2, Dan B Goldman 2, Adam Finkelstein 1 1 Princeton University, 2 Adobe Systems 1 Overview

More information

Clustering Using Graph Connectivity

Clustering Using Graph Connectivity Clustering Using Graph Connectivity Patrick Williams June 3, 010 1 Introduction It is often desirable to group elements of a set into disjoint subsets, based on the similarity between the elements in the

More information

Implementation of a Sudoku Solver Using Reduction to SAT

Implementation of a Sudoku Solver Using Reduction to SAT Implementation of a Sudoku Solver Using Reduction to SAT For this project you will develop a Sudoku solver that receives an input puzzle and computes a solution, if one exists. Your solver will: read an

More information

The Resolution Algorithm

The Resolution Algorithm The Resolution Algorithm Introduction In this lecture we introduce the Resolution algorithm for solving instances of the NP-complete CNF- SAT decision problem. Although the algorithm does not run in polynomial

More information

1 Definition of Reduction

1 Definition of Reduction 1 Definition of Reduction Problem A is reducible, or more technically Turing reducible, to problem B, denoted A B if there a main program M to solve problem A that lacks only a procedure to solve problem

More information

Lecture 2: NP-Completeness

Lecture 2: NP-Completeness NP and Latin Squares Instructor: Padraic Bartlett Lecture 2: NP-Completeness Week 4 Mathcamp 2014 In our last class, we introduced the complexity classes P and NP. To motivate why we grouped all of NP

More information

Polynomial SAT-Solver Algorithm Explanation

Polynomial SAT-Solver Algorithm Explanation 1 Polynomial SAT-Solver Algorithm Explanation by Matthias Mueller (a.k.a. Louis Coder) louis@louis-coder.com Explanation Version 1.0 - December 1, 2013 Abstract This document describes an algorithm that

More information

8 NP-complete problem Hard problems: demo

8 NP-complete problem Hard problems: demo Ch8 NPC Millennium Prize Problems http://en.wikipedia.org/wiki/millennium_prize_problems 8 NP-complete problem Hard problems: demo NP-hard (Non-deterministic Polynomial-time hard), in computational complexity

More information

Chapter 2 PRELIMINARIES

Chapter 2 PRELIMINARIES 8 Chapter 2 PRELIMINARIES Throughout this thesis, we work with propositional or Boolean variables, that is, variables that take value in the set {true, false}. A propositional formula F representing a

More information

Parallelizing SAT Solver With specific application on solving Sudoku Puzzles

Parallelizing SAT Solver With specific application on solving Sudoku Puzzles 6.338 Applied Parallel Computing Final Report Parallelizing SAT Solver With specific application on solving Sudoku Puzzles Hank Huang May 13, 2009 This project was focused on parallelizing a SAT solver

More information

Probabilistic Abstraction Lattices: A Computationally Efficient Model for Conditional Probability Estimation

Probabilistic Abstraction Lattices: A Computationally Efficient Model for Conditional Probability Estimation Probabilistic Abstraction Lattices: A Computationally Efficient Model for Conditional Probability Estimation Daniel Lowd January 14, 2004 1 Introduction Probabilistic models have shown increasing popularity

More information

Exploring Performance Tradeoffs in a Sudoku SAT Solver CS242 Project Report

Exploring Performance Tradeoffs in a Sudoku SAT Solver CS242 Project Report Exploring Performance Tradeoffs in a Sudoku SAT Solver CS242 Project Report Hana Lee (leehana@stanford.edu) December 15, 2017 1 Summary I implemented a SAT solver capable of solving Sudoku puzzles using

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems Tuomas Sandholm Carnegie Mellon University Computer Science Department [Read Chapter 6 of Russell & Norvig] Constraint satisfaction problems (CSPs) Standard search problem:

More information

The MAX-SAX Problems

The MAX-SAX Problems STOCHASTIC LOCAL SEARCH FOUNDATION AND APPLICATION MAX-SAT & MAX-CSP Presented by: Wei-Lwun Lu 1 The MAX-SAX Problems MAX-SAT is the optimization variant of SAT. Unweighted MAX-SAT: Finds a variable assignment

More information

Subset Sum - A Dynamic Parallel Solution

Subset Sum - A Dynamic Parallel Solution Subset Sum - A Dynamic Parallel Solution Team Cthulu - Project Report ABSTRACT Tushar Iyer Rochester Institute of Technology Rochester, New York txi9546@rit.edu The subset sum problem is an NP-Complete

More information

Unit 8: Coping with NP-Completeness. Complexity classes Reducibility and NP-completeness proofs Coping with NP-complete problems. Y.-W.

Unit 8: Coping with NP-Completeness. Complexity classes Reducibility and NP-completeness proofs Coping with NP-complete problems. Y.-W. : Coping with NP-Completeness Course contents: Complexity classes Reducibility and NP-completeness proofs Coping with NP-complete problems Reading: Chapter 34 Chapter 35.1, 35.2 Y.-W. Chang 1 Complexity

More information

Giovanni De Micheli. Integrated Systems Centre EPF Lausanne

Giovanni De Micheli. Integrated Systems Centre EPF Lausanne Two-level Logic Synthesis and Optimization Giovanni De Micheli Integrated Systems Centre EPF Lausanne This presentation can be used for non-commercial purposes as long as this note and the copyright footers

More information

Notes on CSP. Will Guaraldi, et al. version 1.7 4/18/2007

Notes on CSP. Will Guaraldi, et al. version 1.7 4/18/2007 Notes on CSP Will Guaraldi, et al version 1.7 4/18/2007 Abstract Original abstract This document is a survey of the fundamentals of what we ve covered in the course up to this point. The information in

More information

Fundamentals of the J Programming Language

Fundamentals of the J Programming Language 2 Fundamentals of the J Programming Language In this chapter, we present the basic concepts of J. We introduce some of J s built-in functions and show how they can be applied to data objects. The pricinpals

More information

Non-deterministic Search techniques. Emma Hart

Non-deterministic Search techniques. Emma Hart Non-deterministic Search techniques Emma Hart Why do local search? Many real problems are too hard to solve with exact (deterministic) techniques Modern, non-deterministic techniques offer ways of getting

More information

JULIA ENABLED COMPUTATION OF MOLECULAR LIBRARY COMPLEXITY IN DNA SEQUENCING

JULIA ENABLED COMPUTATION OF MOLECULAR LIBRARY COMPLEXITY IN DNA SEQUENCING JULIA ENABLED COMPUTATION OF MOLECULAR LIBRARY COMPLEXITY IN DNA SEQUENCING Larson Hogstrom, Mukarram Tahir, Andres Hasfura Massachusetts Institute of Technology, Cambridge, Massachusetts, USA 18.337/6.338

More information

Massively Parallel Approximation Algorithms for the Traveling Salesman Problem

Massively Parallel Approximation Algorithms for the Traveling Salesman Problem Massively Parallel Approximation Algorithms for the Traveling Salesman Problem Vaibhav Gandhi May 14, 2015 Abstract This paper introduces the reader to massively parallel approximation algorithms which

More information

Theorem 2.9: nearest addition algorithm

Theorem 2.9: nearest addition algorithm There are severe limits on our ability to compute near-optimal tours It is NP-complete to decide whether a given undirected =(,)has a Hamiltonian cycle An approximation algorithm for the TSP can be used

More information

Partitioning Methods. Outline

Partitioning Methods. Outline Partitioning Methods 1 Outline Introduction to Hardware-Software Codesign Models, Architectures, Languages Partitioning Methods Design Quality Estimation Specification Refinement Co-synthesis Techniques

More information

A Parallel Algorithm for Finding Sub-graph Isomorphism

A Parallel Algorithm for Finding Sub-graph Isomorphism CS420: Parallel Programming, Fall 2008 Final Project A Parallel Algorithm for Finding Sub-graph Isomorphism Ashish Sharma, Santosh Bahir, Sushant Narsale, Unmil Tambe Department of Computer Science, Johns

More information

An algorithm for Performance Analysis of Single-Source Acyclic graphs

An algorithm for Performance Analysis of Single-Source Acyclic graphs An algorithm for Performance Analysis of Single-Source Acyclic graphs Gabriele Mencagli September 26, 2011 In this document we face with the problem of exploiting the performance analysis of acyclic graphs

More information

NP and computational intractability. Kleinberg and Tardos, chapter 8

NP and computational intractability. Kleinberg and Tardos, chapter 8 NP and computational intractability Kleinberg and Tardos, chapter 8 1 Major Transition So far we have studied certain algorithmic patterns Greedy, Divide and conquer, Dynamic programming to develop efficient

More information

Learning a SAT Solver from Single-

Learning a SAT Solver from Single- Learning a SAT Solver from Single- Bit Supervision Daniel Selsman, Matthew Lamm, Benedikt Bunz, Percy Liang, Leonardo de Moura and David L. Dill Presented By Aditya Sanghi Overview NeuroSAT Background:

More information

1 Inference for Boolean theories

1 Inference for Boolean theories Scribe notes on the class discussion on consistency methods for boolean theories, row convex constraints and linear inequalities (Section 8.3 to 8.6) Speaker: Eric Moss Scribe: Anagh Lal Corrector: Chen

More information

Clustering. Informal goal. General types of clustering. Applications: Clustering in information search and analysis. Example applications in search

Clustering. Informal goal. General types of clustering. Applications: Clustering in information search and analysis. Example applications in search Informal goal Clustering Given set of objects and measure of similarity between them, group similar objects together What mean by similar? What is good grouping? Computation time / quality tradeoff 1 2

More information

Part 1: Written Questions (60 marks):

Part 1: Written Questions (60 marks): COMP 352: Data Structure and Algorithms Fall 2016 Department of Computer Science and Software Engineering Concordia University Combined Assignment #3 and #4 Due date and time: Sunday November 27 th 11:59:59

More information

CS1800 Discrete Structures Fall 2016 Profs. Aslam, Gold, Ossowski, Pavlu, & Sprague December 16, CS1800 Discrete Structures Final

CS1800 Discrete Structures Fall 2016 Profs. Aslam, Gold, Ossowski, Pavlu, & Sprague December 16, CS1800 Discrete Structures Final CS1800 Discrete Structures Fall 2016 Profs. Aslam, Gold, Ossowski, Pavlu, & Sprague December 16, 2016 Instructions: CS1800 Discrete Structures Final 1. The exam is closed book and closed notes. You may

More information

Workloads Programmierung Paralleler und Verteilter Systeme (PPV)

Workloads Programmierung Paralleler und Verteilter Systeme (PPV) Workloads Programmierung Paralleler und Verteilter Systeme (PPV) Sommer 2015 Frank Feinbube, M.Sc., Felix Eberhardt, M.Sc., Prof. Dr. Andreas Polze Workloads 2 Hardware / software execution environment

More information

Steven Skiena. skiena

Steven Skiena.   skiena Lecture 22: Introduction to NP-completeness (1997) Steven Skiena Department of Computer Science State University of New York Stony Brook, NY 11794 4400 http://www.cs.sunysb.edu/ skiena Among n people,

More information

Notes on CSP. Will Guaraldi, et al. version /13/2006

Notes on CSP. Will Guaraldi, et al. version /13/2006 Notes on CSP Will Guaraldi, et al version 1.5 10/13/2006 Abstract This document is a survey of the fundamentals of what we ve covered in the course up to this point. The information in this document was

More information

Graph Structure Over Time

Graph Structure Over Time Graph Structure Over Time Observing how time alters the structure of the IEEE data set Priti Kumar Computer Science Rensselaer Polytechnic Institute Troy, NY Kumarp3@rpi.edu Abstract This paper examines

More information

Hill Climbing. Assume a heuristic value for each assignment of values to all variables. Maintain an assignment of a value to each variable.

Hill Climbing. Assume a heuristic value for each assignment of values to all variables. Maintain an assignment of a value to each variable. Hill Climbing Many search spaces are too big for systematic search. A useful method in practice for some consistency and optimization problems is hill climbing: Assume a heuristic value for each assignment

More information

P and NP (Millenium problem)

P and NP (Millenium problem) CMPS 2200 Fall 2017 P and NP (Millenium problem) Carola Wenk Slides courtesy of Piotr Indyk with additions by Carola Wenk CMPS 2200 Introduction to Algorithms 1 We have seen so far Algorithms for various

More information

Example of a Demonstration that a Problem is NP-Complete by reduction from CNF-SAT

Example of a Demonstration that a Problem is NP-Complete by reduction from CNF-SAT 20170926 CNF-SAT: CNF-SAT is a problem in NP, defined as follows: Let E be a Boolean expression with m clauses and n literals (literals = variables, possibly negated), in which - each clause contains only

More information

Set 5: Constraint Satisfaction Problems

Set 5: Constraint Satisfaction Problems Set 5: Constraint Satisfaction Problems ICS 271 Fall 2012 Rina Dechter ICS-271:Notes 5: 1 Outline The constraint network model Variables, domains, constraints, constraint graph, solutions Examples: graph-coloring,

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2013 Soleymani Course material: Artificial Intelligence: A Modern Approach, 3 rd Edition,

More information

PARALLELIZATION OF THE NELDER-MEAD SIMPLEX ALGORITHM

PARALLELIZATION OF THE NELDER-MEAD SIMPLEX ALGORITHM PARALLELIZATION OF THE NELDER-MEAD SIMPLEX ALGORITHM Scott Wu Montgomery Blair High School Silver Spring, Maryland Paul Kienzle Center for Neutron Research, National Institute of Standards and Technology

More information

Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms

Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms Frank Hutter 1, Youssef Hamadi 2, Holger Hoos 1, and Kevin Leyton-Brown 1 1 University of British Columbia, Vancouver,

More information

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18

/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18 601.433/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18 22.1 Introduction We spent the last two lectures proving that for certain problems, we can

More information