Question. Lecture 11: CSPs I. Course plan. Review: paradigm

Lecture 11: CSPs I

cs221.stanford.edu/q
Question: Find two neighboring countries, one that begins with an A and the other that speaks Hungarian.

CS221 / Autumn 2016 / Liang

Course plan

We've finished our tour of machine learning and state-based models, which brings us to the midpoint of this course. Let's reflect a bit on what we've covered so far: search problems, Markov decision processes, constraint satisfaction problems, adversarial games, Bayesian networks, logic. The progression runs from reflex through states to variables, from low-level intelligence through machine learning to high-level intelligence.

Review: paradigm

Recall the paradigm that we've been working in: separate modeling (what to compute) from algorithms (how to compute it).

Real-world task -> Modeling -> Formal task (model) -> Algorithms -> Program

State-based models

[Modeling] Framework: search problems, MDPs/games. Objective: minimum cost paths, maximum value policies.

Modeling: In the context of state-based models, we seek to find minimum cost paths (for search problems) or maximum value policies (for MDPs and games).

[Algorithms] To compute these solutions, we can either work on the search/game tree or on the state graph. In the former case (tree-based: backtracking, minimax/expectimax), we end up with recursive procedures which take exponential time but require very little memory (generally linear in the size of the solution). In the latter case (graph-based: dynamic programming, UCS, A*, value/policy iteration), where we are fortunate to have few enough states to fit into memory, we can work directly on the graph, which can often yield an exponential savings in time.

[Learning] Methods: structured Perceptron, Q-learning, TD learning. Given that we can find the optimal solution with respect to a fixed model, the final question is where this model actually comes from. Learning provides the answer: from data. You should think of machine learning as not just a way to do binary classification, but more as a way of life, which can be used to support a variety of different models. In the rest of the course, modeling, algorithms, and learning will continue to be the three structural supports of the techniques we develop.

State-based models: takeaway 1

(figure: a small search graph with start S, goal G, and intermediate states A, B)

One high-level takeaway is the motto: specify locally, optimize globally. When we're building a search problem, we only need to specify how the states are connected through actions and what the local action costs are; we need not specify the long-term consequences of taking an action. It is the job of the algorithms to take all of this local information into account and produce globally optimal solutions (minimum cost paths).
This separation is quite powerful in light of modeling and algorithms: having to worry only about local interactions makes modeling easier, but we still get the benefits of a globally optimal solution via algorithms which are constructed independently of the domain-specific details. We will see this local specification + global optimization pattern again in the context of variable-based models.

Key idea: specify locally, optimize globally. Modeling specifies local interactions; algorithms find globally optimal solutions.

State-based models: takeaway 2

The second high-level takeaway, which is core to state-based models, is the notion of state. The state, which summarizes previous actions, is one of the key tools that allows us to manage the exponential search problems frequently encountered in AI. We will see the notion of state appear again in the context of conditional independence in variable-based models.

With states, we were in the mindset of taking a sequence of actions (where order is important) to reach a goal. However, in some tasks, order is irrelevant. In these cases, maybe search isn't the best way to model the task. Let's see an example.

Key idea: state. A state is a summary of all the past actions sufficient to choose future actions optimally. Mindset: move through states (nodes) via actions (edges).

Map coloring

Question: how can we color each of the 7 provinces {red, green, blue} so that no two neighboring provinces have the same color? (one possible solution shown)

Search

As a search problem: State: partial assignment of colors to provinces. Action: assign the next uncolored province a compatible color.

What's next? Exploit the problem structure! Variable ordering doesn't affect correctness. Variables are interdependent in a local way.

We can certainly use search to find an assignment of colors to the provinces of Australia. Let's fix an arbitrary ordering of the provinces. Each state contains an assignment of colors to a subset of the provinces (a partial assignment), and each action chooses a color for the next unassigned province as long as the color isn't already assigned to one of its neighbors. In this way, all the leaves of the search tree are solutions (18 of them). (In the slide, in the interest of space, we've only shown the subtree rooted at a partial assignment to 3 variables.)

This is a fine way to solve this problem, and in general, it shows how powerful search is: we don't actually need any new machinery to solve this problem. But the question is: can we do better? First, the particular search tree that we drew had several dead ends; it would be better if we could detect these earlier. We will see in this lecture that the fact that the order in which we assign variables doesn't matter for correctness gives us the flexibility to dynamically choose a better ordering of the variables. That, with a bit of lookahead, will allow us to dramatically improve the efficiency over naive tree search. Second, it's clear that Tasmania's color can be any of the three colors regardless of the colors on the mainland. This is an instance of independence, and next time, we'll see how to exploit these observations systematically.

A new framework...
Variable-based models

Key idea: variables. Solutions to problems are assignments to variables (modeling). Decisions about variable ordering, etc. are chosen by algorithms.

With that motivation in mind, we now embark on our journey into variable-based models. Variable-based models is an umbrella term that includes constraint satisfaction problems (CSPs), Markov networks, Bayesian networks, hidden Markov models (HMMs), conditional random fields (CRFs), etc., which we'll get to later in the course. The term graphical models can be used interchangeably with variable-based models, and the term probabilistic graphical models (PGMs) generally encompasses both Markov networks (also called undirected graphical models) and Bayesian networks (directed graphical models).

The unifying theme is the idea of thinking about solutions to problems as assignments of values to variables (this is the modeling part). All the details about how to find the assignment (in particular, which variables to try first) are delegated to the algorithms. So the advantage of using variable-based models over state-based models is that it makes the algorithms do more of the work, freeing up more time for modeling.

An analogy

An apt analogy is programming languages. Solving a problem directly by implementing an ad-hoc program is like using assembly language. Solving a problem using state-based models is like using C. Solving a problem using variable-based models is like using Python. By moving to a higher-level language, you might forgo some ability to optimize manually, but the advantage is that (i) you can think at a higher level and (ii) there are more opportunities for optimizing automatically.

How to describe your solution

Question: How are you solving the problem? Bad answer: "I'm using MDPs." Good answer: "The states capture position and velocity."

While we're on the topic of programming languages and tools, it's worth making one final general remark: what is the best way to describe a solution (e.g., in writing up your final project)?
It's not very informative to simply say that you used SVMs, because you could have used SVMs in a thousand different ways, just like you could have used C++ in a thousand different ways. SVMs, MDPs, and C++ are just tools and frameworks that help you get the job done, but they should only play a supporting role in your description. It is much more useful to describe the model that you used to represent your real-world task. What are the features trying to capture about the problem (in the case of machine learning)? What are the states, actions, costs, etc. capturing (in the case of state-based models)? Once the tools (MDPs, etc.) become second nature, it is almost as if they are invisible. It's like when you master a language: you can think in it without constantly referring to the framework.

Roadmap: factor graphs, dynamic ordering, arc consistency, modeling.

Factor graph (example)

Three people, X_1, X_2, X_3, each vote B or R. X_1 and X_2 must agree; X_2 and X_3 tend to agree.

The most important concept for the next three weeks will be that of a factor graph. But before we define it formally, let us consider a simple example. Suppose there are three people, each of whom will vote for a color, red or blue. We know that Person 1 is pretty set on blue, and Person 3 is leaning red. Person 1 and Person 2 must have the same color, while Person 2 and Person 3 would weakly prefer to have the same color. We can model this as a factor graph consisting of three variables, X_1, X_2, X_3, each of which must be assigned red (R) or blue (B). We encode each constraint/preference as a factor, which assigns a non-negative number based on the assignment to a subset of the variables. We can either describe a factor as an explicit table, or via a function (e.g., [x_1 = x_2]).

f_1(x_1): R -> 0, B -> 1
f_2(x_1, x_2): RR -> 1, RB -> 0, BR -> 0, BB -> 1; equivalently f_2(x_1, x_2) = [x_1 = x_2]
f_3(x_2, x_3): RR -> 3, RB -> 2, BR -> 2, BB -> 3; equivalently f_3(x_2, x_3) = [x_2 = x_3] + 2
f_4(x_3): R -> 2, B -> 1

[demo]

Factor graph

Now we proceed to the general definition. A factor graph consists of a set of variables and a set of factors: (i) n variables X_1, ..., X_n, which are represented as circular nodes in the graphical notation; and (ii) m factors (also known as potentials) f_1, ..., f_m, which are represented as square nodes in the graphical notation. Each variable X_i can take on values in its domain Domain_i. Each factor f_j is a function that takes an assignment x to all the variables and returns a non-negative number representing how good that assignment is (from the factor's point of view). Usually, each factor will depend only on a small subset of the variables.
Definition: factor graph
Variables: X = (X_1, ..., X_n), where X_i ∈ Domain_i.
Factors (potentials): f_1, ..., f_m, with each f_j(X) ≥ 0.

Example: map coloring

Notation: we use [condition] to represent the indicator function, which is equal to 1 if the condition is true and 0 if not. Normally, this is written 1[condition], but we drop the 1 for succinctness.

Variables: X = (WA, NT, SA, Q, NSW, V, T), with Domain_i = {R, G, B}.
Factors: f_1(X) = [WA ≠ NT], f_2(X) = [WA ≠ SA], ...
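The map-coloring factor graph can be prototyped by brute force. The following sketch (assuming the standard Australia adjacency; the variable names are one possible encoding, not from the lecture's code) enumerates all assignments and counts the consistent ones, recovering the 18 solutions mentioned earlier:

```python
from itertools import product

# Standard Australia adjacency (an assumption; the lecture only shows the map).
PROVINCES = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
EDGES = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
         ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]
COLORS = ["R", "G", "B"]

def weight(assignment):
    """Product of indicator factors [X_u != X_v] over all adjacent pairs."""
    w = 1
    for u, v in EDGES:
        w *= int(assignment[u] != assignment[v])
    return w

# Brute force over all 3^7 assignments; consistent ones have weight 1.
solutions = [dict(zip(PROVINCES, colors))
             for colors in product(COLORS, repeat=len(PROVINCES))
             if weight(dict(zip(PROVINCES, colors))) == 1]
print(len(solutions))  # 18
```

Brute force is exponential, of course; the point of the rest of the lecture is to do better.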

Factors

The key aspect that makes factor graphs useful is that each factor f_j only depends on a subset of variables, called the scope. The arity of the factors is generally small (think 1 or 2).

Definition: scope and arity
The scope of a factor f_j is the set of variables it depends on. The arity of f_j is the number of variables in its scope. Unary factors have arity 1; binary factors have arity 2.

Example: map coloring. The scope of f_1(X) = [WA ≠ NT] is {WA, NT}, so f_1 is a binary factor.

Assignment weights (example)

(Factor tables as before: f_1(x_1): R -> 0, B -> 1; f_2(x_1, x_2): RR -> 1, RB -> 0, BR -> 0, BB -> 1; f_3(x_2, x_3): RR -> 3, RB -> 2, BR -> 2, BB -> 3; f_4(x_3): R -> 2, B -> 1.)

A factor graph specifies all the local interactions between variables. We wish to find a global solution. A solution is called an assignment, which specifies a value for each variable. Each assignment is associated with a weight, which is just the product of each factor evaluated on that assignment. Intuitively, each factor contributes to the weight. Note that any factor has veto power: if it returns zero, then the entire weight is irrecoverably zero.

In this setting, the maximum weight assignment is (B, B, R), which has a weight of 4. You can think of this as the optimal configuration or the most likely outcome.

x_1 x_2 x_3: weight
R R R: 0 * 1 * 3 * 2 = 0
R R B: 0 * 1 * 2 * 1 = 0
R B R: 0 * 0 * 2 * 2 = 0
R B B: 0 * 0 * 3 * 1 = 0
B R R: 1 * 0 * 3 * 2 = 0
B R B: 1 * 0 * 2 * 1 = 0
B B R: 1 * 1 * 2 * 2 = 4
B B B: 1 * 1 * 3 * 1 = 3

[demo]

Assignment weights

Definition: assignment weight. Formally, the weight of an assignment x is the product of all the factors applied to that assignment: Weight(x) = ∏_{j=1}^m f_j(x). Think of all the factors chiming in with their opinion of x; we multiply all these opinions together to get the global opinion. Our objective will be to find the maximum weight assignment. Note: do not confuse the term weight in the context of factor graphs with the weight vector in machine learning.
Each assignment x = (x_1, ..., x_n) has a weight:

Weight(x) = ∏_{j=1}^m f_j(x)

Objective: find the maximum weight assignment, arg max_x Weight(x).
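The weight table above can be reproduced mechanically. A minimal sketch, using the four factors of the voting example (names like f1, f2 are ours, chosen for readability):

```python
from itertools import product

# The four factors from the voting example.
f1 = {"R": 0, "B": 1}                  # Person 1 is set on blue
f2 = lambda x1, x2: int(x1 == x2)      # Persons 1 and 2 must agree
f3 = lambda x2, x3: int(x2 == x3) + 2  # Persons 2 and 3 tend to agree
f4 = {"R": 2, "B": 1}                  # Person 3 leans red

def weight(x1, x2, x3):
    """Weight(x) = product of all factors applied to the assignment."""
    return f1[x1] * f2(x1, x2) * f3(x2, x3) * f4[x3]

# Enumerate all 2^3 assignments and pick the maximum weight one.
best = max(product("RB", repeat=3), key=lambda x: weight(*x))
print(best, weight(*best))  # ('B', 'B', 'R') 4
```

Note how the zero entries of f1 and f2 veto six of the eight assignments, exactly as in the table.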

Example: map coloring

In the map coloring example, each factor only looks at the variables of two adjacent provinces and checks whether the colors are different (returning 1) or the same (returning 0). From a modeling perspective, this allows us to specify local interactions in a modular way. A global notion of consistency is achieved by multiplying together all the factors. Again note that the factors are multiplied (not added), which means that any factor has veto power: a single zero causes the entire weight to be zero.

Assignment: x = {WA: R, NT: G, SA: B, Q: R, NSW: G, V: R, T: G}. Weight: Weight(x) = 1 * 1 * 1 * 1 * 1 * 1 * 1 * 1 * 1 = 1.
Assignment: x' = {WA: R, NT: R, SA: B, Q: R, NSW: G, V: R, T: G}. Weight: Weight(x') = 0 (e.g., the factor [WA ≠ NT] returns 0).

Constraint satisfaction problems

Definition: constraint satisfaction problem (CSP). A CSP is a factor graph where all factors are constraints: f_j(x) ∈ {0, 1} for all j = 1, ..., m. The constraint is satisfied iff f_j(x) = 1.

Constraint satisfaction problems are just a special case of factor graphs where each of the factors returns either 0 or 1. Such a factor is a constraint, where 1 means the constraint is satisfied and 0 means it is not. In a CSP, all assignments have weight either 1 or 0. Assignments with weight 1 are called consistent (they satisfy all the constraints), and assignments with weight 0 are called inconsistent. Our goal is to find any consistent assignment (if one exists).

Definition: consistent assignment. An assignment x is consistent iff Weight(x) = 1 (i.e., all constraints are satisfied).

Summary so far

Factor graph (general): variables, factors, assignment weight. CSP (all or nothing): variables, constraints, consistent or inconsistent.

Roadmap: factor graphs, dynamic ordering, arc consistency, modeling.

Extending partial assignments

The general idea, as we've already seen in our search-based solution, is to work with partial assignments. We've defined the weight of a full assignment to be the product of all the factors applied to that assignment. We extend this definition to partial assignments: the weight of a partial assignment is the product of all the factors whose scope includes only assigned variables. For example, if only WA and NT are assigned, the weight is just the value of the single factor between them. When we assign a new variable a value, the weight of the new extended assignment is defined to be the original weight times all the factors that depend on the new variable and only previously assigned variables.

Dependent factors

Formally, we will use D(x, X_i) to denote this set of factors, which we call the dependent factors. For example, given the partial assignment x = {WA: R, NT: G}, if we then assign SA, D(x, SA) contains two factors: the one between WA and SA, and the one between NT and SA:

D({WA: R, NT: G}, SA) = {[WA ≠ SA], [NT ≠ SA]}

Definition: dependent factors. Let D(x, X_i) be the set of factors depending on X_i but not on unassigned variables.

Backtracking search

Algorithm: backtracking search
Backtrack(x, w, Domains):
  If x is a complete assignment: update best and return
  Choose unassigned VARIABLE X_i
  Order VALUES Domain_i of chosen X_i
  For each value v in that order:
    δ <- ∏_{f_j ∈ D(x, X_i)} f_j(x ∪ {X_i : v})
    If δ = 0: continue
    Domains' <- Domains via LOOKAHEAD
    Backtrack(x ∪ {X_i : v}, wδ, Domains')

Now we are ready to present the full backtracking search, which is a recursive procedure that takes a partial assignment x, its weight w, and the domains of all the variables, Domains = (Domain_1, ..., Domain_n).
If the assignment x is complete (all variables are assigned), then we update our statistics based on what we're trying to compute: we can increment the total number of assignments seen so far, check whether x is better than the current best assignment seen so far (based on w), etc. (For CSPs, where all the weights are 0 or 1, we can stop as soon as we find one consistent assignment, just as in DFS for search problems.)

Otherwise, we choose an unassigned variable X_i. Given the choice of X_i, we choose an ordering of the values of X_i. Next, we iterate through the values v ∈ Domain_i in that order. For each value v, we compute δ, the product of the dependent factors D(x, X_i); recall this is the multiplicative change in weight from assignment x to the new assignment x ∪ {X_i : v}. If δ = 0, a constraint is violated, and we can ignore this partial assignment completely, because multiplying in more factors later on cannot make the weight non-zero. We then perform lookahead, removing values from the domains Domains to produce Domains'. This is not required (we can just use Domains' = Domains), but it can make our algorithm run faster. (We'll see one type of lookahead in the next slide.) Finally, we recurse on the new partial assignment x ∪ {X_i : v}, the new weight wδ, and the new domains Domains'.

If we choose an unassigned variable according to an arbitrary fixed ordering, order the values arbitrarily, and do not perform lookahead, we get the basic tree search algorithm that we would have used if we were thinking in terms of a search problem. We will next start to improve the efficiency by exploiting properties of the CSP.
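As a concrete sketch, here is a bare-bones backtracking search (fixed variable order, no lookahead, no heuristics; the function and variable names are ours), run on the voting example to find the maximum weight assignment:

```python
def backtrack(x, w, domains, variables, dependent_factors, best):
    """Recursive backtracking over partial assignments.

    dependent_factors(x, var) returns the factors (each a function of the
    partial assignment) depending on var and only assigned variables.
    best tracks the maximum weight assignment found so far.
    """
    if len(x) == len(variables):
        if w > best["weight"]:
            best["weight"], best["assignment"] = w, dict(x)
        return
    var = next(v for v in variables if v not in x)  # arbitrary fixed order
    for value in domains[var]:
        x[var] = value
        delta = 1
        for f in dependent_factors(x, var):
            delta *= f(x)
        if delta > 0:  # prune as soon as some factor returns zero
            backtrack(x, w * delta, domains, variables, dependent_factors, best)
        del x[var]

# The three-variable voting example, with factors attached to the variable
# that completes their scope under the fixed order x1, x2, x3.
variables = ["x1", "x2", "x3"]
domains = {v: ["R", "B"] for v in variables}

def dependent_factors(x, var):
    fs = {"x1": [lambda a: {"R": 0, "B": 1}[a["x1"]]],
          "x2": [lambda a: int(a["x1"] == a["x2"])],
          "x3": [lambda a: int(a["x2"] == a["x3"]) + 2,
                 lambda a: {"R": 2, "B": 1}[a["x3"]]]}
    return fs[var]

best = {"weight": 0, "assignment": None}
backtrack({}, 1, domains, variables, dependent_factors, best)
print(best)  # {'weight': 4, 'assignment': {'x1': 'B', 'x2': 'B', 'x3': 'R'}}
```

Even without heuristics, the δ = 0 check already prunes the six vetoed assignments before they are completed.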

Lookahead: forward checking (example)

First, we will look at forward checking, which is a way to perform one-step lookahead. The idea is that as soon as we assign a variable (e.g., WA = R), we can pre-emptively remove inconsistent values from the domains of neighboring variables (i.e., those that share a factor). If we keep on doing this and get to a point where some variable has an empty domain, then we can stop and backtrack immediately, since there is no possible way to assign a value to that variable consistently with the previous partial assignment. In this example, after a province is assigned blue, we remove inconsistent values (blue) from a neighbor's domain, emptying it. At this point, we need not even recurse further, since there is no way to extend the current assignment; we would instead try assigning that province red.

Inconsistent - prune!

Lookahead: forward checking

Key idea: forward checking (one-step lookahead). After assigning a variable X_i, eliminate inconsistent values from the domains of X_i's neighbors. If any domain becomes empty, don't recurse. When unassigning X_i, restore the neighbors' domains.

When unassigning a variable, remember to restore the domains of its neighboring variables! The simplest way to implement this is to make a copy of the domains of the variables before performing forward checking. This is foolproof, but can be quite slow. A fancier solution is to keep a counter c_iv (initialized to zero) for each variable X_i and value v in its domain. When we remove a value v from the domain of X_i, we increment c_iv; a value is deemed removed when c_iv > 0. When we want to un-remove a value, we decrement c_iv. This way, the remove operation is reversible, which is important since a value might get removed multiple times due to multiple neighboring variables. Later, we will look at arc consistency, which will allow us to look ahead even more.
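The copy-the-domains variant of forward checking can be sketched as follows (Australia adjacency assumed; the function name is ours). On the map coloring example, two assignments pin SA down to a single color, and a third empties it:

```python
import copy

# Standard Australia adjacency (an assumption; any constraint graph works).
NEIGHBORS = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}

def forward_check(domains, var, value):
    """Assign var = value, then prune that value from each neighbor's domain.

    Returns the new domains, or None if some neighbor's domain empties (the
    caller should then not recurse). Copying the domains makes restoring on
    unassignment trivial: just keep using the old copy.
    """
    new = copy.deepcopy(domains)
    new[var] = {value}
    for n in NEIGHBORS[var]:
        new[n].discard(value)
        if not new[n]:
            return None
    return new

domains = {v: {"R", "G", "B"} for v in NEIGHBORS}
domains = forward_check(domains, "WA", "R")  # NT and SA lose R
domains = forward_check(domains, "NT", "G")  # SA loses G, leaving {"B"}
print(domains["SA"])                         # {'B'}
print(forward_check(domains, "NSW", "B"))    # None: SA's domain would empty
```

The counter-based scheme described above avoids the deep copies at the price of more bookkeeping.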
Choosing an unassigned variable

Now let us look at the problem of choosing an unassigned variable. Intuitively, we want to choose the variable which is most constrained, that is, the variable whose domain has the fewest remaining valid values (based on forward checking), because those variables yield smaller branching factors.

Which variable should we assign next?

Key idea: most constrained variable. Choose the variable that has the fewest consistent values. In this example: the province whose domain has only one value left.

Order values of a selected variable

What values should we try for the selected variable?

Once we've selected an unassigned variable X_i, we need to figure out which order to try its values in. The principle we will follow is to first try values which are less constraining. There are several ways to measure how constraining a value is, but for the sake of concreteness, here is the heuristic we'll use: just count the number of values remaining in the domains of all neighboring variables (those that share a factor with X_i). In the example, coloring the selected province red leaves 2 valid values in each of its three uncolored neighbors (6 consistent values in total), while coloring it blue leaves only 1, 1, and 2 (4 consistent values in total). Therefore, red is preferable. The intuition is that we want values which impose the fewest constraints on the neighbors, so that we are more likely to find a consistent assignment.

Key idea: least constrained value. Order the values of the selected X_i by decreasing number of consistent values of neighboring variables.

When to fail?

Most constrained variable (MCV): we must assign every variable, so if we are going to fail, fail early, which means more pruning.
Least constrained value (LCV): we only need to choose some value, so choose the value most likely to lead to a solution.

The most constrained variable and least constrained value heuristics might seem conflicting, but there is a good reason for this superficial difference. An assignment involves every variable, whereas for each variable we only need to choose some value. Therefore, for variables, we want to detect failures early on if possible (because we'll have to confront those variables sooner or later), but for values we want to steer away from possible failures because we might not have to consider the other values at all.

When do these heuristics help?
Most constrained variable: useful when at least some factors are constraints. It helps when finding maximum weight assignments in any factor graph as long as some factors are constraints, because we only save work if we can prune away assignments with zero weight, and this only happens with violated constraints.

Least constrained value: useful when all factors are constraints (CSPs). Ordering the values makes sense if we are just going to find the first consistent assignment. If there are any non-constraint factors, then we would need to look at all consistent assignments to see which one has the maximum weight. Analogy: think about when depth-first search is guaranteed to find the minimum cost path.

We need lookahead (e.g., forward checking) to actually prune domains!
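Both heuristics are easy to state over a domains dictionary. A minimal sketch (function names and the tiny example are ours, not from the lecture):

```python
def most_constrained_variable(domains, assigned):
    """Pick the unassigned variable with the fewest remaining values."""
    unassigned = [v for v in domains if v not in assigned]
    return min(unassigned, key=lambda v: len(domains[v]))

def least_constrained_value(var, domains, neighbors):
    """Order var's values by the total number of consistent values they
    leave in neighboring domains (descending), for difference constraints."""
    def slack(value):
        return sum(sum(1 for w in domains[n] if w != value)
                   for n in neighbors[var])
    return sorted(domains[var], key=slack, reverse=True)

# Tiny example: SA is forced, so MCV picks it first; for Q, taking B would
# wipe out SA's only value, so LCV tries R before B.
domains = {"SA": {"B"}, "Q": {"R", "B"}, "T": {"R", "G", "B"}}
neighbors = {"SA": ["Q"], "Q": ["SA"], "T": []}
print(most_constrained_variable(domains, assigned=set()))  # SA
print(least_constrained_value("Q", domains, neighbors))    # ['R', 'B']
```

Note that both heuristics read the current (pruned) domains, which is why lookahead is needed for them to have any effect.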

Review: backtracking search

(Backtrack(x, w, Domains) as before: choose an unassigned variable, order its values, compute δ over the dependent factors, skip if δ = 0, perform lookahead, and recurse.)

Roadmap: factor graphs, dynamic ordering, arc consistency, modeling.

Arc consistency

Idea: eliminate values from domains to reduce branching.

Example: numbers.
Before enforcing arc consistency on X_i:
X_i: Domain_i = {1, 2, 3, 4, 5}
X_j: Domain_j = {1, 2, 3, 4, 5}
f_1(X) = [X_i + X_j = 4]
f_2(X) = [X_j ≤ 2]
After enforcing arc consistency on X_i: Domain_i = {2, 3}.

Now let us return to the issue of using lookahead to eliminate values from the domains of unassigned variables. One motivation is that smaller domains lead to smaller branching factors, which makes search faster. A second motivation is that since the domain sizes are used by the dynamic ordering heuristics (most constrained variable and least constrained value), we can hope to choose better orderings with domains that more accurately reflect which values are actually possible.

We've already seen forward checking as a simple way of using lookahead to prune the domains of unassigned variables. Shortly, we will introduce AC-3, which is forward checking without brakes. To build up to that, we need to introduce the idea of arc consistency. The idea behind enforcing arc consistency is to look at the factors that involve just two variables X_i and X_j and rule out any values in the domain of X_i which are obviously bad, without even looking at the other variables. To enforce arc consistency on X_i with respect to X_j, we go through each of the values in the domain of X_i and remove it if there is no value in the domain of X_j that is consistent with it.
For example, X_i = 4 is ruled out because no value X_j ∈ {1, 2, 3, 4, 5} satisfies X_i + X_j = 4. [whiteboard: bipartite graph]

Arc consistency

Definition: arc consistency. A variable X_i is arc consistent with respect to X_j if for each x_i ∈ Domain_i, there exists x_j ∈ Domain_j such that f({X_i : x_i, X_j : x_j}) ≠ 0 for all factors f whose scope contains X_i and X_j.

Algorithm: enforce arc consistency. EnforceArcConsistency(X_i, X_j): remove values from Domain_i to make X_i arc consistent with respect to X_j.

AC-3 (example)
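EnforceArcConsistency can be sketched directly from the definition. Below, we assume the unary factor [X_j ≤ 2] has already been applied to Domain_j, and then enforce arc consistency on X_i with respect to X_j under [X_i + X_j = 4] (the function name is ours):

```python
def enforce_arc_consistency(domains, xi, xj, binary_factors):
    """Remove from Domain_i any value with no supporting value in Domain_j.

    binary_factors is a list of functions f(vi, vj) >= 0 whose scope is
    {xi, xj}; a value vi survives iff some vj makes every factor non-zero.
    """
    domains[xi] = {vi for vi in domains[xi]
                   if any(all(f(vi, vj) > 0 for f in binary_factors)
                          for vj in domains[xj])}

# Numbers example: Domain_j already pruned to {1, 2} by [X_j <= 2].
domains = {"Xi": {1, 2, 3, 4, 5}, "Xj": {1, 2}}
enforce_arc_consistency(domains, "Xi", "Xj",
                        [lambda vi, vj: int(vi + vj == 4)])
print(domains["Xi"])  # {2, 3}
```

Only X_i's domain changes; enforcing arc consistency on X_i with respect to X_j never touches Domain_j.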

AC-3

Forward checking: when assigning X_j : x_j, set Domain_j = {x_j} and enforce arc consistency on all neighbors X_i with respect to X_j.
AC-3: repeatedly enforce arc consistency on all variables.

Algorithm: AC-3
Add X_j to the set.
While the set is non-empty:
  Remove any X_j from the set.
  For all neighbors X_i of X_j:
    Enforce arc consistency on X_i w.r.t. X_j.
    If Domain_i changed, add X_i to the set.

In fact, we already saw a limited version of arc consistency. In forward checking, when we assign a variable X_i to a value, we are actually enforcing arc consistency on the neighbors of X_i with respect to X_i. Why stop there? AC-3 doesn't. In AC-3, we start by enforcing arc consistency on the neighbors of X_i (forward checking). But then, if the domain of any neighbor X_j changes, we enforce arc consistency on the neighbors of X_j, and so on. In the example, after we assign the first province red, performing AC-3 is the same as forward checking. But after the second province is assigned green, AC-3 goes wild and eliminates all but one value from each of the variables on the mainland.

Note that, unlike BFS graph search, a variable can be added to the set multiple times because its domain can be updated more than once. More specifically, we might enforce arc consistency on (X_i, X_j) up to D times in the worst case, where D = max_{1 ≤ i ≤ n} |Domain_i| is the size of the largest domain. There are at most m different pairs (X_i, X_j), and each call to enforce arc consistency takes O(D^2) time. Therefore, the running time of this algorithm is O(ED^3) in the very worst case, where E is the number of edges (usually, it's much better than this).

Limitations of AC-3

Ideally, if there were no solutions, AC-3 would remove all values from some domain. But AC-3 isn't always that effective. In the best case, if there is no way to consistently assign values to all the variables, then running AC-3 will detect that there is no solution by emptying out a domain. However, this is not always the case, as the example above shows.
Locally, everything looks fine, even though there is no global solution. No actual solutions exist, but AC-3 doesn't do anything! Intuition: if we look locally at the graph, nothing is blatantly wrong.

Advanced: we could generalize arc consistency to fix this problem. Instead of looking at every 2 variables and the factors between them, we could look at every subset of k variables and check that there is a way to consistently assign values to all k, taking into account all the factors involving those k variables. However, there is a substantial cost to doing this (the running time is exponential in k in the worst case), so generally arc consistency (k = 2) is good enough.

Summary

Basic template: backtracking search on partial assignments.
Dynamic ordering: most constrained variable (fail early), least constrained value (try to succeed).
Lookahead: forward checking (enforces arc consistency on neighbors), AC-3 (enforces arc consistency on neighbors, their neighbors, and so on).

Roadmap: factor graphs, dynamic ordering, arc consistency, modeling.
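The AC-3 procedure summarized above can be sketched for difference constraints on the Australia map. Here we assume, for illustration, that WA has been assigned R and NT has been assigned G (the lecture's figure uses two such assignments; these particular provinces are our choice), after which AC-3 pins down the whole mainland:

```python
def ac3(domains, neighbors):
    """Repeatedly enforce arc consistency until no domain changes.
    Assumes every factor is a difference constraint [X_i != X_j]."""
    worklist = list(domains)
    while worklist:
        xj = worklist.pop()
        for xi in neighbors[xj]:
            # Keep vi only if some vj in Domain_j differs from it.
            pruned = {vi for vi in domains[xi]
                      if any(vi != vj for vj in domains[xj])}
            if pruned != domains[xi]:
                domains[xi] = pruned
                worklist.append(xi)  # xi changed: revisit its neighbors

NEIGHBORS = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
             "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
             "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}

domains = {v: {"R", "G", "B"} for v in NEIGHBORS}
domains["WA"], domains["NT"] = {"R"}, {"G"}
ac3(domains, NEIGHBORS)
print({v: sorted(d) for v, d in domains.items()})
# SA -> {B}, Q -> {R}, NSW -> {G}, V -> {R}; Tasmania is untouched
```

Two assignments plus propagation determine all five remaining mainland provinces, while Tasmania keeps all three colors, matching the independence observation from earlier.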

Example: LSAT question

Three sculptures (A, B, C) are to be exhibited in rooms 1, 2 of an art gallery. The exhibition must satisfy the following conditions: Sculptures A and B cannot be in the same room. Sculptures B and C must be in the same room. Room 2 can only hold one sculpture. [demo]

Example: event scheduling (section)

Setup: we have E events and T time slots. Each event e must be put in exactly one time slot. Each time slot t can have at most one event. Event e is allowed in time slot t only if (e, t) ∈ A.

Consider a simple scheduling problem, where we have E events that we want to schedule into T time slots. There are three families of requirements: (i) every event must be scheduled into a time slot; (ii) every time slot can have at most one event (zero is possible); and (iii) we are given a fixed set A of allowed (event, time slot) pairs. There are in general multiple ways to cast a problem as a CSP, and the purpose of this example is to show two reasonable ways to do it.

Example: event scheduling (section)

CSP formulation 1:
Variables: for each event e, X_e ∈ {1, ..., T}.
Constraints (only one event per time slot): for each pair of events e ≠ e', enforce [X_e ≠ X_e'].
Constraints (only allowed times): for each event e, enforce [(e, X_e) ∈ A].

The first formulation is perhaps the more natural one. We make a variable X_e for each event, whose value will be the time slot that the event is scheduled into. Since each variable can only take on one value, we automatically satisfy the requirement that every event is put in exactly one time slot. However, we need to make sure no two events end up in the same time slot. To do this, we can create a binary constraint between every pair of distinct event variables X_e and X_e' that enforces their values to be different (X_e ≠ X_e').
Finally, to deal with the requirement that an event is scheduled only in allowed time slots, we just need to add a unary constraint for each variable saying that the time slot X_e that's chosen for that event is allowed. Note that we end up with E variables with domain size T, and O(E^2) binary constraints.

CSP formulation 2: Variables: for each time slot t, Y_t ∈ {1, ..., E} ∪ {∅}. Constraints (each event is scheduled exactly once): for each event e, enforce [Y_t = e for exactly one t]. Constraints (only schedule allowed times): for each time slot t, enforce [Y_t = ∅ or (Y_t, t) ∈ A].
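The constraints of formulation 2 can likewise be sketched as a checker over assignments to the Y_t variables. The instance below is invented for illustration, and Python's None stands in for ∅:

```python
from itertools import product

# Sketch of CSP formulation 2 on a tiny invented instance:
# one variable Y_t per time slot, valued in {events} plus None (the empty slot).
E, T = 3, 3
A = {(0, 0), (0, 1), (1, 1), (1, 2), (2, 0), (2, 2)}  # allowed (event, slot) pairs (made up)

def consistent(y):
    """y[t] = Y_t, the event placed in slot t, or None."""
    # [Y_t = e for exactly one t], for each event e
    if any(sum(1 for t in range(T) if y[t] == e) != 1 for e in range(E)):
        return False
    # [Y_t = None or (Y_t, t) in A], for each time slot t
    return all(y[t] is None or (y[t], t) in A for t in range(T))

solutions = [y for y in product([None] + list(range(E)), repeat=T) if consistent(y)]
print(solutions)  # the same schedules, now viewed from the time slots' perspective
```

Note that the "exactly once" check inspects the whole tuple y, which is precisely the E T-ary constraints discussed next.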

Alternatively, we can take the perspective of the time slots and ask which event was scheduled in each time slot. So we introduce a variable Y_t for each time slot t which takes on a value equal to one of the events or none (∅). Unlike the first formulation, we don't get for free the requirement that each event is put in exactly one time slot. To add it, we introduce E constraints, one for each event. Each constraint needs to depend on all T variables and check that the number of time slots t which have event e assigned to that slot (Y_t = e) is exactly 1. On the other hand, the requirement that each time slot has at most one event assigned to it we get for free, since each variable takes on exactly one value. Finally, we add T constraints, one for each time slot t, enforcing that if there was an event scheduled there (Y_t ≠ ∅), then it better be allowed according to A. With this formulation, we have T variables with domain size E + 1, and E T-ary constraints. We will show shortly that each T-ary constraint can be converted into O(T) binary constraints with O(T) auxiliary variables. Therefore, the resulting formulation has T variables with domain size E + 1, plus O(ET) auxiliary variables with constant domain size and O(ET) binary constraints. Which one is better? Since E ≤ T is required for the existence of a consistent solution, the first formulation is better. If the problem were modified so that not all events had to be scheduled and E ≥ T, then the second formulation would be better.

Arity

Modeling: allow arity of factors to be arbitrary. Algorithms: assume arity of all factors is 1 or 2. Later: reduction from n to 2 with auxiliary variables.

When we are modeling with factor graphs, we would like the factors to have any arity (depend on any number of variables). This allows us to, for example, require that at least one of the provinces is colored red.
However, from an algorithms and implementation perspective, it is often useful to just think about unary and binary factors. For example, arc consistency is defined with respect to binary factors. It appears that there is a tradeoff between modeling expressivity and algorithmic efficiency, but this is actually not a real tradeoff, since we can reduce the general arity case to the unary-binary case.

N-ary constraints

Variables: X_1, X_2, X_3, X_4 ∈ {0, 1}. Factor: [X_1 ∨ X_2 ∨ X_3 ∨ X_4]. Examples: Weight({X_1: 0, X_2: 0, X_3: 0, X_4: 0}) = 0; Weight({X_1: 0, X_2: 1, X_3: 0, X_4: 0}) = 1; Weight({X_1: 0, X_2: 1, X_3: 0, X_4: 1}) = 1. Problem: algorithms so far only take unary/binary factors...

Consider the simple problem: given n variables X_1, ..., X_n, where each X_i ∈ {0, 1}, impose the requirement that at least one X_i = 1. The case of n = 4 is shown in the slide.

N-ary constraints: first attempt

Key idea: auxiliary variables. Auxiliary variables hold intermediate computation. Factors: Initialization: [A_0 = 0]. Processing: [A_i = A_{i-1} ∨ X_i]. Final output: [A_4 = 1]. Still have factors involving 3 variables...
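The auxiliary-variable chain can be checked mechanically: an assignment to X_1, ..., X_n can be extended to a consistent assignment of A_0, ..., A_n exactly when the original n-ary OR holds. A small brute-force sketch (for n = 4):

```python
from itertools import product

# Verify the first-attempt reduction: the factors [A_0 = 0],
# [A_i = A_{i-1} or X_i], and [A_n = 1] are jointly satisfiable
# exactly when at least one X_i equals 1.
n = 4

def chain_satisfiable(xs):
    """Is there an assignment to A_0..A_n consistent with all chain factors?"""
    for a in product([0, 1], repeat=n + 1):
        ok = (a[0] == 0                                                    # initialization
              and all(a[i] == (a[i - 1] or xs[i - 1])                      # processing
                      for i in range(1, n + 1))
              and a[n] == 1)                                               # final output
        if ok:
            return True
    return False

for xs in product([0, 1], repeat=n):
    assert chain_satisfiable(xs) == (max(xs) == 1)  # matches the n-ary OR factor
print("chain formulation agrees with the 4-ary OR on all 16 assignments")
```

The loop over A assignments mimics a declarative CSP solver; in practice the A_i values are forced deterministically by the recurrence, which is what makes the reduction cheap.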

The key idea is to break down the computation of the n-ary constraint into n simple steps. As a first attempt, let's introduce an auxiliary variable A_i for i = 1, ..., n which represents the OR of variables X_1, ..., X_i. Then we can write a simple recurrence that updates A_i from A_{i-1}. The constraint [A_n = 1] enforces that the OR of all the variables is 1. It is important to note that while our intuitions are based on procedurally computing the A_i's, one after the other, these computations are actually represented declaratively as constraints in the CSP. We have replaced the massive n-ary constraint with ternary constraints (each depending on A_i, A_{i-1}, X_i). Can we replace the ternary constraints with unary and binary constraints?

N-ary constraints: second attempt

Key idea: pack A_{i-1} and A_i into one variable B_i. Variables: B_i is the (pre, post) pair from processing X_i. Factors: Initialization: [B_1[1] = 0]. Processing: [B_i[2] = B_i[1] ∨ X_i]. Final output: [B_4[2] = 1]. Consistency: [B_{i-1}[2] = B_i[1]].

The key idea to turn the ternary constraint [A_i = A_{i-1} ∨ X_i] into a binary constraint is to merge A_{i-1} and A_i into one variable B_i. The variable B_i will represent a pair of booleans, where B_i[1] represents A_{i-1} and B_i[2] represents A_i. Now, the ternary constraint is just a binary constraint: [B_i[2] = B_i[1] ∨ X_i]! However, note that A_{i-1} is represented twice, both in B_i and B_{i-1}. So we need to add another binary constraint to enforce that the two are equal: [B_{i-1}[2] = B_i[1]]. The initialization and final output factors are the same as before.

Example: relation extraction

Motivation: build a question-answering system ("Which US presidents played the guitar?"). Prerequisite: learn knowledge by reading the web. Systems: [NELL (CMU)] [OpenIE (UW)]

Now let's look at a different problem.
Some background which is unrelated to CSPs: A major area of research in natural language processing is relation extraction, the task of building systems that can process the enormous amount of unstructured text on the web and populate a structured knowledge base, so that we can answer complex questions by querying the knowledge base.

Example: relation extraction

Input (hundreds of millions of web pages): "Barack Obama is the 44th and current President of the United States..." Output (database of relations): EmployedBy(BarackObama, UnitedStates), Profession(BarackObama, President), ...

Example: relation extraction

Typical predictions of classifiers: BornIn(BarackObama, UnitedStates) 0.9; BornIn(BarackObama, Kenya) 0.6; BornIn(JohnLennon, guitar) 0.7; Type(guitar, instrument). How do we reconcile conflicting predictions?

State-of-the-art methods typically use machine learning, casting relation extraction as a classification problem. However, relation extraction is a very difficult problem, and even the best systems today often fail, producing nonsensical facts. A key observation is that these classification decisions are not independent, and we have some prior knowledge on how they should be related. For example, you can't be born in two places (BornIn(BarackObama, UnitedStates) versus BornIn(BarackObama, Kenya)), and you also can't be born in an instrument (BornIn(JohnLennon, guitar) versus Type(guitar, instrument)) — not usually, anyway.

General framework: classification decisions Y_1, Y_2, Y_3, Y_4, ... are generally related. Unary factors: local classifiers (provide evidence), exp(w · φ(x_i) Y_i). Binary factors: enforce that outputs are consistent, [Y_i consistent with Y_j].

To operationalize this intuition, we can leverage factor graphs. Think of each classification decision as a variable, which can take on 1 or 0 (assume binary classification for now). We have a unary factor which specifies the contribution of the classifier. Recall that linear classifiers return a score w · φ(x_i), which is a real number. Factors must be non-negative, so it's typical to exponentiate the score. We can add binary factors between pairs of classification decisions which are related in some way (e.g., [BornIn(BarackObama, UnitedStates) + BornIn(BarackObama, Kenya) ≤ 1]). The factors do not have to be hard constraints; they can instead encode soft preferences (e.g., returning weight 0.01 instead of 0).
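As a concrete sketch of this idea, we can brute-force the maximum weight assignment for a tiny instance. The numbers and the use of the classifier probabilities directly as unary weights are illustrative simplifications (the lecture's unary factors are exponentiated scores), and Type(guitar, instrument) is treated as a known fact rather than a variable:

```python
from itertools import product

# Three binary decisions Y_i from the slide's example.
names = ["BornIn(Obama,US)", "BornIn(Obama,Kenya)", "BornIn(Lennon,guitar)"]
unary = [0.9, 0.6, 0.7]          # classifier confidence that Y_i = 1 (illustrative)
conflicts = [(0, 1), (2, None)]  # (i, j): Y_i and Y_j shouldn't both hold;
                                 # None marks a known fact (guitar is an instrument)

def weight(y):
    w = 1.0
    for i, p in enumerate(unary):
        w *= p if y[i] == 1 else 1 - p          # unary factor: evidence from classifier
    for i, j in conflicts:
        if y[i] == 1 and (j is None or y[j] == 1):
            w *= 0.01                           # soft binary factor, not a hard constraint
    return w

best = max(product([0, 1], repeat=3), key=weight)
print(dict(zip(names, best)))  # → {'BornIn(Obama,US)': 1, 'BornIn(Obama,Kenya)': 0, 'BornIn(Lennon,guitar)': 0}
```

Even though BornIn(Obama, Kenya) and BornIn(Lennon, guitar) are individually likely under the classifiers (0.6 and 0.7), the globally best assignment rejects both, because the soft consistency factors outweigh the local evidence.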
Once we have a CSP, we can ask for the maximum weight assignment, which takes into account all the information available and reasons about it globally.

Summary

Factor graphs: modeling framework (variables, factors). Key property: ordering decisions pushed to algorithms. Algorithms: backtracking search + dynamic ordering + lookahead. Modeling: lots of possibilities!


Constraint satisfaction search. Combinatorial optimization search. CS 1571 Introduction to AI Lecture 8 Constraint satisfaction search. Combinatorial optimization search. Milos Hauskrecht milos@cs.pitt.edu 539 Sennott Square Constraint satisfaction problem (CSP) Objective:

More information

DIT411/TIN175, Artificial Intelligence. Peter Ljunglöf. 30 January, 2018

DIT411/TIN175, Artificial Intelligence. Peter Ljunglöf. 30 January, 2018 DIT411/TIN175, Artificial Intelligence Chapter 7: Constraint satisfaction problems CHAPTER 7: CONSTRAINT SATISFACTION PROBLEMS DIT411/TIN175, Artificial Intelligence Peter Ljunglöf 30 January, 2018 1 TABLE

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Constraint Satisfaction Problems Marc Toussaint University of Stuttgart Winter 2015/16 (slides based on Stuart Russell s AI course) Inference The core topic of the following lectures

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems Robert Platt Northeastern University Some images and slides are used from: 1. AIMA What is a CSP? The space of all search problems states and actions are atomic goals are

More information

Constraint Satisfaction Problems: A Deeper Look

Constraint Satisfaction Problems: A Deeper Look Constraint Satisfaction Problems: A Deeper Look The last problem set covered the topic of constraint satisfaction problems. CSP search and solution algorithms are directly applicable to a number of AI

More information

Constraint satisfaction search

Constraint satisfaction search CS 70 Foundations of AI Lecture 6 Constraint satisfaction search Milos Hauskrecht milos@cs.pitt.edu 539 Sennott Square Search problem A search problem: Search space (or state space): a set of objects among

More information

4.1 Review - the DPLL procedure

4.1 Review - the DPLL procedure Applied Logic Lecture 4: Efficient SAT solving CS 4860 Spring 2009 Thursday, January 29, 2009 The main purpose of these notes is to help me organize the material that I used to teach today s lecture. They

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems In which we see how treating states as more than just little black boxes leads to the invention of a range of powerful new search methods and a deeper understanding of

More information

CS 188: Artificial Intelligence Spring Announcements

CS 188: Artificial Intelligence Spring Announcements CS 188: Artificial Intelligence Spring 2006 Lecture 4: CSPs 9/7/2006 Dan Klein UC Berkeley Many slides over the course adapted from either Stuart Russell or Andrew Moore Announcements Reminder: Project

More information

What is Search For? CS 188: Ar)ficial Intelligence. Constraint Sa)sfac)on Problems Sep 14, 2015

What is Search For? CS 188: Ar)ficial Intelligence. Constraint Sa)sfac)on Problems Sep 14, 2015 CS 188: Ar)ficial Intelligence Constraint Sa)sfac)on Problems Sep 14, 2015 What is Search For? Assump)ons about the world: a single agent, determinis)c ac)ons, fully observed state, discrete state space

More information

Recap: Search Problem. CSE 473: Artificial Intelligence. Space of Search Strategies. Constraint Satisfaction. Example: N-Queens 4/9/2012

Recap: Search Problem. CSE 473: Artificial Intelligence. Space of Search Strategies. Constraint Satisfaction. Example: N-Queens 4/9/2012 CSE 473: Artificial Intelligence Constraint Satisfaction Daniel Weld Slides adapted from Dan Klein, Stuart Russell, Andrew Moore & Luke Zettlemoyer Recap: Search Problem States configurations of the world

More information

Scribe: Virginia Williams, Sam Kim (2016), Mary Wootters (2017) Date: May 22, 2017

Scribe: Virginia Williams, Sam Kim (2016), Mary Wootters (2017) Date: May 22, 2017 CS6 Lecture 4 Greedy Algorithms Scribe: Virginia Williams, Sam Kim (26), Mary Wootters (27) Date: May 22, 27 Greedy Algorithms Suppose we want to solve a problem, and we re able to come up with some recursive

More information

6.034 Notes: Section 3.1

6.034 Notes: Section 3.1 6.034 Notes: Section 3.1 Slide 3.1.1 In this presentation, we'll take a look at the class of problems called Constraint Satisfaction Problems (CSPs). CSPs arise in many application areas: they can be used

More information

Week 8: Constraint Satisfaction Problems

Week 8: Constraint Satisfaction Problems COMP3411/ 9414/ 9814: Artificial Intelligence Week 8: Constraint Satisfaction Problems [Russell & Norvig: 6.1,6.2,6.3,6.4,4.1] COMP3411/9414/9814 18s1 Constraint Satisfaction Problems 1 Outline Constraint

More information

Week - 04 Lecture - 01 Merge Sort. (Refer Slide Time: 00:02)

Week - 04 Lecture - 01 Merge Sort. (Refer Slide Time: 00:02) Programming, Data Structures and Algorithms in Python Prof. Madhavan Mukund Department of Computer Science and Engineering Indian Institute of Technology, Madras Week - 04 Lecture - 01 Merge Sort (Refer

More information

CS 188: Artificial Intelligence Spring Announcements

CS 188: Artificial Intelligence Spring Announcements CS 188: Artificial Intelligence Spring 2010 Lecture 4: A* wrap-up + Constraint Satisfaction 1/28/2010 Pieter Abbeel UC Berkeley Many slides from Dan Klein Announcements Project 0 (Python tutorial) is due

More information

Constraint satisfaction problems. CS171, Winter 2018 Introduction to Artificial Intelligence Prof. Richard Lathrop

Constraint satisfaction problems. CS171, Winter 2018 Introduction to Artificial Intelligence Prof. Richard Lathrop Constraint satisfaction problems CS171, Winter 2018 Introduction to Artificial Intelligence Prof. Richard Lathrop Constraint Satisfaction Problems What is a CSP? Finite set of variables, X 1, X 2,, X n

More information

CS 188: Artificial Intelligence

CS 188: Artificial Intelligence CS 188: Artificial Intelligence CSPs II + Local Search Prof. Scott Niekum The University of Texas at Austin [These slides based on those of Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley.

More information

6.001 Notes: Section 6.1

6.001 Notes: Section 6.1 6.001 Notes: Section 6.1 Slide 6.1.1 When we first starting talking about Scheme expressions, you may recall we said that (almost) every Scheme expression had three components, a syntax (legal ways of

More information

Integrating Probabilistic Reasoning with Constraint Satisfaction

Integrating Probabilistic Reasoning with Constraint Satisfaction Integrating Probabilistic Reasoning with Constraint Satisfaction IJCAI Tutorial #7 Instructor: Eric I. Hsu July 17, 2011 http://www.cs.toronto.edu/~eihsu/tutorial7 Getting Started Discursive Remarks. Organizational

More information

CHAPTER 3. Register allocation

CHAPTER 3. Register allocation CHAPTER 3 Register allocation In chapter 1 we simplified the generation of x86 assembly by placing all variables on the stack. We can improve the performance of the generated code considerably if we instead

More information

1 Achieving IND-CPA security

1 Achieving IND-CPA security ISA 562: Information Security, Theory and Practice Lecture 2 1 Achieving IND-CPA security 1.1 Pseudorandom numbers, and stateful encryption As we saw last time, the OTP is perfectly secure, but it forces

More information

Lecture 3: Linear Classification

Lecture 3: Linear Classification Lecture 3: Linear Classification Roger Grosse 1 Introduction Last week, we saw an example of a learning task called regression. There, the goal was to predict a scalar-valued target from a set of features.

More information

Comments about assign 1. Quick search recap. Constraint Satisfaction Problems (CSPs) Search uninformed BFS, DFS, IDS. Adversarial search

Comments about assign 1. Quick search recap. Constraint Satisfaction Problems (CSPs) Search uninformed BFS, DFS, IDS. Adversarial search Constraint Satisfaction Problems (CSPs) CS5 David Kauchak Fall 00 http://www.xkcd.com/78/ Some material borrowed from: Sara Owsley Sood and others Comments about assign Grading actually out of 60 check

More information

Satisfiability Solvers

Satisfiability Solvers Satisfiability Solvers Part 1: Systematic Solvers 600.325/425 Declarative Methods - J. Eisner 1 Vars SAT solving has made some progress 100000 10000 1000 100 10 1 1960 1970 1980 1990 2000 2010 Year slide

More information

Backtracking Search (CSPs)

Backtracking Search (CSPs) CSC384: Intro to Artificial Intelligence Backtracking Search (CSPs STATE REPRESENTATION: Factored representation of state ALGORITHMS: general purpose for particular types of constraints (versus problem

More information

Introduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Dynamic Programming I Date: 10/6/16

Introduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Dynamic Programming I Date: 10/6/16 600.463 Introduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Dynamic Programming I Date: 10/6/16 11.1 Introduction Dynamic programming can be very confusing until you ve used it a

More information