Constraint satisfaction problems


I2AI: Lecture 04. Constraint satisfaction problems
Lubica Benuskova
Reading: AIMA 3rd ed., chap. 6, ending with 6.3.2

Constraint satisfaction problems (CSP)

We will examine cases in which we can define a structured representation for each state: a set of variables, each of which has a value. A CSP is solved when each variable has a value that satisfies all the constraints on that variable. CSP algorithms take advantage of this structure of states and use general-purpose rather than problem-specific heuristics to enable the solution of complex problems. The goal is to eliminate large portions of the search space by identifying variable/value combinations that violate the constraints.

Defining CSP

A CSP consists of three components, X, D and C:
X is a set of variables {X1, ..., Xn};
D is a set of domains {D1, ..., Dn}, one for each variable;
C is a set of constraints that specify allowable combinations of values.

Each domain Di consists of a set of allowable values {v1, ..., vk} for variable Xi. Each constraint Ci consists of a pair ⟨scope, rel⟩, where scope is a tuple of the variables that participate in the constraint and rel is a relation that defines the values those variables can take on, e.g. ⟨(X1, X2), X1 ≠ X2⟩.

More about constraints

A unary constraint restricts the values of a single variable. A binary constraint relates two variables; a binary CSP is one with only binary constraints, and it can be represented as a constraint graph. A constraint involving an arbitrary number of variables is called a global constraint. One of the most common global constraints is Alldiff (all values different), which says that all of the variables involved in the constraint must have different values.

Solving CSP

First, we need to define a state space and the notion of a solution.
Each state in a CSP is defined by an assignment of values to some or all of the variables, i.e. {Xi = vi, Xj = vj, ...}. An assignment that does not violate any constraints is called a consistent or legal assignment. A partial assignment assigns values to only some of the variables; a complete assignment is one in which every variable is assigned. A solution to a CSP is a consistent and complete assignment.

Example: map coloring of Australia

The task: color each region red, green, or blue in such a way that no neighboring regions have the same color.
Variables: X = {WA, NT, Q, NSW, V, SA, T}
The domain of each variable: Di = {red, green, blue}
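As an illustration, the map-coloring CSP above can be encoded directly as variables, domains, and binary not-equal constraints between neighboring regions. The sketch below (in Python; the names `variables`, `constraints`, `is_consistent` are our own illustrative choices, not from the lecture) checks whether a partial or complete assignment is consistent.

```python
# A minimal CSP encoding of the Australia map-coloring problem.
# Names (variables, domains, constraints, is_consistent) are illustrative.

variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}

# Binary not-equal constraints: one pair per border between regions.
constraints = [("SA", "WA"), ("SA", "NT"), ("SA", "Q"), ("SA", "NSW"),
               ("SA", "V"), ("WA", "NT"), ("NT", "Q"), ("Q", "NSW"),
               ("NSW", "V")]

def is_consistent(assignment):
    """A (partial) assignment is consistent if no constraint is violated
    by the variables assigned so far."""
    for x, y in constraints:
        if x in assignment and y in assignment and assignment[x] == assignment[y]:
            return False
    return True

# A complete and consistent assignment (a solution):
solution = {"WA": "red", "NT": "green", "Q": "red", "NSW": "green",
            "V": "red", "SA": "blue", "T": "red"}
```

Here `is_consistent(solution)` accepts the complete assignment above, while an assignment such as {SA = blue, WA = blue} is rejected because it violates the SA ≠ WA constraint.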

Constraint graph

If the CSP has only binary constraints, i.e. constraints that relate two variables, we can represent the problem as a constraint graph. The nodes of the graph correspond to the variables of the problem; a link connects any two variables that participate in a constraint.

Binary constraints: neighboring regions must have different colors. Since there are nine places where regions border, there are nine constraints:
C = {SA ≠ WA, SA ≠ NT, SA ≠ Q, SA ≠ NSW, SA ≠ V, WA ≠ NT, NT ≠ Q, NSW ≠ Q, NSW ≠ V}
(Tasmania borders no other region, so T participates in no constraint.)

Here we are using abbreviations: SA ≠ WA instead of ⟨(SA, WA), SA ≠ WA⟩, etc. It can be useful to fully enumerate the relation of SA ≠ WA (and of every other constraint) as {(red, green), (red, blue), (green, red), (green, blue), (blue, red), (blue, green)}.

Why formulate a problem as a CSP?

The main reason is that CSP solving can be faster than state-space search, because a CSP solver can quickly eliminate large portions of the search space. For example, once we have chosen {SA = blue} in the Australia coloring problem, we can conclude that none of the five neighboring variables can take on the value blue.

There are many solutions to this problem, such as this complete and consistent assignment of values to variables: {WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = red}.

Without taking advantage of constraint propagation, a search procedure would have to consider 3^5 = 243 assignments for the five neighboring variables; with constraint propagation we never have to consider blue as a value, so we have only 2^5 = 32 assignments to look at, a reduction of 87%.

Constraint propagation: a specific kind of inference in CSP

In CSPs there is a specific type of inference called constraint propagation, which does not require search. Constraint propagation uses the constraints to reduce the number of legal values for a variable, which in turn can reduce the legal values for another variable, and so on. The key idea is local consistency.
If we treat each variable as a node in a graph and each binary constraint as an arc, then the process of enforcing local consistency in each part of the graph causes inconsistent values to be eliminated throughout the graph. Let us consider node consistency and arc consistency.

Node consistency

A single variable (corresponding to a node in the CSP graph) is node-consistent if all the values in the variable's domain satisfy the variable's unary constraints. We say that a network is node-consistent if every variable in the network is node-consistent. It is always possible to eliminate all the unary constraints in a CSP by running node consistency: we remove node inconsistency by deleting from the domain those values that create it.
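Enforcing node consistency is simply filtering each domain by the unary constraints. A sketch (function names are our own; the unary constraint used in the example, that SA will not accept green, is purely illustrative):

```python
def enforce_node_consistency(domains, unary_constraints):
    """Delete from each domain every value that violates a unary constraint.
    unary_constraints maps a variable to a predicate over single values."""
    for var, allowed in unary_constraints.items():
        domains[var] = {v for v in domains[var] if allowed(v)}
    return domains

# Illustrative unary constraint: South Australia rejects green.
domains = {"SA": {"red", "green", "blue"}, "WA": {"red", "green", "blue"}}
enforce_node_consistency(domains, {"SA": lambda color: color != "green"})
```

After the call, SA's domain has shrunk to {red, blue} while WA's domain is untouched; the unary constraint has been eliminated from the problem.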

Arc consistency

A variable in a CSP is arc-consistent if every value in its domain satisfies the variable's binary constraints. More precisely, Xi is arc-consistent with respect to another variable Xj if for every value vi in the current domain Di there is some value vj in the domain Dj such that the assignments Xi = vi and Xj = vj are allowed by the binary constraint on the arc (Xi, Xj). If this is not the case, we delete from the domain Di those values for which there is no corresponding value in the domain Dj. A network is arc-consistent if every variable is arc-consistent with every other variable.

Arc consistency: example

For example, consider the constraint Y = X^2, where the domain of both X and Y is the set {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. We can write this constraint explicitly as ⟨(X, Y), {(0, 0), (1, 1), (2, 4), (3, 9)}⟩. To make X arc-consistent with respect to Y, we must reduce X's domain to {0, 1, 2, 3}. If we also make Y arc-consistent with respect to X, then Y's domain becomes {0, 1, 4, 9}, and the whole CSP is arc-consistent.

AC-3 algorithm (Mackworth, 1977)

To make every variable arc-consistent, the AC-3 algorithm maintains a queue of arcs to consider. (In fact, the order of consideration is not important, so the data structure is really a set, but tradition calls it a queue.) Initially, the queue contains all the arcs in the CSP. AC-3 then pops an arbitrary arc (Xi, Xj) from the queue and makes Xi arc-consistent with respect to the variable Xj; see the function REVISE.

AC-3 algorithm, continuation

If this revises Di (makes the domain smaller), then we add to the queue all arcs (Xk, Xi) where Xk is a neighbor of Xi. If Di is revised down to nothing, then we know the whole CSP has no consistent solution, and AC-3 returns failure.
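The REVISE step and the surrounding AC-3 loop can be sketched in Python as follows (the lecture's pseudocode figure is not reproduced in this transcription, so all names here are our own; binary constraints are represented as a predicate over pairs of values):

```python
from collections import deque

def revise(domains, xi, xj, allowed):
    """Make xi arc-consistent with respect to xj: delete every value of xi
    that has no supporting value in xj's domain. True if a value was deleted."""
    revised = False
    for vi in set(domains[xi]):
        if not any(allowed(xi, vi, xj, vj) for vj in domains[xj]):
            domains[xi].discard(vi)
            revised = True
    return revised

def ac3(domains, arcs, allowed):
    """arcs: all directed pairs (xi, xj) that carry a binary constraint.
    Returns False as soon as some domain is revised down to nothing."""
    # Remember, for each variable, which arcs point at it, so that a
    # revised domain triggers re-checking of its incoming arcs.
    incoming = {}
    for xi, xj in arcs:
        incoming.setdefault(xj, set()).add(xi)
    queue = deque(arcs)  # order does not matter; a set would do as well
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, allowed):
            if not domains[xi]:
                return False  # empty domain: no consistent solution
            for xk in incoming.get(xi, ()):
                if xk != xj:
                    queue.append((xk, xi))
    return True

# The Y = X^2 example: make both variables arc-consistent.
domains = {"X": set(range(10)), "Y": set(range(10))}
square = lambda a, va, b, vb: (vb == va * va) if a == "X" else (va == vb * vb)
ok = ac3(domains, [("X", "Y"), ("Y", "X")], square)
```

Running this on the Y = X^2 example reduces X's domain to {0, 1, 2, 3} and Y's domain to {0, 1, 4, 9}, matching the worked example above.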
Otherwise, we keep checking, trying to remove values from the domains of variables, until no more arcs are left in the queue.

Search in CSP: commutativity

Many CSPs cannot be solved by inference through constraint propagation alone; then we must search for a solution. Here we use a crucial property common to all CSPs: commutativity. CSPs are commutative because when assigning values to variables, we reach the same partial assignment regardless of the order of the assignments.

Backtracking search in CSP

The term backtracking search is used for a depth-first search that chooses a value for one variable at a time and backtracks when a variable has no legal values left to assign. Thanks to commutativity, we need only consider a single variable at each node of the search tree. For example, at the root node of a search tree for coloring the map of Australia, we might make a choice between SA = red, SA = green, and SA = blue, but we would never choose a combined assignment such as (SA = red and WA = blue).

The backtracking (BT) algorithm

1. Initially the map of Australia is empty (the initial state).
2. We fix an ordering of the variables (e.g. Q, NSW, V, T, SA, WA, NT).
3. Values are then assigned to the variables one at a time, as follows:
4. Assign all possible values to the first variable, i.e. assign the three colors (red, green, blue) to the territory Q.
5. Take the first unexpanded node, as in depth-first search, and assign the remaining colors, blue and green, to the second variable NSW.
6. Repeat step 5 for the next variable until we reach a correct assignment or violate a constraint.
7. If the current assignment violates a constraint, the BT algorithm returns to the last assigned value that has an untried alternative and continues from there.

Backtracking search in CSP: example

Let us consider the static ordering of variables Q, NSW, V, T, SA, WA, NT. The search tree branches on the values of one variable at a time, backtracking on a conflict for SA until a solution is found. Part of the search tree for the Australia map-coloring problem looks similar with another order of variables, WA, NT, Q, SA, NSW, V, T.

Ordering of variables and values

The chosen ordering of variables and value assignments affects the speed of finding the solution. (In the worst case, BT can lead to a cycle or to failure.) Thus we use several heuristics to make BT more efficient:

1. The minimum-remaining-values (MRV) heuristic, for optimizing the order of variables;
2. The degree heuristic, for choosing the first variable;
3. The least-constraining-value heuristic, for deciding the order in which to examine the values of a variable.

MRV heuristic

The intuitive idea of choosing the variable with the fewest legal values as the next variable to assign is called the minimum-remaining-values (MRV) heuristic.

MRV heuristic: example

The order of variables is WA, NT, Q, SA, NSW, V, T. For example, after the assignments WA = red and NT = green, it next makes sense to assign SA = blue rather than to assign Q.
Thus we change the order to WA, NT, SA, Q, NSW, V, T; the solution is then found faster.

The MRV heuristic has also been called the most-constrained-variable or fail-first heuristic, the latter because it picks the variable that is most likely to cause a failure soon, thereby pruning the search tree. If some variable X has no legal values left, the MRV heuristic will select X and failure will be detected immediately, avoiding pointless searches through the other variables. The MRV heuristic usually performs much better than a random or static ordering, sometimes by a factor of 1,000 or more.
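The MRV choice itself is a small computation over the remaining legal values; a sketch for the map-coloring setting (function names are our own):

```python
def legal_values(var, domains, assignment, neighbors):
    """Values of var not already taken by an assigned neighbor."""
    used = {assignment[n] for n in neighbors[var] if n in assignment}
    return [v for v in domains[var] if v not in used]

def mrv_variable(unassigned, domains, assignment, neighbors):
    """Pick the unassigned variable with the fewest remaining legal values."""
    return min(unassigned,
               key=lambda var: len(legal_values(var, domains, assignment, neighbors)))

# After WA = red and NT = green, SA has only one legal value left (blue),
# so MRV selects SA before Q.
neighbors = {"SA": ["WA", "NT", "Q", "NSW", "V"], "WA": ["SA", "NT"],
             "NT": ["SA", "WA", "Q"], "Q": ["SA", "NT", "NSW"],
             "NSW": ["SA", "Q", "V"], "V": ["SA", "NSW"], "T": []}
domains = {v: ["red", "green", "blue"] for v in neighbors}
assignment = {"WA": "red", "NT": "green"}
picked = mrv_variable(["Q", "SA", "NSW", "V", "T"], domains, assignment, neighbors)
```

In this situation SA has one legal value, Q has two, and the rest have three, so the heuristic picks SA, exactly as in the example above.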

Degree heuristic (degree = number of constraints)

The MRV heuristic may not help at all in choosing the first variable: in the Australia task, every region initially has the same number of remaining values. The degree heuristic reduces the branching factor on future choices by selecting the variable that is involved in the largest number of constraints on other unassigned variables. In the Australia task, SA is the variable with the highest degree, 5; the other variables have degree 2 or 3, except for T, which has degree 0.

The least-constraining-value heuristic

Once a variable has been selected, we must decide in which order to examine its values. The least-constraining-value heuristic prefers the value that rules out the smallest number of choices for the neighboring variables in the constraint graph. In general, this heuristic tries to leave the maximum flexibility for subsequent variable assignments. In other words, variable selection is fail-first, but value selection is fail-last: it makes sense to try first the value that is most likely to succeed, i.e. the least restricting one.

The least-constraining-value heuristic: example

The order of variables is WA, NT, Q, SA, NSW, V, T. For example, after the assignments WA = red and NT = green, the next variable is Q. Blue would be a bad choice for Q, because it would eliminate the last legal value left for SA; thus we choose Q = red.

Forward checking

AC-3 and other inference algorithms based on constraint propagation can infer reductions in the domains of variables before or during the search. One of the simplest forms of search combined with constraint propagation is so-called forward checking. Whenever a variable Xi is assigned a value, the forward-checking process establishes arc consistency for it: Xi is arc-consistent with respect to another variable Xj if for every value vi in the current domain Di there is some value in the domain Dj that satisfies the binary constraint on the arc (Xi, Xj).
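The pruning step that forward checking performs after each assignment can be sketched as follows (names are our own; the conflict here is equality of colors, as in map coloring):

```python
def forward_check(var, value, domains, neighbors, assignment):
    """After assigning var = value, delete from each unassigned neighbor's
    domain every value inconsistent with that choice (here: the same color).
    Returns False if some neighbor's domain is wiped out."""
    for other in neighbors[var]:
        if other not in assignment:
            domains[other] = {v for v in domains[other] if v != value}
            if not domains[other]:
                return False      # dead end detected without further search
    return True

neighbors = {"SA": ["WA", "NT", "Q", "NSW", "V"], "WA": ["SA", "NT"],
             "NT": ["SA", "WA", "Q"], "Q": ["SA", "NT", "NSW"],
             "NSW": ["SA", "Q", "V"], "V": ["SA", "NSW"], "T": []}
domains = {v: {"red", "green", "blue"} for v in neighbors}
forward_check("SA", "blue", domains, neighbors, {"SA": "blue"})
# Now no neighbor of SA may take the value blue.
```

After this call, blue has been removed from the domains of all five neighbors of SA, while T's domain is untouched, matching the earlier observation that choosing SA = blue rules out blue for the five neighboring variables.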
Forward checking, continuation

For each unassigned variable Xj that is connected to Xi by a constraint, delete from Xj's domain Dj any value that is inconsistent with the value chosen for Xi. (Note that no backtracking is involved in this step.)

The problem with forward checking is that it makes the current variable arc-consistent but does not look ahead and make all the other variables arc-consistent. Thus, although forward checking detects many inconsistencies, it does not detect all of them, so it is not the algorithm of first choice.

Summary

Many important real-world problems can be described as CSPs. Constraint propagation is the main tool of inference in CSPs. Backtracking search, a form of depth-first search, is commonly used for solving CSPs. The minimum-remaining-values and degree heuristics are domain-independent methods for deciding which variable to choose next in a backtracking search, while the least-constraining-value heuristic decides the order in which to examine the values of a variable. Inference (by constraint propagation) can be interwoven with search, as in forward checking.