Constraint Satisfaction Problems. Slides from: Padhraic Smyth, Bryan Low, S. Russell and P. Norvig, Jean-Claude Latombe


Standard search problems: the state is a black box (an arbitrary data structure), the goal test can be any function over states, and the successor function can also be anything. Constraint satisfaction problems (CSPs) are a special subset of search problems: the state is defined by a set of variables Xi with values from a domain D, and the goal test is a set of constraints specifying allowable combinations of values for the variables. This allows useful general-purpose algorithms with more power than standard search algorithms.

Example: N-Queens, Formulation 1: Variables: Xij, one 0/1 variable per board square. Domains: {0, 1}. Constraints: no two queens share a row, column, or diagonal, and there are exactly N queens on the board.

Example: N-Queens, Formulation 2: Variables: Qk, one per column, giving the row of the queen in column k. Domains: {1, 2, ..., N}. Constraints: Implicit: non-threatening(Qi, Qj) for all i < j. Explicit: each pair (Qi, Qj) must belong to an explicitly listed set of allowed value pairs.

Example: Sudoku. Variables: each (open) square. Domains: {1, 2, ..., 9}. Constraints: a 9-way alldiff for each column, a 9-way alldiff for each row, and a 9-way alldiff for each region (or one can have a bunch of pairwise inequality constraints instead).
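
To make the parenthetical concrete, here is a minimal Python sketch (not from the slides) showing how one n-way alldiff can be expanded into pairwise inequality constraints; the cell names r1c1 ... r1c9 and the (X, Y, relation) triple format are my own illustrative choices.

```python
from itertools import combinations

def alldiff_as_pairwise(variables):
    """Expand one n-way alldiff constraint into pairwise inequality constraints.

    Returns a list of (X, Y, relation) triples, where relation(x, y) is True
    exactly when the pair of values is allowed.
    """
    return [(x, y, lambda a, b: a != b) for x, y in combinations(variables, 2)]

# Example: the first row of a Sudoku grid, with hypothetical cell names r1c1 ... r1c9.
row1 = [f"r1c{c}" for c in range(1, 10)]
pairwise = alldiff_as_pairwise(row1)
print(len(pairwise))  # 36 binary constraints replace one 9-way alldiff
```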

Real-World CSPs Assignment problems: e.g., who teaches what class Timetabling problems: e.g., which class is offered when and where? Hardware configuration Transportation scheduling Factory scheduling Circuit layout Fault diagnosis

Constraint Satisfaction Problems (CSPs). To formulate a problem as a CSP we need to define: a finite set of variables V1, V2, ..., Vn; for each variable, a domain of possible values D1, D2, ..., Dn; and a finite set of constraints C1, C2, ..., Cm (each constraint Ci limits the values that variables can take, e.g., V1 ≠ V2). Each state in a CSP is defined as an assignment of values to some or all variables. A partial assignment is one that assigns values to only some of the variables; a complete assignment is one in which every variable is assigned; a consistent assignment is an assignment that does not violate the constraints. A solution to a CSP is a complete and consistent assignment.
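
A minimal Python sketch of this formulation, assuming constraints are binary and represented as (X, Y, allowed) triples; the helper names (is_consistent, is_complete, is_solution) are mine, not from the slides.

```python
def is_consistent(assignment, constraints):
    """True if no binary constraint between assigned variables is violated."""
    for (x, y, allowed) in constraints:          # allowed(vx, vy) -> bool
        if x in assignment and y in assignment:
            if not allowed(assignment[x], assignment[y]):
                return False
    return True

def is_complete(assignment, variables):
    """True if every variable has been given a value."""
    return all(v in assignment for v in variables)

def is_solution(assignment, variables, constraints):
    """A solution is a complete and consistent assignment."""
    return is_complete(assignment, variables) and is_consistent(assignment, constraints)
```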

CSP example: map coloring. Variables: {WA, NT, Q, NSW, V, SA, T}. Domains: each variable has the domain Di = {red, green, blue}. Constraints: adjacent regions must have different colors, e.g., WA ≠ NT.

CSP example: map coloring. Solutions are complete assignments that satisfy all constraints, e.g., {WA=red, NT=green, Q=red, NSW=green, V=red, SA=blue, T=green}.
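
For concreteness, here is the Australia map-coloring CSP written in the same (X, Y, allowed) triple style, together with a check of the solution quoted above; the adjacency list and names are my own sketch, not code from the slides.

```python
variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}

# Adjacent regions must get different colors; Tasmania has no constraints.
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
            ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]
constraints = [(x, y, lambda a, b: a != b) for x, y in adjacent]

solution = {"WA": "red", "NT": "green", "Q": "red", "NSW": "green",
            "V": "red", "SA": "blue", "T": "green"}

# Check the complete assignment against every constraint.
ok = all(allowed(solution[x], solution[y]) for x, y, allowed in constraints)
print(ok)  # True: this is the solution given on the slide
```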

CSPs are often visualized with a constraint graph: nodes correspond to variables, and an arc connects any two variables that participate in a constraint. We can use the graph structure to speed up search; e.g., Tasmania is an independent subproblem.

Varieties of CSPs. Discrete variables with finite domains: domain size d gives O(d^n) complete assignments. Discrete variables with infinite domains (integers, strings, etc.): e.g., job scheduling, where variables are start/end times for each job; linear constraints are solvable, nonlinear constraints are undecidable. Continuous variables: e.g., start/end times for Hubble Telescope observations; linear constraints are solvable in polynomial time by LP methods.

Varieties of constraints. Unary constraints involve a single variable, e.g., SA ≠ green. Binary constraints involve pairs of variables, e.g., SA ≠ WA. Higher-order constraints involve 3 or more variables, e.g., professors A, B, and C cannot be on a committee together; these can always be represented by multiple binary constraints (with auxiliary variables). Preferences (soft constraints), e.g., red is better than green, can often be represented by a cost for each variable assignment, giving a combination of optimization with CSPs.

CSP benefits: a standard representation, generic goal and successor functions, and generic heuristics (no domain-specific expertise). Many applications: airline schedules, cryptography, computer vision (image interpretation).

CSP as a Search Problem (incremental formulation). n variables X1, ..., Xn. Valid assignment: {Xi1 ← vi1, ..., Xik ← vik}, 0 ≤ k ≤ n, such that the values vi1, ..., vik satisfy all constraints relating the variables Xi1, ..., Xik. Complete assignment: one where k = n [if all variable domains have size d, there are O(d^n) complete assignments]. Initial state: the empty assignment {}, i.e., k = 0. Goal test: k = n. Successor of a state: {Xi1 ← vi1, ..., Xik ← vik} becomes {Xi1 ← vi1, ..., Xik ← vik, Xik+1 ← vik+1}.
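
A Python sketch of this incremental formulation's successor function, again assuming binary (X, Y, allowed) constraints; note that this naive version branches over every unassigned variable, which the commutativity observation below shows is unnecessary.

```python
def successors(assignment, variables, domains, constraints):
    """Yield every valid assignment that extends `assignment` by one variable."""
    unassigned = [v for v in variables if v not in assignment]
    for var in unassigned:                 # naive search tries every unassigned variable
        for value in domains[var]:
            child = dict(assignment)
            child[var] = value
            # keep only children that satisfy all constraints among assigned variables
            if all(allowed(child[x], child[y])
                   for x, y, allowed in constraints
                   if x in child and y in child):
                yield child
```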

What would BFS do? A bunch of assignments to one variable, followed by assignments to two variables. What would DFS do? What problems does naïve search have? (This is not a goal yet; let's try successors.)

A key property of CSPs: commutativity. The order in which variables are assigned values has no impact on the reachable complete valid assignments: [WA=red then NT=green] is the same as [NT=green then WA=red]. Hence one can expand a node N by first selecting one variable X not in the assignment A associated with N and then assigning any value v in the domain of X, a big reduction in branching factor (the depth of the solution is still n), and one need not store the path to a node. The backtracking search algorithm (DFS) chooses values for one variable at a time and backtracks when a variable has no legal values left to assign.

Backtracking search is the basic uninformed algorithm to solve CSPs. Idea 1: one variable at a time. Variable assignments are commutative, so fix an ordering; i.e., [WA = red then NT = green] is the same as [NT = green then WA = red]. We only need to consider assignments to a single variable at each step. Idea 2: check constraints as you go; i.e., consider only values which do not conflict with previous assignments. This might require some computation to check the constraints (an incremental goal test). Depth-first search with these two improvements is called backtracking search (not the best name). It can solve n-queens for n ≈ 25.

Backtracking Search (3 variables): an example trace.

Start with the empty assignment: Assignment = {}

Assign X1 ← v11. Assignment = {(X1, v11)}

Assign X3 ← v31. Assignment = {(X1, v11), (X3, v31)}

The algorithm then tries X2. Assume that no value of X2 leads to a valid assignment. The search backtracks to the previous variable (X3) and tries another value. Assignment = {(X1, v11), (X3, v31)}

Assign X3 ← v32. Assignment = {(X1, v11), (X3, v32)}

Assume again that no value of X2 leads to a valid assignment. The search backtracks to the previous variable (X3), but X3 has only two possible values, so the algorithm backtracks to X1. Assignment = {(X1, v11), (X3, v32)}

Assign X1 ← v12. Assignment = {(X1, v12)}

Assign X2 ← v21. Assignment = {(X1, v12), (X2, v21)} (the algorithm need not consider the variables in the same order in this sub-tree as in the other)

Assign X3 ← v32. Assignment = {(X1, v12), (X2, v21), (X3, v32)} (the algorithm need not consider the values of X3 in the same order in this sub-tree)

Since there are only three variables, the assignment is complete.

CSP Backtracking Algorithm
CSP-BACKTRACKING(A)
1. If assignment A is complete then return A
2. X ← select a variable not in A
3. D ← select an ordering on the domain of X
4. For each value v in D do
   a. Add (X ← v) to A
   b. If A is valid then
      i. result ← CSP-BACKTRACKING(A)
      ii. If result ≠ failure then return result
   c. Remove (X ← v) from A
5. Return failure
Call CSP-BACKTRACKING({})
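
A direct Python rendering of CSP-BACKTRACKING, under the same assumed binary (X, Y, allowed) constraint representation as the earlier sketches; failure is represented as None.

```python
FAILURE = None

def csp_backtracking(assignment, variables, domains, constraints):
    if len(assignment) == len(variables):                     # step 1: A is complete
        return assignment
    x = next(v for v in variables if v not in assignment)     # step 2: pick a variable
    for value in domains[x]:                                  # steps 3-4: try each value
        assignment[x] = value                                 # 4a: add (X <- v) to A
        valid = all(allowed(assignment[a], assignment[b])
                    for a, b, allowed in constraints
                    if a in assignment and b in assignment)
        if valid:                                             # 4b: recurse only if A is valid
            result = csp_backtracking(assignment, variables, domains, constraints)
            if result is not FAILURE:
                return result
        del assignment[x]                                     # 4c: remove (X <- v) from A
    return FAILURE                                            # step 5

# Usage: csp_backtracking({}, variables, domains, constraints) with the
# Australia map-coloring data sketched earlier.
```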

How to improve the Backtracking Algorithm so that it scales up better...

Forward Checking (filtering of domains): inference in the course of a search. Every time we assign a value to a variable we can infer new domain reductions in neighboring variables. A simple constraint-propagation technique, illustrated on the 8-queens board: assigning the value 5 to X1 leads to removing values from the domains of X2, X3, ..., X8.

Forward Checking in Map Coloring (constraint graph of the Australia map shown)
WA   NT   Q    NSW  V    SA   T
RGB  RGB  RGB  RGB  RGB  RGB  RGB

Forward Checking in Map Coloring
WA   NT   Q    NSW  V    SA   T
RGB  RGB  RGB  RGB  RGB  RGB  RGB
R    RGB  RGB  RGB  RGB  RGB  RGB
Forward checking removes the value Red from the domains of NT and SA.

Forward Checking in Map Coloring
WA   NT   Q    NSW  V    SA   T
RGB  RGB  RGB  RGB  RGB  RGB  RGB
R    GB   RGB  RGB  RGB  GB   RGB
R    GB   G    RGB  RGB  GB   RGB

Forward Checking in Map Coloring
WA   NT   Q    NSW  V    SA   T
RGB  RGB  RGB  RGB  RGB  RGB  RGB
R    GB   RGB  RGB  RGB  GB   RGB
R    B    G    RB   RGB  B    RGB
R    B    G    RB   B    B    RGB

Forward Checking in Map Coloring
WA   NT   Q    NSW  V    SA   T
RGB  RGB  RGB  RGB  RGB  RGB  RGB
R    GB   RGB  RGB  RGB  GB   RGB
R    B    G    RB   RGB  B    RGB
R    B    G    RB   B    B    RGB
Empty set: forward checking from V ← B empties SA's domain, so the current assignment {(WA ← R), (Q ← G), (V ← B)} does not lead to a solution.

Forward Checking (General Form)
Whenever a pair (X ← v) is added to assignment A do:
  For each variable Y not in A do:
    For every constraint C relating Y to the variables in A do:
      Remove all values from Y's domain that do not satisfy C
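
A Python sketch of forward checking in this general form, restricted to binary (X, Y, allowed) constraints; the function name and signature are my own assumptions.

```python
import copy

def forward_checking(var_domains, x, v, assignment, constraints):
    """Return a copy of var_domains filtered after assigning x <- v."""
    domains = copy.deepcopy(var_domains)
    domains[x] = {v}
    for (a, b, allowed) in constraints:
        # Only prune variables not yet in the assignment, against the new value.
        if a == x and b not in assignment:
            domains[b] = {w for w in domains[b] if allowed(v, w)}
        elif b == x and a not in assignment:
            domains[a] = {w for w in domains[a] if allowed(w, v)}
    return domains
```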

Modified Backtracking Algorithm
CSP-BACKTRACKING(A, var-domains)
1. If assignment A is complete then return A
2. X ← select a variable not in A
3. D ← select an ordering on the domain of X
4. For each value v in D do
   a. Add (X ← v) to A
   b. var-domains ← forward-checking(var-domains, X, v, A)
   c. If no variable has an empty domain then
      (i) result ← CSP-BACKTRACKING(A, var-domains)
      (ii) If result ≠ failure then return result
   d. Remove (X ← v) from A
5. Return failure

Modified Backtracking Algorithm (continued). Note on step 4: compared with the basic algorithm, there is no need any more to verify that A is valid, since forward checking in step 4b only leaves domain values consistent with the current assignment and step 4c detects empty domains.

Modified Backtracking Algorithm (continued). Note on step 4c(i): the updated variable domains (after forward checking) need to be passed down in the recursive call.
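
Putting the pieces together, a sketch of the modified backtracking algorithm; it reuses the forward_checking helper sketched above, and again represents failure as None. These names are my own, not the slides' code.

```python
def csp_backtracking_fc(assignment, var_domains, variables, constraints):
    if len(assignment) == len(variables):                       # step 1
        return assignment
    x = next(v for v in variables if v not in assignment)       # step 2 (any heuristic fits here)
    for value in list(var_domains[x]):                          # steps 3-4
        assignment[x] = value                                   # 4a
        pruned = forward_checking(var_domains, x, value, assignment, constraints)  # 4b
        if all(pruned[v] for v in variables if v not in assignment):  # 4c: no empty domain
            result = csp_backtracking_fc(assignment, pruned, variables, constraints)
            if result is not None:                              # result != failure
                return result
        del assignment[x]                                       # 4d
    return None                                                 # step 5: failure
```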

N-Queens Problem How to formulate as a CSP?

4-Queens Problem: how to solve it using CSP-Backtracking? (4x4 board figure.) Variables X1, X2, X3, X4, each with domain {1, 2, 3, 4}. The algorithm performs forward checking, which eliminates 2 values in each other variable's domain.

1) Which variable X should be assigned a value next? The current assignment may not lead to any solution, but the algorithm does not know it yet. Selecting the right variable X may help discover the contradiction more quickly. 2) In which order should X's values be assigned? The current assignment may be part of a solution. Selecting the right value to assign to X may help discover this solution more quickly. More on these questions in a short while...

Heuristics. 1) Which variable Xi should be assigned a value next? The most-constrained-variable heuristic (minimum remaining values) and the most-constraining-variable heuristic (degree). 2) In which order should its values be assigned? The least-constraining-value heuristic. These heuristics can be confusing; keep in mind that all variables must eventually get a value, while only one value from a domain must be assigned to each variable.


Most-Constrained-Variable Heuristic. 1) Which variable Xi should be assigned a value next? Select the variable with the smallest remaining domain. [Rationale: minimize the branching factor.]

Map Coloring (Australia map figure). SA's remaining domain has size 1 (only the value Blue remains); Q's remaining domain has size 2; NSW's, V's, and T's remaining domains have size 3. Select SA.

Most-Constraining-Variable Heuristic. 1) Which variable Xi should be assigned a value next? Among the variables with the smallest remaining domains (ties with respect to the most-constrained-variable heuristic), select the one that appears in the largest number of constraints on variables not in the current assignment. [Rationale: increase future elimination of values, to reduce future branching factors.]
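
A sketch of variable selection combining both heuristics: smallest remaining domain first, with the degree heuristic as a tie-breaker (implemented here by minimizing a (domain size, -degree) key); names follow the earlier sketches and are not from the slides.

```python
def select_unassigned_variable(assignment, variables, domains, constraints):
    unassigned = [v for v in variables if v not in assignment]

    def degree(v):
        # Number of constraints linking v to other unassigned variables.
        return sum(1 for x, y, _ in constraints
                   if (x == v and y not in assignment and y != v) or
                      (y == v and x not in assignment and x != v))

    # Smallest remaining domain first; among ties, largest degree first.
    return min(unassigned, key=lambda v: (len(domains[v]), -degree(v)))
```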

Map Coloring (Australia map figure). Before any value has been assigned, all variables have a domain of size 3, but SA is involved in more constraints (5) than any other variable. Select SA and assign a value to it (e.g., Blue).

Modified Backtracking Algorithm with heuristics (CSP-BACKTRACKING(A, var-domains) as above). In step 2 (X ← select a variable not in A), use 1) the most-constrained-variable heuristic (select the variable with the smallest remaining domain) and 2) the most-constraining-variable heuristic (among ties, select the variable that appears in the largest number of constraints on variables not in the current assignment). In step 3 (D ← select an ordering on the domain of X), use 3) the least-constraining-value heuristic.

Least-Constraining-Value Heuristic. 2) In which order should X's values be assigned? Select the value of X that removes the smallest number of values from the domains of those variables which are not in the current assignment. [Rationale: since only one value will eventually be assigned to X, pick the least-constraining value first, since it is the most likely not to lead to an invalid assignment.] [Note: using this heuristic requires performing a forward-checking step for every value, not just for the selected value.]
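
A sketch of least-constraining-value ordering; as the note says, it runs one forward-checking pass per candidate value (reusing the forward_checking helper sketched earlier). Names are my own.

```python
def order_domain_values(x, assignment, var_domains, constraints):
    """Return the values of x ordered from least to most constraining."""
    def values_removed(v):
        pruned = forward_checking(var_domains, x, v, assignment, constraints)
        # Count how many values this choice removes from unassigned neighbors.
        return sum(len(var_domains[y]) - len(pruned[y])
                   for y in var_domains if y != x and y not in assignment)
    return sorted(var_domains[x], key=values_removed)
```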

Map Coloring (Australia map figure). Q's domain has two remaining values: Blue and Red. Assigning Blue to Q would leave 0 values for SA, while assigning Red would leave 1 value.

Map Coloring (Australia map figure; with Red assigned to Q, SA's domain is {Blue}). Q's domain has two remaining values: Blue and Red. Assigning Blue to Q would leave 0 values for SA, while assigning Red would leave 1 value. So, assign Red to Q.

Modified Backtracking Algorithm (summary): CSP-BACKTRACKING(A, var-domains) as above, with variable selection guided by 1) the most-constrained-variable and 2) the most-constraining-variable heuristics, and value ordering guided by 3) the least-constraining-value heuristic.

Inference before the search begins (a pre-processing step): constraint propagation, like forward checking; the AC-3 algorithm (Arc Consistency Algorithm 3). A.K. Mackworth. Consistency in networks of relations. Artificial Intelligence, 8:99-118, 1977.

Constraint propagation. Solving CSPs with a combination of heuristics plus forward checking is more efficient than either approach alone. Forward checking does not detect all failures (it just checks assigned variables and their neighbors). E.g., NT and SA cannot both be blue, yet forward checking does not notice the conflict.

Constraint propagation. Techniques like constraint propagation and forward checking are in effect eliminating parts of the search space, somewhat complementary to search. Constraint propagation goes further than forward checking by repeatedly enforcing constraints locally; it needs to be faster than actually searching to be effective. Arc consistency (AC) is a systematic procedure for constraint propagation.

Arc consistency. An arc X → Y is consistent if for every value x of X there is some value y of Y consistent with x (note that this is a directed property). Consider the state of the search after WA and Q are assigned: SA → NSW is consistent if SA=blue and NSW=red.

Arc consistency. X → Y is consistent if for every value x of X there is some value y of Y consistent with x. NSW → SA is consistent for NSW=red (with SA=blue), but for NSW=blue there is no consistent value: SA=???

Arc consistency. We can enforce arc consistency: the arc can be made consistent by removing blue from NSW. Continue to propagate constraints: check V → NSW; it is not consistent for V = red, so remove red from V.

Arc consistency. Continue to propagate constraints: SA → NT is not consistent and cannot be made consistent. Arc consistency detects failure earlier than forward checking.

Constraint Propagation for Binary Constraints
REMOVE-VALUES(X,Y) removes every value of Y that is incompatible with the values of X.
REMOVE-VALUES(X,Y)
1. removed ← false
2. For every value v in the domain of Y do
   If there is no value u in the domain of X such that the constraint on (X,Y) is satisfied then
   a. Remove v from Y's domain
   b. removed ← true
3. Return removed

Constraint Propagation for Binary Constraints
AC3
1. Initialize queue Q with all variables (not yet instantiated)
2. While Q ≠ ∅ do
   a. X ← Remove(Q)
   b. For every (not yet instantiated) variable Y related to X by a (binary) constraint do
      If REMOVE-VALUES(X,Y) then
      i. If Y's domain = ∅ then exit
      ii. Insert(Y,Q)
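
A Python sketch of REMOVE-VALUES and AC3, under the assumption that domains are sets, binary constraints are given as a dict mapping each ordered pair (X, Y) to an allowed(x_value, y_value) predicate (in both directions), and neighbors maps each variable to the variables it shares a constraint with; these names are my assumptions, not the slides' code.

```python
from collections import deque

def remove_values(x, y, domains, constraints):
    """Remove every value of Y incompatible with all values of X; return True if any removed."""
    allowed = constraints[(x, y)]
    removed = False
    for v in set(domains[y]):
        if not any(allowed(u, v) for u in domains[x]):
            domains[y].discard(v)
            removed = True
    return removed

def ac3(domains, constraints, neighbors):
    """Propagate constraints until a fixed point (or until some domain becomes empty)."""
    queue = deque(domains)                      # initialize Q with all variables
    while queue:
        x = queue.popleft()
        for y in neighbors[x]:
            if remove_values(x, y, domains, constraints):
                if not domains[y]:              # empty domain: no solution from here
                    return False
                queue.append(y)
    return True
```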

Complexity Analysis of AC3. Let n = number of variables, d = size of the initial domains, and s = maximum number of constraints involving a given variable (s ≤ n-1). Each variable is inserted in Q up to d times, and REMOVE-VALUES takes O(d^2) time, so AC3 takes O(n·d·s·d^2) = O(n·s·d^3) time. It is usually more expensive than forward checking.

K-consistency. Arc consistency does not detect all inconsistencies: the partial assignment {WA=red, NSW=red} is inconsistent. Stronger forms of propagation can be defined using the notion of k-consistency. A CSP is k-consistent if for any set of k-1 variables and for any consistent assignment to those variables, a consistent value can always be assigned to any kth variable. E.g., 1-consistency = node consistency, 2-consistency = arc consistency, 3-consistency = path consistency. Strongly k-consistent: k-consistent for all values {k, k-1, ..., 2, 1}.

Trade-offs. Running stronger consistency checks takes more time, but will reduce the branching factor and detect more inconsistent partial assignments. No free lunch: in the worst case, n-consistency takes exponential time.

Further improvements. Checking special constraints: checking the Alldiff() constraint (e.g., {WA=red, NSW=red}); checking the Atmost() constraint; bounds propagation for larger value domains. Intelligent backtracking: the standard form is chronological backtracking, i.e., try a different value for the preceding variable. A more intelligent option is to backtrack to the conflict set: the set of variables that caused the failure, or the set of previously assigned variables that are connected to X by constraints. Backjumping moves back to the most recent element of the conflict set. Forward checking can be used to determine the conflict set.

Local search for CSPs. Use a complete-state representation: the initial state assigns values to all variables, and successor states change 1 (or more) values. For CSPs we allow states with unsatisfied constraints (unlike backtracking); operators reassign variable values (hill-climbing with n-queens is an example). Variable selection: randomly select any conflicted variable. Value selection: the min-conflicts heuristic, i.e., select the new value that results in a minimum number of conflicts with the other variables.

Local search for CSP
function MIN-CONFLICTS(csp, max_steps) returns a solution or failure
  inputs: csp, a constraint satisfaction problem
          max_steps, the number of steps allowed before giving up
  current ← an initial complete assignment for csp
  for i = 1 to max_steps do
    if current is a solution for csp then return current
    var ← a randomly chosen, conflicted variable from VARIABLES[csp]
    value ← the value v for var that minimizes CONFLICTS(var, v, current, csp)
    set var = value in current
  return failure
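
A Python sketch of MIN-CONFLICTS specialized to n-queens (variable i is the row of the queen in column i); this is my own illustration of the pseudocode, with ties broken by the smallest row for simplicity rather than randomly as on the slide.

```python
import random

def conflicts(rows, col, row):
    """Number of queens attacking square (col, row), excluding column col itself."""
    return sum(1 for c, r in enumerate(rows)
               if c != col and (r == row or abs(r - row) == abs(c - col)))

def min_conflicts_queens(n, max_steps=100_000):
    rows = [random.randrange(n) for _ in range(n)]        # initial complete assignment
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(rows, c, rows[c]) > 0]
        if not conflicted:                                # current is a solution
            return rows
        col = random.choice(conflicted)                   # randomly chosen conflicted variable
        # choose the value (row) minimizing the number of conflicts
        rows[col] = min(range(n), key=lambda r: conflicts(rows, col, r))
    return None                                           # failure

print(min_conflicts_queens(8))
```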

Min-conflicts example 1 (board figures with h=5, h=3, h=1): use of the min-conflicts heuristic in hill-climbing.

Min-conflicts example 2: a two-step solution for an 8-queens problem using the min-conflicts heuristic. At each stage a queen is chosen for reassignment in its column; the algorithm moves the queen to the min-conflicts square, breaking ties randomly.

Comparison of CSP algorithms on different problems (table): median number of consistency checks over 5 runs to solve each problem; parentheses indicate no solution was found. Problems: USA 4-coloring; n-queens with n = 2 to 50; the Zebra puzzle (see exercise 5.13).

Advantages of local search. Local search can be particularly useful in an online setting. Airline schedule example: mechanical problems require that a plane is taken out of service; we can locally search for another close solution in state-space, which is much better (and faster) in practice than finding an entirely new schedule. The runtime of min-conflicts is roughly independent of problem size: it can solve the million-queens problem in roughly 50 steps. Why? n-queens is easy for local search because of the relatively high density of solutions in state-space.