Artificial Intelligence
3. Constraint Satisfaction Problems

Lars Schmidt-Thieme
Information Systems and Machine Learning Lab (ISMLL)
Institute of Economics and Information Systems & Institute of Computer Science
University of Hildesheim
http://www.ismll.uni-hildesheim.de

Course on Artificial Intelligence, winter term 2012/2013

Outline
1. Constraint Satisfaction Problems
2. Backtracking Search
3. Local Search
4. The Structure of Problems

Problem Definition

A constraint satisfaction problem (CSP) consists of
- variables X_1, X_2, ..., X_n with values from given domains dom X_i (i = 1, ..., n),
- constraints C_1, C_2, ..., C_m, i.e., functions defined on some of the variables, var C_j ⊆ {X_1, ..., X_n}:
    C_j : ∏_{X ∈ var C_j} dom X → {true, false},   j = 1, ..., m

Assignments

An assignment A gives values to some of the variables var A ⊆ {X_1, ..., X_n}, e.g.,
    A : X_3 = 7, X_5 = 1, X_6 = 2

An assignment A that does not violate any constraint is called consistent / legal:
    C_j(A) = true for all C_j with var C_j ⊆ var A,  j = 1, ..., m

An assignment A for all variables is called complete:
    var A = {X_1, ..., X_n}

A consistent complete assignment is called a solution.

Some CSPs additionally require an objective function to be maximal.

Example / 8-Queens

variables: Q_1, Q_2, Q_3, Q_4, Q_5, Q_6, Q_7, Q_8
domains: {1, 2, 3, 4, 5, 6, 7, 8}
constraints: Q_1 ≠ Q_2, Q_1 ≠ Q_2 − 1, Q_1 ≠ Q_2 + 1, Q_1 ≠ Q_3, Q_1 ≠ Q_3 + 2, Q_1 ≠ Q_3 − 2, ...
consistent assignment: Q_1 = 1, Q_2 = 3, Q_3 = 5, Q_4 = 7, Q_5 = 2, Q_6 = 4, Q_7 = 6

Example / Map Coloring

variables: WA, NT, SA, Q, NSW, V, T
domains: {red, green, blue}
constraints: WA ≠ NT, WA ≠ SA, NT ≠ SA, NT ≠ Q, ...
solution: WA = red, NT = green, SA = blue, Q = red, NSW = green, V = red, T = green

[figure: map of Australia with the regions Western Australia, Northern Territory, South Australia, Queensland, New South Wales, Victoria, Tasmania]

CSPs as Search Problems

Incremental formulation:
- states: consistent assignments.
- initial state: empty assignment.
- successor function: assign any not yet assigned variable s.t. the resulting assignment is still consistent.
- goal test: assignment is complete.
- path cost: constant cost 1 for each step.
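
For illustration (not part of the original slides), such a CSP can be written down directly as variables, domains, and constraints. The (scope, predicate) representation and the helper name consistent below are assumptions, also reused by later sketches:

    # A minimal sketch of the map-coloring CSP: variable names, per-variable
    # domains, and binary constraints given as (scope, predicate) pairs.
    variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
    domains = {v: {"red", "green", "blue"} for v in variables}

    def different(x, y):
        return x != y

    # one inequality constraint per pair of adjacent regions
    adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
                ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]
    constraints = [((x, y), different) for x, y in adjacent]

    def consistent(assignment, constraints):
        """True iff no constraint whose variables are all assigned is violated."""
        for scope, pred in constraints:
            if all(v in assignment for v in scope):
                if not pred(*(assignment[v] for v in scope)):
                    return False
        return True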

Types of Variables & Constraints

finite domains:
- condition: |dom X_i| < ∞
- examples: 8-queens: |dom Q_i| = 8; map coloring: |dom X_i| = 3
- special case: binary CSPs (|dom X_i| = 2)
- constraints can be provided by enumeration, e.g., (WA, NT) ∈ {(r, g), (r, b), (g, r), (g, b), (b, r), (b, g)}

infinite domains:
- condition: otherwise
- example: scheduling: dom X_i = ℕ (number of days from now)
- special cases: integer domains (dom X_i = ℕ), continuous domains (dom X_i = ℝ, or an interval)
- constraints must be specified using a constraint language, e.g., linear constraints

Binary Constraints

Constraints can be classified by the number |var C_j| of variables they depend on:

- unary constraint: depends on a single variable X_i.
  Uninteresting: can be eliminated by inclusion in the domain dom X_i.
- binary constraint: depends on two variables X_i and X_j.
  Can be represented as a constraint graph.

[figure: the original map of Australia and the corresponding constraint graph over WA, NT, SA, Q, NSW, V, T]

n-ary Constraints

A constraint of higher order / n-ary constraint depends on more than two variables. It can be represented as a constraint hypergraph.

[figure: constraint hypergraph for alldiff(X, Y, Z)]

n-ary Constraints

n-ary constraints sometimes can be reduced to binary constraints in a trivial way.

[figure: the hypergraph for alldiff(X, Y, Z) binarized into the constraint graph X ≠ Y, X ≠ Z, Y ≠ Z]

n-ary Constraints

n-ary constraints always can be reduced to binary constraints by introducing additional auxiliary variables, with the cartesian product of the original domains as new domain and the original n-ary constraint as a unary constraint on the auxiliary variable.

[figure: the hypergraph with constraints X < Y and X + Y = Z binarized via an auxiliary variable A := (X, Y, Z) and the binary constraints X = A_1, Y = A_2, Z = A_3, X < Y]

Auxiliary Variables

Sometimes auxiliary variables are also necessary to represent a problem as a CSP.

Example: cryptarithmetic puzzle. Assign each letter a digit s.t. the resulting arithmetic expression is true.

      T W O
    + T W O
    -------
    F O U R

    O + O = R + 10·X_1
    X_1 + W + W = U + 10·X_2
    X_2 + T + T = O + 10·X_3
    X_3 = F

Outline
1. Constraint Satisfaction Problems
2. Backtracking Search
3. Local Search
4. The Structure of Problems

Depth-First Search: Backtracking

Uninformed depth-first search is called backtracking for CSPs.

1  backtracking(variables X, constraints C, assignment A):
2    if X = ∅ return A fi
3    X := choose(X)
4    A' := failure
5    for v ∈ values(X, A, C) while A' = failure do
6      A' := backtracking(X \ {X}, C, A ∪ {X = v})
7    od
8    return A'

where
    values(X, A, C) := {v ∈ dom X | ∀C ∈ C with var C ⊆ var A ∪ {X} : C(A, X = v) = true}
denotes the values for variable X consistent with assignment A w.r.t. the constraints C.
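
A runnable Python version of this procedure (my own sketch, building on the hypothetical CSP representation above; it returns None instead of failure):

    def values(X, assignment, constraints, domains):
        """Values of X consistent with the current assignment (the set 'values' above)."""
        legal = []
        for v in domains[X]:
            trial = dict(assignment, **{X: v})
            if consistent(trial, constraints):   # 'consistent' as sketched earlier
                legal.append(v)
        return legal

    def backtracking(unassigned, constraints, domains, assignment=None):
        """Plain depth-first backtracking; returns a solution dict or None (failure)."""
        assignment = {} if assignment is None else assignment
        if not unassigned:
            return assignment
        X = next(iter(unassigned))               # 'choose' any unassigned variable
        for v in values(X, assignment, constraints, domains):
            result = backtracking(unassigned - {X}, constraints, domains,
                                  dict(assignment, **{X: v}))
            if result is not None:
                return result
        return None

    # usage: backtracking(set(variables), constraints, domains)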

Backtracking / Example

[figure: backtracking on the map-coloring problem, extending a consistent partial assignment by one region per step]

Variable Ordering / MRV

Which variable is selected in line 3 can be steered by heuristics:

minimum remaining values (MRV): select the variable with the smallest number of remaining values, i.e., the fewest legal values:
    X := argmin_{X ∈ X} |values(X, A, C)|

Variable Ordering / Degree Heuristic

degree heuristic: select the variable that is involved in the largest number of unresolved constraints, i.e., with the most constraints on remaining variables:
    X := argmax_{X ∈ X} |{C ∈ C | X ∈ var C, var C ⊄ var A ∪ {X}}|

Usually one first applies MRV and breaks ties by the degree heuristic.

Value Ordering

The order in which values for the selected variable are tried can also be steered by a heuristic:

least constraining value: given a variable, try values in descending order of the number of choices left for the remaining variables, i.e., try first the value that rules out the fewest values in the remaining variables:
    order v ∈ values(X, A, C) by descending Σ_{Y ∈ X \ {X}} |values(Y, A ∪ {X = v}, C)|

[figure: in the map-coloring example, one candidate value allows 1 value for SA, the other allows 0 values for SA]

Combining these heuristics makes 1000 queens feasible.
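
For illustration (not from the slides), these variable- and value-ordering heuristics could plug into the backtracking sketch above as follows; values is the helper defined there:

    def mrv_variable(unassigned, assignment, constraints, domains):
        """Minimum remaining values: pick the unassigned variable with fewest legal values."""
        return min(unassigned,
                   key=lambda X: len(values(X, assignment, constraints, domains)))

    def lcv_order(X, unassigned, assignment, constraints, domains):
        """Least constraining value: try values that leave the most choices for the others."""
        def freedom(v):
            trial = dict(assignment, **{X: v})
            return sum(len(values(Y, trial, constraints, domains))
                       for Y in unassigned if Y != X)
        return sorted(values(X, assignment, constraints, domains),
                      key=freedom, reverse=True)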

Forward Checking

The minimum remaining values (MRV) heuristic can be implemented efficiently by keeping track of the remaining values values(X, A, C) of all unassigned variables. This is called forward checking.

1   backtracking-fc(variables X, remaining values (values(X))_{X ∈ X}, constraints C, assignment A):
2     if X = ∅ return A fi
3     X := argmin_{X ∈ X} |values(X)|
4     A' := failure
5     for v ∈ values(X) while A' = failure do
6       illegal(Y) := {w ∈ values(Y) | ∃C ∈ C : X, Y ∈ var C, var C ⊆ var A ∪ {X, Y},
7                      C(A, X = v, Y = w) = false},   Y ∈ X \ {X}
8       A' := backtracking-fc(X \ {X}, (values(Y) \ illegal(Y))_{Y ∈ X \ {X}}, C, A ∪ {X = v})
9     od
10    return A'
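
A sketch of forward checking in Python (own illustration; remaining is a hypothetical map from each unassigned variable to its set of still-legal values, and consistent is the helper from the earlier sketch):

    def backtracking_fc(unassigned, remaining, constraints, assignment=None):
        """Backtracking with forward checking: after assigning X = v, prune the
        remaining values of the other unassigned variables and stop early if any
        of them becomes empty."""
        assignment = {} if assignment is None else assignment
        if not unassigned:
            return assignment
        X = min(unassigned, key=lambda v: len(remaining[v]))     # MRV comes for free
        for v in remaining[X]:
            trial = dict(assignment, **{X: v})
            pruned = {}
            for Y in unassigned - {X}:
                # keep only values of Y consistent with all assigned variables incl. X = v
                pruned[Y] = {w for w in remaining[Y]
                             if consistent(dict(trial, **{Y: w}), constraints)}
            if all(pruned[Y] for Y in pruned):                   # no empty domain
                result = backtracking_fc(unassigned - {X}, {**remaining, **pruned},
                                         constraints, trial)
                if result is not None:
                    return result
        return None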

Forward Checking

Idea: keep track of the remaining legal values for unassigned variables, and terminate search when any variable has no legal values.

[figure: forward checking on the map-coloring problem — the domains of WA, NT, Q, NSW, V, SA, T shrink after each assignment]

Constraint Propagation

Forward checking propagates information from assigned to unassigned variables, but doesn't provide early detection for all failures:

[figure: forward-checking domain table for WA, NT, Q, NSW, V, SA, T]

NT and SA cannot both be blue!

Constraint propagation repeatedly enforces constraints locally.

Arc Consistency

One could also use a stronger consistency check: if there is for some unassigned variable X a possible value v, there is a constraint C linking X to another unassigned variable Y, and setting X = v would rule out all remaining values for Y via C, then we can remove v as a possible value for X.

Example: values(SA) = {b}, values(NSW) = {r, b}, C : NSW ≠ SA
⇒ NSW = b is not possible, as C would lead to values(SA) = ∅.

Removing such a value may make other arcs inconsistent, thus this has to be done repeatedly.

Arc Consistency

1   arc-consistency(variables X, remaining values (values(X))_{X ∈ X}, constraints C):
2     arcs := ((X, Y, C) ∈ X² × C | var C = {X, Y}), in any order
3     while arcs ≠ ∅ do
4       (X, Y, C) := remove-first(arcs)
5       illegal := {v ∈ values(X) | ∀w ∈ values(Y) : C(X = v, Y = w) = false}
6       if illegal ≠ ∅
7         values(X) := values(X) \ illegal
8         append(arcs, ((Y', X', C') ∈ X² × C | X' = X, Y' ≠ Y, var C' = {X', Y'}))
9       fi
10    od
11    return (values(X))_{X ∈ X}
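
A Python sketch of this procedure (own illustration, again assuming the hypothetical (scope, predicate) constraint representation from the earlier sketches):

    from collections import deque

    def arc_consistency(variables, remaining, constraints):
        """AC-3-style propagation over binary constraints, following the pseudocode
        above; 'remaining' maps each variable to its set of candidate values."""
        binary = [(scope, pred) for scope, pred in constraints if len(scope) == 2]

        def holds(scope, pred, X, v, Y, w):
            # evaluate the predicate with arguments in the order given by its scope
            args = {X: v, Y: w}
            return pred(args[scope[0]], args[scope[1]])

        arcs = deque((X, Y, scope, pred) for scope, pred in binary
                     for X, Y in (scope, scope[::-1]))
        while arcs:
            X, Y, scope, pred = arcs.popleft()
            # values of X without any supporting value in Y are illegal
            illegal = {v for v in remaining[X]
                       if not any(holds(scope, pred, X, v, Y, w) for w in remaining[Y])}
            if illegal:
                remaining[X] -= illegal
                # X lost values: arcs (Z, X) for other neighbours Z must be rechecked
                arcs.extend((Z, X2, sc, p) for sc, p in binary
                            for Z, X2 in (sc, sc[::-1]) if X2 == X and Z != Y)
        return remaining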

Arc Consistency

The simplest form of propagation makes each arc consistent:
X → Y is consistent iff for every value x of X there is some allowed y.

If X loses a value, the neighbors of X need to be rechecked.

Arc consistency detects failure earlier than forward checking. It can be run as a preprocessor or after each assignment.

[figure: arc consistency on the map-coloring problem, domains of WA, NT, Q, NSW, V, SA, T]

k-Consistency

k-consistency: any consistent assignment of any k − 1 variables can be extended to a consistent assignment of k variables, for any choice of the k-th variable.

- 1-consistency: node consistency
- 2-consistency: arc consistency
- 3-consistency: path consistency

strongly k-consistent: 1-consistent and 2-consistent and ... and k-consistent.

Strong n-consistency (where n is the number of variables) renders a CSP trivial: select a value for X_1, compute the remaining values for the other variables, then pick one for X_2, etc. Strong n-consistency guarantees that there is no step where backtracking is necessary.

Outline
1. Constraint Satisfaction Problems
2. Backtracking Search
3. Local Search
4. The Structure of Problems

Min Conflicts

A sort of greedy local search:
- states: complete assignments
- neighborhood: re-assigning a (randomly picked) conflicting variable
- goal: no conflicts

1  min-conflicts(variables X, constraints C):
2    A := random complete assignment for X
3    for i := 1, ..., maxsteps while ∃C ∈ C : C(A) = false do
4      X := random({X ∈ X | ∃C ∈ C : C(A) = false and X ∈ var C})
5      v := argmin_{v ∈ dom X} |{C ∈ C | C(A, X = v) = false, X ∈ var C}|
6      A_X := v
7    od
8    return A if ∀C ∈ C : C(A) = true, else failure

[figure: 8-queens board annotated with the number of conflicts for each candidate square of the queen being moved]
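
As an illustration (my own sketch, not from the slides), a minimal min-conflicts search for n-queens, using the common column-wise encoding of the board:

    import random

    def min_conflicts_nqueens(n, maxsteps=100_000):
        """Min-conflicts local search for n-queens: queens[c] is the row of the
        queen in column c; a conflict is a shared row or diagonal."""
        def conflicts(queens, col, row):
            return sum(1 for c in range(n) if c != col and
                       (queens[c] == row or abs(queens[c] - row) == abs(c - col)))

        queens = [random.randrange(n) for _ in range(n)]
        for _ in range(maxsteps):
            conflicted = [c for c in range(n) if conflicts(queens, c, queens[c]) > 0]
            if not conflicted:
                return queens                      # no conflicts: a solution
            col = random.choice(conflicted)        # pick a random conflicting variable
            # re-assign it to a value with the minimum number of conflicts
            queens[col] = min(range(n), key=lambda row: conflicts(queens, col, row))
        return None                                # give up after maxsteps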

Min Conflicts / Performance

Min conflicts finds a solution for the n-queens problem very quickly, even for very large n, e.g., n = 10,000,000 (starting from a random initial state).

Min conflicts can also solve large randomly generated CSPs very quickly, except in a narrow range of the constraints-to-variables ratio
    R := number of constraints / number of variables

[figure: CPU time as a function of R, peaking around a critical ratio]

Outline
1. Constraint Satisfaction Problems
2. Backtracking Search
3. Local Search
4. The Structure of Problems

Connected Components / Graphs

Let G := (V, E) be an undirected graph.
- A sequence p = (p_1, ..., p_n) ∈ V* of vertices is called a path of G if (p_i, p_{i+1}) ∈ E for i = 1, ..., n − 1.
- x, y ∈ V are called connected if there is a path p in G between x and y, i.e., with p_1 = x and p_n = y.
- G is called connected if all pairs of vertices are connected.
- A maximal connected subgraph G' := (V', E') of G is called a connected component of G.

[figure: example graph with vertices A–I and its connected components]

Connected Components / Hypergraphs

Let G := (V, E) be a hypergraph, i.e., E ⊆ P(V).
- A sequence p = (p_1, ..., p_n) ∈ E* of edges is called a path of G if p_i ∩ p_{i+1} ≠ ∅ for i = 1, ..., n − 1.
- x, y ∈ V are called connected if there is a path p in G between x and y, i.e., with x ∈ p_1 and y ∈ p_n.
- G is called connected if all pairs of vertices are connected.
- A maximal connected subgraph G' := (V', E') of G is called a connected component of G.

[figure: example hypergraph with vertices A–J and its connected components]

Independent Subproblems

Let (X, C) be a constraint satisfaction problem. The CSP (X', C') with X' ⊆ X and C' := {C ∈ C | var C ⊆ X'} is called the subproblem of (X, C) on the variables X'.

Two subproblems on the variables X_1 and X_2 are called independent if there is no joining constraint, i.e., no C ∈ C with var C ∩ X_1 ≠ ∅ and var C ∩ X_2 ≠ ∅ (and thus X_1 ∩ X_2 = ∅), i.e., if the respective constraint sub-hypergraphs are unconnected.

[figure: map of Australia and its constraint graph — Tasmania forms an independent subproblem]

Independent Subproblems

Consistent assignments of independent subproblems can be joined to consistent assignments of the whole problem.

The other way around: if a problem decomposes into independent subproblems, we can solve each one separately and join the subproblem solutions afterwards.
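
As an illustration (not from the slides), this decomposition amounts to computing the connected components of the constraint (hyper)graph, e.g., with a union-find structure over the variables; the (scope, predicate) representation is the same assumption as in the earlier sketches:

    def independent_subproblems(variables, constraints):
        """Split a CSP into independent subproblems, i.e., the connected
        components of its constraint (hyper)graph."""
        parent = {v: v for v in variables}

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]      # path halving
                v = parent[v]
            return v

        for scope, _ in constraints:               # variables sharing a constraint are connected
            first = scope[0]
            for other in scope[1:]:
                parent[find(other)] = find(first)

        components = {}
        for v in variables:
            components.setdefault(find(v), set()).add(v)
        return [(comp, [(s, p) for s, p in constraints if set(s) <= comp])
                for comp in components.values()]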

Tree Constraint Graphs

The next simple case: if the constraint graph is a tree, there is a linear-time algorithm to solve the CSP (a Python sketch follows below):

1. Choose any vertex as the root of the tree.
2. Order the variables from root to leaves s.t. parents precede their children in the ordering (topological ordering). Denote the variables by X_(1), X_(2), ..., X_(n).
3. For i = n down to 2: apply arc consistency to the edge (parent(X_(i)), X_(i)), i.e., possibly remove values from dom parent(X_(i)).
4. For i = 1 to n: choose a value for X_(i) consistent with the value already chosen for parent(X_(i)).
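
A sketch of this linear-time procedure in Python (own illustration; assumes binary constraints in the (scope, predicate) form used above and a connected, tree-shaped constraint graph):

    def solve_tree_csp(variables, domains, constraints, root):
        """Linear-time solver for a CSP whose constraint graph is a tree,
        following the four steps above."""
        # adjacency of the constraint tree, plus predicates for both edge directions
        neighbors = {v: set() for v in variables}
        cons = {}
        for (a, b), pred in constraints:
            neighbors[a].add(b); neighbors[b].add(a)
            cons[(a, b)] = pred
            cons[(b, a)] = lambda x, y, p=pred: p(y, x)

        # 1. + 2. topological order of the variables from the root
        order, parent, stack = [], {root: None}, [root]
        while stack:
            v = stack.pop()
            order.append(v)
            for w in neighbors[v]:
                if w not in parent:
                    parent[w] = v
                    stack.append(w)

        # 3. make parent -> child arcs consistent, leaves first
        dom = {v: set(domains[v]) for v in variables}
        for child in reversed(order[1:]):
            p = parent[child]
            dom[p] = {u for u in dom[p]
                      if any(cons[(p, child)](u, w) for w in dom[child])}
            if not dom[p]:
                return None                              # no solution

        # 4. assign values from the root downwards
        assignment = {}
        for v in order:
            p = parent[v]
            candidates = dom[v] if p is None else \
                {w for w in dom[v] if cons[(p, v)](assignment[p], w)}
            if not candidates:
                return None
            assignment[v] = next(iter(candidates))
        return assignment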

Tree Constraint Graphs

[figure: (a) a tree-structured constraint graph over the variables A–F; (b) a linear ordering A, B, C, D, E, F of the variables consistent with the tree]

General Constraint Graphs

Idea: try to reduce the problem to constraint trees.

Approach 1: cycle cutset — remove some vertices s.t. the remaining vertices form a tree.

For binary CSPs:
1. Find a subset S ⊆ X of variables s.t. the constraint graph of the subproblem on X \ S becomes a tree.
2. For each consistent assignment A on S:
   (a) remove from the domains of X \ S all values not consistent with A,
   (b) search for a solution of the remaining CSP. If there is one, an overall solution has been found.
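
A sketch of cutset conditioning along these lines (own illustration; it reuses the hypothetical consistent and solve_tree_csp helpers from the earlier sketches and assumes the residual constraint graph is a single tree containing the chosen root):

    from itertools import product

    def cutset_conditioning(variables, domains, constraints, cutset, root):
        """Enumerate assignments of the cutset, prune the remaining domains,
        and solve the residual tree-structured CSP."""
        rest = [v for v in variables if v not in cutset]
        for combo in product(*(domains[v] for v in cutset)):
            A = dict(zip(cutset, combo))
            if not consistent(A, constraints):
                continue
            # (a) prune values of the remaining variables inconsistent with A
            pruned = {v: {w for w in domains[v]
                          if consistent(dict(A, **{v: w}), constraints)}
                      for v in rest}
            if any(not pruned[v] for v in rest):
                continue
            # (b) solve the residual CSP on the tree-structured part
            tree_cons = [(s, p) for s, p in constraints if set(s) <= set(rest)]
            sol = solve_tree_csp(rest, pruned, tree_cons, root)
            if sol is not None:
                return {**A, **sol}
        return None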

General Constraint Graphs / Cycle Cutset

[figure: the Australia constraint graph; removing SA leaves a tree over WA, NT, Q, NSW, V (and T)]

The smaller the cutset, the better. Finding the smallest cutset is NP-hard.

General Constraint Graphs / Tree Decompositions

Approach 2: tree decomposition — decompose the constraint graph into overlapping subgraphs s.t. the overlapping structure forms a tree.

A tree decomposition (X_i)_{i=1,...,m} must satisfy:
1. each vertex appears in at least one subgraph,
2. each edge appears in at least one subgraph,
3. if a vertex appears in two subgraphs, it must appear in every subgraph along the path connecting those two subgraphs.

General Constraint Graphs / Tree Decompositions

[figure: tree decomposition of the Australia constraint graph into the overlapping subgraphs (WA, NT, SA), (NT, SA, Q), (SA, Q, NSW), (SA, NSW, V), and (T)]

General Constraint Graphs / Tree Decompositions

To solve the CSP: view each subgraph as a new variable and apply the algorithm for trees sketched earlier.

Example: (WA, SA, NT) = (r, b, g), (SA, NT, Q) = (b, g, r), ...

In general, many tree decompositions are possible. The treewidth of a tree decomposition is the size of the largest subgraph minus 1. The smaller the treewidth, the better. Finding the tree decomposition with minimal treewidth is NP-hard.

Summary

- CSPs describe problems by variables and constraints between them.
- Depth-first search assigning one variable at a time (called backtracking) can be used to solve CSPs.
- Heuristics for choosing the next variable to assign (MRV, degree heuristic) and for ordering the values (least constraining value) can accelerate backtracking.
- MRV can be implemented efficiently by keeping track of the remaining values for each unassigned variable (forward checking).
- More complex methods of constraint propagation (such as arc consistency) can be used to lower the risk of having to backtrack.
- Local search (min conflicts) can be used to solve CSPs quickly.
- Tree-structured CSPs can be solved in linear time.