Principles of Artificial Intelligence
1 Principles of Artificial Intelligence Vasant Honavar Artificial Intelligence Research Laboratory College of Information Sciences and Technology Bioinformatics and Genomics Graduate Program The Huck Institutes of the Life Sciences Pennsylvania State University http://vhonavar.ist.psu.edu http://faculty.ist.psu.edu/vhonavar
2 Heuristic functions revisited E.g., for the 8-puzzle: the average solution cost is about 22 steps, and exhaustive search to depth 22 generates about 3.1 x 10^10 states. A good heuristic function can reduce the search cost substantially.
3 Heuristic functions E.g., for the 8-puzzle, two commonly used heuristics: h1 = the number of misplaced tiles; for the example start state s, h1(s) = 8. h2 = the sum of the distances of the tiles (not counting the blank) from their desired positions (Manhattan distance); h2(s) = 18.
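Both heuristics are easy to compute. A minimal sketch in Python, assuming states are 9-tuples read row by row with 0 for the blank; the goal and start states below follow the standard textbook example, for which h1(s) = 8 and h2(s) = 18:

```python
# 8-puzzle heuristics; a state is a tuple of 9 ints read row by row, 0 = blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state, goal=GOAL):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal=GOAL):
    """Sum of Manhattan distances of the tiles from their goal squares."""
    pos = {v: (i // 3, i % 3) for i, v in enumerate(goal)}
    total = 0
    for i, v in enumerate(state):
        if v != 0:
            r, c = i // 3, i % 3
            gr, gc = pos[v]
            total += abs(r - gr) + abs(c - gc)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)  # the textbook example start state
```

Note that h2(s) >= h1(s) for every state s, since each misplaced tile contributes at least 1 to the Manhattan sum.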
4 How good is a heuristic? Effective branching factor b*. Let N = the number of nodes generated by a heuristic search algorithm (e.g., A*). The effective branching factor b* of the search is the branching factor that a uniform tree of depth d would need to have in order to contain N+1 nodes: N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d. It provides a good guide to the heuristic's overall usefulness; b* = 1 for A* search with a perfect heuristic.
5 Calculation of effective branching factor N = 20, d = 4. N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d = ((b*)^(d+1) - 1) / (b* - 1). Solve for b*: 21 = 1 + b* + (b*)^2 + (b*)^3 + (b*)^4, giving b* ≈ 1.76.
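The polynomial in b* has no closed-form solution in general, so b* is found numerically. A sketch using bisection (the search bracket [1, 100] and the tolerance are arbitrary assumptions; any root finder works):

```python
# Numerically solve N + 1 = 1 + b + b^2 + ... + b^d for the effective
# branching factor b*, by bisection on the monotone node-count function.
def effective_branching_factor(N, d, lo=1.0001, hi=100.0, tol=1e-9):
    def nodes(b):  # 1 + b + b^2 + ... + b^d
        return sum(b ** i for i in range(d + 1))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if nodes(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For N = 20, d = 4 this gives b* ≈ 1.76; for a search that generates only d nodes to reach depth d (a perfect heuristic), it gives b* ≈ 1.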
6 Dominance Given two admissible heuristics h1 and h2 such that h2(n) ≥ h1(n) for all n, h2 dominates h1; h2 is then more informative than h1. If h2 is more informative than h1, then every partial path expanded by A* using h2 is necessarily expanded by A* using h1. Typical search costs (average number of nodes expanded) using the two heuristics for the 8-puzzle (averaged over 100 instances for each depth): d=12: IDA* with no heuristic: 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes. d=24: IDA* with no heuristic: too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes.
7 Combining admissible heuristics h(n) = max{h1(n), h2(n), ..., hm(n)}. If each hi is admissible, then so is h, and h dominates every component: h*(n) ≥ h(n) ≥ hi(n) for all i.
8 Inventing admissible heuristics: Relaxation Admissible heuristics can be derived from the exact solution cost of a relaxed version of the problem that is easy to solve (without search). Relaxed 8-puzzle for h1: a tile can be exchanged with any other tile on the board. Relaxed 8-puzzle for h2: a tile can move to any adjacent square. The optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the real problem. A heuristic function based on the exact cost of the solution of the relaxed problem is guaranteed to be consistent.
9 Inventing admissible heuristics: sub-problems Admissible heuristics can be derived from the solution cost of a sub-problem of a given problem; this cost is a lower bound on the cost of the real problem. Pattern databases store the exact solution cost for every possible sub-problem instance. The complete heuristic is constructed using the patterns in the DB.
10 Inventing admissible heuristics: learning An admissible heuristic can be learned from experience: experience = solving lots of 8-puzzles. An inductive learning algorithm can be used to predict costs for other states that arise during search (we will revisit this when we consider learning agents).
11 Problem Reduction Representation Divide and conquer: recursively reduce the solution of a problem to solutions of sub-problems. A problem is solved when all of its sub-problems are solved.
12 Example Problem: solving an integral. Sub-problems: easier integrals to solve. Operators: rules of integral calculus and algebra. Primitive problems: problems whose solutions can be looked up or computed by executing a known procedure.
14 Problem reduction representation (PRR) A PRR problem is specified by a 3-tuple (G, O, P): G is a problem to be solved; O is a set of operators for decomposing problems into sub-problems through AND or OR decompositions; P is a set of primitive problems. Solution: an AND decomposition is solved when each of the sub-problems is solved; an OR decomposition is solved when at least one of the sub-problems is solved. A problem is unsolvable if it is neither a primitive problem nor can it be further decomposed. PRR is a generalization of the state space representation (why?)
15 Problem reduction representation Solving a problem in PRR reduces to searching an AND-OR graph: nodes correspond to problems; connectors (sets of arcs) correspond to AND or OR decompositions; connectors of arity k are called k-connectors (e.g., a 3-connector).
16 Example (figure): an AND-OR graph with primitive problems, an unsolvable problem, and 3 solutions.
17 Solution to a PRR problem A sub-graph s_q of an AND-OR graph is said to be a solution to a problem q if: s_q is rooted at q; each non-leaf node y in s_q has exactly one connector out of it that belongs to s_q; each leaf node in s_q is a primitive problem (i.e., a member of P). A problem q is said to be solvable if some sub-graph s_q of the AND-OR graph is a solution to q. Solving a problem G using a PRR (G, O, P) entails finding a sub-graph S_G of the corresponding AND-OR graph that is a solution of G.
18 Question How can we solve a PRR problem? Basic idea: generalize state-space search. How? Partial paths → sub-graphs of the PRR AND-OR graph. Expanding a node must comply with the semantics of AND and OR connectors; the termination test must comply with the definition of a solution.
19 Example: BFS (figure showing primitive problems, an unsolvable problem, and the search list). Exercise: solve the same problem using DFS.
20 Optimal (minimum cost) solution of AND-OR graphs Cost of an unsolvable problem: infinity. The costs of connectors and primitive problems are assumed to be strictly positive and bounded.
21 Optimal solution of a PRR problem
22 Branch and Bound Search for Optimal Solution List trace: (A); (B & C) (D); (E & C) (D); (F & C) (D) (E & G & H); (F & C) (E & G & H); (I & J) (F & C). Cost(A) = Cost(E & G & H) = 9.
23 Using Heuristics f(C & D) = Cost(A → C & D) + h(C) + h(D); f(E) = Cost(A → E) + h(E). Admissible heuristic function: h(n) ≤ h*(n), where h*(n) = cost of the cheapest solution of n.
24 AO*: Searching AND-OR graphs List trace: (A); (I & J) (C & D); (K & M & N) (C & D); (L & M & N)
25 Properties of AO* AO* is a generalization of A* for AND-OR graphs. AO*, like A*, is admissible if the heuristic function is admissible and the usual assumptions (finite branching factor, etc.) hold. AO*, like A*, is also optimal among the class of heuristic search algorithms that use an additive cost/evaluation function. Proofs left as exercises.
26 Pathless search problems Previously: systematic exploration of the search space, where the path to the goal is the solution to the problem. For some problems the path is irrelevant: pathless search problems, e.g., 8-queens. Different algorithms can be used, e.g., local search.
27 Local search and optimization Local search: maintain a single current state and move to neighboring state(s). Advantages: uses very little memory; often finds reasonable solutions in large or infinite state spaces; useful for pure optimization problems (find the best state according to some objective function).
28 Hill-climbing search Chooses locally best moves, with random choice to break ties. Terminates when a state corresponding to a locally optimal evaluation is reached. Does not look beyond the immediate neighbors of the current state. a.k.a. greedy local search.
29 Local search and optimization
30 Hill-climbing example 8-queens problem (complete-state formulation). Successor function: move a single queen to another square in the same column. Heuristic function h(n): the number of pairs of queens that are attacking each other (directly or indirectly).
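This formulation translates directly into code. A sketch of steepest-ascent hill climbing on it, assuming states are tuples giving each column's queen row; note the search stops at the first local optimum, which need not be a solution:

```python
import random

def attacking_pairs(state):
    """h(n): number of pairs of queens attacking each other
    (same row or same diagonal); one queen per column."""
    n = len(state)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if state[i] == state[j] or abs(state[i] - state[j]) == j - i)

def hill_climb(state):
    """Steepest-ascent hill climbing with random tie-breaking;
    returns (state, h) at a local optimum."""
    n = len(state)
    while True:
        h = attacking_pairs(state)
        best_h, best_moves = h, []
        for col in range(n):              # move one queen within its column
            for row in range(n):
                if row == state[col]:
                    continue
                neighbor = state[:col] + (row,) + state[col + 1:]
                nh = attacking_pairs(neighbor)
                if nh < best_h:
                    best_h, best_moves = nh, [neighbor]
                elif nh == best_h:
                    best_moves.append(neighbor)
        if best_h >= h:                   # no strictly better neighbor
            return state, h
        state = random.choice(best_moves)
```

Running it from a random start usually reduces h quickly, but often ends with h > 0, which is exactly the local-optimum problem discussed on the next slide.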
31 Drawbacks of hill-climbing search Ridge: a sequence of local maxima that is difficult for greedy algorithms to navigate. Plateau: an area of the state space where the evaluation function is flat.
32 Hill-climbing variations Stochastic hill-climbing: random selection among the uphill moves; the selection probability can vary with the steepness of the uphill move. First-choice hill-climbing: stochastic hill-climbing that generates successors randomly until a better one is found. Random-restart hill-climbing: tries to avoid getting stuck in local maxima.
33 Simulated annealing Escape local maxima by allowing occasional bad moves, but gradually decrease their size and frequency. Origin of the idea: annealing of metal. Bouncing ball analogy: shaking hard (= high temperature), then shaking less (= lowering the temperature). The probability P(s) of being in state s: P(s) ∝ e^(E(s)/kT), where E(s) is the energy at state s, given by the objective function applied to state s, k is a positive constant, and T is the temperature. If T decreases slowly enough, then simulated annealing is guaranteed to find a global optimum with probability approaching 1.
34 Simulated annealing Escape local maxima by allowing occasional bad moves. If T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1. Used for VLSI layout, airline scheduling, convex optimization, etc.
35 function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem; schedule, a mapping from time to temperature
  local variables: current, a node; next, a node; T, a temperature controlling the probability of downward steps
  current ← MAKE-NODE(INITIAL-STATE[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← VALUE[next] - VALUE[current]
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)
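The pseudocode translates almost line for line into Python. A sketch, assuming a maximization problem with a caller-supplied successors function and a hypothetical exponential cooling schedule:

```python
import math
import random

def simulated_annealing(initial, value, successors,
                        schedule=lambda t: 100 * 0.95 ** t):
    """Always accept uphill moves; accept downhill moves with
    probability e^(dE/T), where T falls according to `schedule`."""
    current = initial
    t = 0
    while True:
        t += 1
        T = schedule(t)
        if T < 1e-6:                  # treat a tiny temperature as T = 0
            return current
        nxt = random.choice(successors(current))
        dE = value(nxt) - value(current)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = nxt
```

For example, maximizing value(x) = -(x - 3)^2 over the integers with successors x - 1 and x + 1 behaves like a random walk at high T and like hill climbing once T is near zero.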
36 Local beam search Keep track of k states instead of one. Initially: k random states. Next: determine all successors of the k states; if any successor is a goal, finished; else select the k best from the successors and repeat. Major difference from random-restart search: information is shared among the k search threads. Can suffer from lack of diversity. Stochastic variant: choose k successors with probability proportional to the difference between each state's value and the value of the current state.
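A sketch of the loop above. The one-max toy problem used to exercise it (find the all-ones bit string by flipping bits) and the parameter choices are assumptions for illustration:

```python
import heapq
import random

def local_beam_search(k, random_state, successors, value, is_goal,
                      max_iters=1000):
    """Keep the k best states; expand all of them, stop on a goal."""
    beam = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        pool = []
        for s in beam:
            for nxt in successors(s):
                if is_goal(nxt):
                    return nxt
                pool.append(nxt)
        if not pool:
            break
        # information is shared: the k best of ALL successors survive
        beam = heapq.nlargest(k, pool, key=value)
    return max(beam, key=value)

# Toy problem: find the all-ones bit string of length 20 (one-max).
N = 20
random.seed(0)
goal = local_beam_search(
    k=5,
    random_state=lambda: tuple(random.randint(0, 1) for _ in range(N)),
    successors=lambda s: [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(N)],
    value=sum,
    is_goal=lambda s: sum(s) == N,
)
```

Because flipping a 0 always yields a strictly better successor, the beam's best value rises each iteration and the goal is reached well within the iteration budget.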
37 Genetic algorithms A successor state is generated by combining two parent states. Start with k randomly generated states (the population). A state is represented as a string over a finite alphabet (often a string of 0s and 1s). Evaluation function (fitness function): higher values for better states. Produce the next generation of states by selection, crossover, and mutation.
38 Genetic algorithms Variant of local beam search with genetic recombination
39 function GENETIC_ALGORITHM(population, FITNESS-FN) returns an individual
  inputs: population, a set of individuals; FITNESS-FN, a function that determines the quality of an individual
  repeat
    new_population ← empty set
    loop for i from 1 to SIZE(population) do
      x ← RANDOM_SELECTION(population, FITNESS-FN)
      y ← RANDOM_SELECTION(population, FITNESS-FN)
      child ← REPRODUCE(x, y)
      if (small random probability) then child ← MUTATE(child)
      add child to new_population
    population ← new_population
  until some individual is fit enough or enough time has elapsed
  return the best individual
40 Genetic Algorithms GAs emphasize combining information from good parents (crossover); EAs emphasize generating variants of current solutions (mutation). There are many variants, e.g., of reproduction models and operators.
41 Genetic algorithms The original genetic algorithm due to John Holland is now known as the simple genetic algorithm (SGA). Other GAs use different: representations, e.g., program parse trees (genetic programming); variants and combinations of genetic operators, e.g., evolutionary programs; selection mechanisms.
42 SGA summary Representation: binary strings. Recombination: N-point or uniform crossover. Mutation: bit-flipping with fixed probability. Parent selection: fitness-proportionate. Survivor selection: all children replace parents. Speciality: emphasis on crossover.
43 Representation State space ↔ genotype space = {0,1}^L, via encoding (representation) and decoding (inverse representation).
44 SGA 1. Select parents for the mating pool (size of mating pool = population size). 2. Shuffle the mating pool. 3. For each consecutive pair, apply crossover with probability p_c, otherwise copy the parents. 4. For each offspring, apply mutation (bit-flip with probability p_m, independently for each bit). 5. Replace the whole population with the resulting offspring.
45 SGA operators: 1-point crossover Choose a random point on the two parents; split the parents at this crossover point; create children by exchanging fragments. p_c is typically in the range (0.6, 0.9).
46 SGA operators: mutation Alter each gene independently with a probability p_m; p_m is called the mutation rate, typically between 1/pop_size and 1/chromosome_length.
47 SGA operators: Selection Main idea: better individuals have a higher chance of being selected, with chances proportional to fitness. Implementation: the roulette wheel technique. Assign to each individual a part of the roulette wheel and spin the wheel n times to select n individuals. E.g., fitness(A) = 3, fitness(B) = 1, fitness(C) = 2 give slices of 3/6 = 50%, 1/6 ≈ 17%, and 2/6 ≈ 33%.
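The roulette wheel can be sketched as a linear scan over cumulative fitness; the explicit loop below is the textbook version, while `random.choices` with weights is the standard-library shortcut:

```python
import random

def roulette_select(population, fitness, n):
    """Fitness-proportionate selection: each individual gets a slice of
    the wheel proportional to its fitness; spin the wheel n times."""
    total = sum(fitness(ind) for ind in population)
    chosen = []
    for _ in range(n):
        spin = random.uniform(0, total)   # where the wheel stops
        acc = 0.0
        for ind in population:
            acc += fitness(ind)
            if acc >= spin:
                chosen.append(ind)
                break
    return chosen
```

With the slide's fitness values (A: 3, B: 1, C: 2), A is selected about half the time over many spins.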
48 An example (source: Goldberg 89) Simple problem: max x^2 over {0,1,...,31}. GA approach: representation: binary code; population size: 4; 1-point crossover, bitwise mutation; roulette wheel selection; random initialization. We show one generation of the GA.
49 x^2 example: selection
50 x^2 example: crossover
51 x^2 example: mutation
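The five SGA steps on the max-x^2 problem can be sketched as a single function. The random population, the p_c and p_m values, and the epsilon guard against an all-zero population are assumptions, not Goldberg's original numbers:

```python
import random

def decode(bits):                     # genotype {0,1}^5 -> integer in 0..31
    return int("".join(map(str, bits)), 2)

def fitness(bits):                    # objective: maximize x^2
    return decode(bits) ** 2

def one_generation(pop, pc=0.9, pm=1 / 5):
    # 1. roulette wheel (fitness-proportionate) parent selection;
    #    the tiny epsilon guards against an all-zero population
    weights = [fitness(ind) + 1e-9 for ind in pop]
    mating_pool = random.choices(pop, weights=weights, k=len(pop))
    # 2-3. pair consecutive parents; 1-point crossover with probability pc
    offspring = []
    for a, b in zip(mating_pool[0::2], mating_pool[1::2]):
        if random.random() < pc:
            cut = random.randrange(1, len(a))
            a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
        offspring += [a, b]
    # 4. bit-flip mutation, independently for each bit with probability pm
    # 5. the offspring replace the whole population (generational model)
    return [tuple(1 - g if random.random() < pm else g for g in ind)
            for ind in offspring]

random.seed(42)
population = [tuple(random.randint(0, 1) for _ in range(5)) for _ in range(4)]
new_population = one_generation(population)
```

Iterating `one_generation` drives the population toward the string 11111, which decodes to 31 and has the maximum fitness 961.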
52 The simple GA Has been the subject of many (early) studies and is still often used as a benchmark for novel GAs. Has many shortcomings, e.g.: the representation is too restrictive; the selection mechanism is too simplistic.
53 Alternative Crossover Operators Performance with 1-point crossover depends on the order in which variables occur in the representation: it is more likely to keep together genes that are near each other, and can never keep together genes from opposite ends of the string. This is known as positional bias. It can be exploited if we can match the bias to the structure of our problem, but this is not usually the case.
54 n-point crossover Choose n random crossover points; split along those points; glue the parts, alternating between parents. A generalization of 1-point crossover.
55 Uniform crossover Assign 'heads' to one parent, 'tails' to the other. Flip a coin for each gene of the first child; make an inverse copy of the gene for the second child. Inheritance is independent of position.
56 Crossover OR mutation? It depends on the problem; in general, it is good to have both. Mutation generates variants of good states by making small changes (analogous to hill climbing); crossover generates states by combining features of good states. A mutation-only EA is possible; a crossover-only EA would not work.
57 Crossover OR mutation? (cont'd) Crossover helps explore the state space: it discovers promising areas in the search space (gaining information about the state space) and makes a big jump to an area somewhere in between two (parent) states. Mutation helps exploit the known parts of the state space: it makes small changes to already explored states so as to reach nearby states.
58 Crossover OR mutation? (cont'd) Only crossover can combine information from two explored states; only mutation can introduce new information (alleles, i.e., values of state variables). Crossover does not change the allele frequencies of the population (if we start with 50% 0s on the first bit in the population, we will have 50% 0s after performing n crossovers). To hit the optimum you often need a lucky mutation.
59 Tailoring genetic operators to representations Some problems naturally have integer variables; others take categorical values from a fixed set, e.g., {blue, green, yellow, pink}. n-point / uniform crossover operators still work. Extend bit-flipping mutation to allow creep, i.e., make it more likely to move to a similar value, or use random choice (especially for categorical variables).
60 Permutation Representations Ordering/sequencing problems: the task is (or can be solved by) arranging some objects in a certain order. Example: a sort algorithm, where the important thing is which elements occur before others (order). Example: the Travelling Salesman Problem (TSP), where the important thing is which elements occur next to each other (adjacency). Solutions are generally expressed as a permutation: if there are n variables, then the representation is a list of n integers, each of which occurs exactly once.
61 TSP example Problem: given n cities, find a complete tour with minimal length. Permutation representation / encoding: label the cities 1, 2, ..., n; one complete tour is one permutation (e.g., for n = 4, [1,2,3,4] and [3,4,2,1] are tours). The search space is huge: for 30 cities there are 30! possible tours.
62 Mutation operators for permutations Normal mutation operators lead to invalid states, e.g., bit-wise mutation: let gene i have value j; changing it to some other value k would mean that k shows up twice and j does not show up at all. We must change at least two values. The mutation parameter now reflects the probability that some operator is applied once to the state, rather than individually in each position.
63 Insert mutation for permutations Pick two values at random; move the second to follow the first, shifting the rest along to ensure that the result is a valid tour. Note that this preserves most of the order and the adjacency information.
64 Swap mutation for permutations Pick two values at random and swap their positions. Preserves most of the adjacency information (4 links broken), disrupts order more.
65 Inversion mutation for permutations Pick two values at random and then reverse the substring between them. Preserves most adjacency information (only breaks two links) but disrupts order information.
66 Scramble mutation for permutations Pick a subset of genes at random and randomly rearrange the alleles in those positions (note: the subset does not have to be contiguous).
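The four permutation mutations above can be sketched directly; each takes a permutation as a Python list and returns a new, valid permutation (the scramble variant below uses a contiguous segment for simplicity, although the subset need not be contiguous):

```python
import random

def insert_mutation(p):
    """Move one random value to follow another; preserves most order
    and adjacency information."""
    i, j = sorted(random.sample(range(len(p)), 2))
    return p[:i + 1] + [p[j]] + p[i + 1:j] + p[j + 1:]

def swap_mutation(p):
    """Swap two random positions; breaks up to 4 adjacency links."""
    i, j = random.sample(range(len(p)), 2)
    q = p[:]
    q[i], q[j] = q[j], q[i]
    return q

def inversion_mutation(p):
    """Reverse the substring between two random points; breaks only
    two adjacency links."""
    i, j = sorted(random.sample(range(len(p)), 2))
    return p[:i] + p[i:j + 1][::-1] + p[j + 1:]

def scramble_mutation(p):
    """Randomly rearrange the alleles in a random segment."""
    i, j = sorted(random.sample(range(len(p)), 2))
    middle = p[i:j + 1]
    random.shuffle(middle)
    return p[:i] + middle + p[j + 1:]
```

All four return a rearrangement of the input, so no value is lost or duplicated, which is exactly the validity property the slides require.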
67 Crossover operators for permutations Normal crossover operators will often lead to inadmissible solutions. Many specialized operators have been devised which focus on combining order or adjacency information from the two parents.
68 Order 1 crossover Idea: preserve the relative order in which elements occur. Informal procedure: 1. Choose an arbitrary part of the first parent. 2. Copy this part to the first child. 3. Copy the values that are not in the first part to the first child: starting right after the cut point of the copied part, using the order of the second parent and wrapping around at the end. 4. Analogous for the second child, with parent roles reversed.
69 Order 1 crossover example Copy a randomly selected segment from the first parent, then copy the rest from the second parent in order: 1,4,9,3,7,8,2,6,5
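A sketch of the four-step procedure for one child (the second child is the same call with the parents swapped). The worked parent pair below is a common textbook example, not the one in the slide's figure:

```python
import random

def order1_crossover(p1, p2, i=None, j=None):
    """Order 1 crossover: copy p1[i..j] into the child, then fill the
    remaining slots with p2's values in p2's order, starting just after
    the copied segment and wrapping around."""
    n = len(p1)
    if i is None or j is None:
        i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]          # steps 1-2: copy the segment
    used = set(p1[i:j + 1])
    # step 3: remaining values, in p2's order, starting after the cut point
    remaining = []
    for k in range(n):
        v = p2[(j + 1 + k) % n]
        if v not in used:
            remaining.append(v)
    idx = 0
    for k in range(n):
        pos = (j + 1 + k) % n
        if child[pos] is None:
            child[pos] = remaining[idx]
            idx += 1
    return child
```

With parents [1,2,3,4,5,6,7,8,9] and [9,3,7,8,2,6,5,1,4] and segment positions 3..6, the copied segment is 4,5,6,7 and the child is [3,8,2,4,5,6,7,1,9].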
70 Population Models SGA uses a generational model: each individual survives for exactly one generation, and the entire set of parents is replaced by the offspring. At the other end of the scale are steady-state models: one offspring is generated per generation, and one member of the population is replaced. Generation gap: the proportion of the population replaced; 1.0 for SGA, 1/pop_size for a steady-state GA.
71 Fitness-Based Competition Selection can occur in two places: selection from the current generation to take part in mating (parent selection), and selection from parents + offspring to go into the next generation (survivor selection). Selection operators work on whole individuals, i.e., they are representation-independent.
72 Implementation example: SGA Expected number of copies of an individual i: E(n_i) = μ f(i) / f̄, where μ = population size, f(i) = fitness of i, and f̄ = average fitness in the population. Roulette wheel algorithm: given a probability distribution, spin a 1-armed wheel n times to make n selections.
73 Fitness-Proportionate Selection Problems include: one highly fit member can rapidly take over if the rest of the population is much less fit (premature convergence); at the end of runs, when fitness values are similar, selection pressure is lost.
74 Rank-Based Selection Attempts to avoid the problems of FPS by basing selection probabilities on relative rather than absolute fitness. Rank the population according to fitness and then base selection probabilities on rank, where the fittest has rank μ and the worst rank 1. This imposes a sorting overhead on the algorithm, but it is usually negligible compared to the fitness evaluation time.
75 Tournament Selection All the methods above rely on global population statistics, which could be a bottleneck, especially on parallel machines, and on the presence of an external fitness function, which might not exist (e.g., when evolving game players). Informal procedure: pick k members at random, then select the best of these; repeat to select more individuals.
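The procedure is two lines of code; note that it only needs pairwise comparisons within the sampled contestants, never global population statistics:

```python
import random

def tournament_select(population, fitness, k=3):
    """Pick k members uniformly at random and return the best;
    larger k means stronger selection pressure."""
    contestants = random.sample(population, k)
    return max(contestants, key=fitness)
```

With k equal to the population size the best individual always wins; with small k, weak individuals occasionally survive, which preserves diversity.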
76 Further variants Genetic programming: candidate solutions are represented by parse trees of programs, e.g., (+ 2 3 (* 2 5)). Genetic operators manipulate the population, guided by a fitness function.
77 Problem solving as constraint satisfaction Focus: a subclass of search problems with special structure
78 Problem solving as constraint satisfaction Constraint satisfaction problems (CSP); properties of CSPs; backtracking for CSPs; local search for CSPs; problem structure and decomposition; constraint propagation
79 Constraint satisfaction problem Scene labeling problem (Waltz, 1975). Assumptions: trihedral vertices, no shadows, no cracks, general viewing position. Vertex types: L, T, Y, arrow. Line types: +, -, →, ←. Given a 2-d drawing of a 3-d scene, label the lines so as to make explicit the 3-d interpretation of the scene.
80 Constraint A constraint is a (typically logical) relationship between variables in a domain. Constraints are declarative: they specify what properties must hold without specifying how to enforce them. Each constraint applies to one or more variables. Constraints are rarely independent of each other. The solution is independent of the order in which the constraints are enforced.
81 Constraint satisfaction problems What is a CSP? A finite set of variables V1, V2, ..., Vn; a finite set of constraints C1, C2, ..., Cm; a non-empty domain of possible values for each variable, D_V1, D_V2, ..., D_Vn. Each constraint Ci rules out certain assignments of values to variables, e.g., V1 ≠ V2. A state is defined as an assignment of values to some or all variables. Consistent assignment: an assignment that does not violate the constraints.
82 Constraint satisfaction problems An assignment is complete when every variable has been assigned a value. A solution to a CSP is a complete assignment that satisfies all constraints. Some CSPs require a solution that maximizes an objective function. CSP applications: scheduling, temporal reasoning, design, floor planning, map coloring, 3-d interpretation of line drawings.
83 CSP example: map coloring Variables: WA, NT, Q, NSW, V, SA, T. Domains: Di = {red, green, blue}. Constraints: adjacent regions must have different colors, e.g., WA ≠ NT, i.e., (WA,NT) ∈ {(red,green), (red,blue), (green,red), ...}.
84 CSP example: map coloring Solutions are assignments satisfying all constraints, e.g., {WA=red, NT=green, Q=red, NSW=green, V=red, SA=blue, T=green}
85 Constraint graph CSP solvers exploit special properties of CSPs (more on this later). CSP solvers benefit from: a standard representation; generic goal and successor functions; generic heuristics. Constraint graph: nodes are variables, edges show constraints. The graph can be used to simplify search; e.g., Tasmania is an independent sub-problem.
86 Map coloring Using 3 colors (R, G, & B), color the US map such that no two adjacent states have the same color. Variables? Domains? Constraints?
87 Map coloring Constraint graph representation: using 3 colors (R, G, & B), color the US map such that no two adjacent states have the same color. (Figure: constraint graph over AZ, UT, CO, NM, NE, WY, KS, OK, TX, LA, AR, each with domain {red, green, blue}.)
88 Example: Resource Allocation (Figure: four tasks with candidate resource sets {a, b, c}, {a, b}, {a, c, d}, {b, c, d}.) What is the CSP formulation?
89 Resource Allocation Constraint graph representation (Figure: constraint graph over the same four tasks with domains {a, b, c}, {a, b}, {a, c, d}, {b, c, d}.)
90 Example: Product Configuration (train, elevator, car, etc.) Given: components and their attributes (variables); the domain covered by each characteristic (values); relations among the components (constraints); a set of required functionalities (more constraints). Find: a product configuration, i.e., an acceptable combination of components that realizes the required functionalities.
91 Example: cryptarithmetic
92 Varieties of CSPs Discrete variables: finite domains of size d ⇒ O(d^n) possible assignments, e.g., Boolean CSPs, including Boolean satisfiability (NP-complete); infinite domains (integers, strings, etc.), e.g., job scheduling, where variables are start/end days for each job; these need a constraint language to express constraints, e.g., StartJob1 + 5 ≤ StartJob3 (linear constraints are solvable, nonlinear are undecidable). Continuous variables, e.g., start/end times for Hubble Telescope observations: linear constraints are solvable in polynomial time by LP methods.
93 Varieties of constraints Unary constraints involve a single variable, e.g., SA ≠ green. Binary constraints involve pairs of variables, e.g., SA ≠ WA. Higher-order constraints involve 3 or more variables, e.g., cryptarithmetic column constraints. Preferences (soft constraints), e.g., "red is better than green", are often represented by a cost for each variable assignment, giving constrained optimization problems.
94 CSP as a standard search problem A CSP can easily be expressed as a standard search problem. Initial state: the empty assignment {}. Successor function: assign a value to an unassigned variable, provided there is no conflict. Goal test: the current assignment is complete. Arc cost: constant cost for each step.
95 CSP as a standard search problem A solution is found at depth n (if there are n variables), hence depth-first search can be used; the path is irrelevant. The branching factor b at the top level is nd, and b = (n - l)d at depth l, hence n!·d^n leaves, even though there are only d^n complete assignments.
96 Commutativity CSPs are commutative: the order of any given set of actions has no effect on the outcome. Example: choose colors for Australian territories one at a time; [WA=red then NT=green] is the same as [NT=green then WA=red]. All CSP search algorithms therefore consider a single variable assignment at a time, so there are d^n leaves.
97 Backtracking search Depth-first search that chooses values for one variable at a time and backtracks when a variable has no legal values left to assign. General performance is not good (see table p. 143 of text).
98 Backtracking search
function BACKTRACKING-SEARCH(csp) returns a solution or failure
  return RECURSIVE-BACKTRACKING({}, csp)
function RECURSIVE-BACKTRACKING(assignment, csp) returns a solution or failure
  if assignment is complete then return assignment
  var ← SELECT-UNASSIGNED-VARIABLE(VARIABLES[csp], assignment, csp)
  for each value in ORDER-DOMAIN-VALUES(var, assignment, csp) do
    if value is consistent with assignment according to CONSTRAINTS[csp] then
      add {var = value} to assignment
      result ← RECURSIVE-BACKTRACKING(assignment, csp)
      if result ≠ failure then return result
      remove {var = value} from assignment
  return failure
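A sketch of RECURSIVE-BACKTRACKING for binary CSPs, taking variables in the listed order and values in domain order (no MRV/LCV ordering yet); the map-coloring instance below follows the Australia example:

```python
def backtracking_search(variables, domains, constraints):
    """Recursive backtracking for a binary CSP. `constraints` maps a
    pair of variables (x, y) to a predicate ok(vx, vy); missing pairs
    are unconstrained."""
    def consistent(var, value, assignment):
        for other, val in assignment.items():
            pred = constraints.get((var, other))
            if pred and not pred(value, val):
                return False
            pred = constraints.get((other, var))
            if pred and not pred(val, value):
                return False
        return True

    def recurse(assignment):
        if len(assignment) == len(variables):
            return assignment
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            if consistent(var, value, assignment):
                assignment[var] = value          # add {var = value}
                result = recurse(assignment)
                if result is not None:
                    return result
                del assignment[var]              # remove {var = value}
        return None

    return recurse({})

# Australia map coloring
neighbors = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
             ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"),
             ("NSW", "V")]
variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}
ne = lambda a, b: a != b
constraints = {pair: ne for pair in neighbors}
solution = backtracking_search(variables, domains, constraints)
```

Because the map is 3-colorable, the search returns a complete assignment in which every pair of neighboring regions differs in color.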
99 Backtracking example
100 Backtracking example
101 Backtracking example
102 Example
103 Subtrees have similar topologies Consider 4 variables W, X, Y, Z with domains of cardinality 4, 3, 2 and 1 respectively. (Figure: search tree branching on W (w1..w4), then X (x1..x3), then Y (y1, y2), with similar subtrees.)
104 Variable ordering and search space size Variable ordering W, X, Y, Z: search space size = (4)(3)(2)(1) + (4)(3)(2) + (4)(3) + (4) = 24 + 24 + 12 + 4 = 64.
105 Variable ordering and search space size Variable ordering Z, Y, X, W: search space size = (4)(3)(2)(1) + (3)(2)(1) + (2)(1) + 1 = 24 + 6 + 2 + 1 = 33.
106 Backtracking search Standard backtracking fails to exploit special properties of CSPs. Subtrees have similar topologies; the search space has minimal size under a certain ordering of variables (most constrained to least constrained).
107 Backtracking search Subtrees have similar topologies: if we find that an assignment of values to a pair of variables is incompatible, we should rule out all assignments that include the incompatible partial assignment. The search space has minimal size under a certain ordering of variables (most constrained to least constrained): choose an ordering of variables that minimizes the search space size.
108 Improving backtracking efficiency Which variable should be assigned next? In what order should its values be tried? Can we detect inevitable failure early? Can we take advantage of problem structure?
109 Minimum remaining values var ← SELECT-UNASSIGNED-VARIABLE(VARIABLES[csp], assignment, csp), a.k.a. the most constrained variable heuristic. Rule: choose the variable with the fewest legal values. Which variable shall we try first?
110 Degree heuristic Rule: select the variable that is involved in the largest number of constraints on other unassigned variables. The degree heuristic is very useful as a tie breaker. In what order should its values be tried?
111 Least constraining value Least constraining value heuristic. Rule: given a variable, choose the least constraining value, i.e., the one that leaves the maximum flexibility for subsequent variable assignments.
112 Forward checking Can we detect inevitable failure early, and avoid it later? Forward checking idea: keep track of the remaining legal values for unassigned variables; terminate search when any variable has no legal values.
113 Forward checking Assign {WA = red}. Effects on other variables connected by constraints to WA: NT can no longer be red; SA can no longer be red.
114 Forward checking Assign {Q = green}. Effects on other variables connected by constraints to Q: NT can no longer be green; NSW can no longer be green; SA can no longer be green.
115 Forward checking If V is assigned blue: effects on other variables connected by constraints to V: SA becomes empty; NSW can no longer be blue. FC has detected that the partial assignment is inconsistent with the constraints, and backtracking can occur.
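The pruning step can be sketched as a pure function: after a tentative assignment, filter the domains of constrained neighbors and report failure if any domain is wiped out. The three-region instance below is a fragment of the Australia example:

```python
def forward_check(var, value, domains, constraints):
    """After tentatively assigning var=value, prune values from the
    domains of variables constrained with var; return the pruned
    domains, or None if some domain becomes empty."""
    new_domains = {v: list(dom) for v, dom in domains.items()}
    new_domains[var] = [value]
    for other in domains:
        if other == var:
            continue
        pred = constraints.get((other, var))
        if pred:
            new_domains[other] = [x for x in new_domains[other]
                                  if pred(x, value)]
            if not new_domains[other]:
                return None           # wipe-out: inconsistent assignment
    return new_domains

# Fragment of the Australia map: WA, NT, SA are mutually adjacent.
ne = lambda a, b: a != b
edges = [("WA", "NT"), ("WA", "SA"), ("NT", "SA")]
constraints = {}
for a, b in edges:
    constraints[(a, b)] = constraints[(b, a)] = ne
domains = {v: ["red", "green", "blue"] for v in ["WA", "NT", "SA"]}
pruned = forward_check("WA", "red", domains, constraints)
```

Here {WA = red} removes red from NT and SA, mirroring the slide; if NT's domain were already just {red}, the same call would return None and trigger backtracking.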
116 4-Queens Revisited Importance of representation. N-queens formulation 1: variables V1..V4, one per row; domains: the 4 column positions; size of CSP: 4^4 = 2^8. N-queens formulation 2: variables V11..V44, one per cell; domains: {0,1}; size of CSP: 2^16.
117 Example: 4-Queens Problem Variables: columns X1..X4; values: rows. X1 {1,2,3,4} X2 {1,2,3,4} X3 {1,2,3,4} X4 {1,2,3,4}
118 Example: 4-Queens Problem X1 {1,2,3,4} X2 {1,2,3,4} X3 {1,2,3,4} X4 {1,2,3,4}
119 Example: 4-Queens Problem X1 {1,2,3,4} X2 {,,3,4} X3 {,2,,4} X4 {,2,3, }
120 Example: 4-Queens Problem X1 {1,2,3,4} X2 {,,3,4} X3 {,2,,4} X4 {,2,3, }
121 Example: 4-Queens Problem X1 {1,2,3,4} X2 {,,3,4} X3 {,,, } X4 {,2,3, } Note X3 has an empty domain. Cannot backtrack to X2 (X2=4 is ruled out); must backtrack to X1.
122 Example: 4-Queens Problem X1 {,2,3,4} X2 {1,2,3,4} X3 {1,2,3,4} X4 {1,2,3,4}
123 Example: 4-Queens Problem X1 {,2,3,4} X2 {,,,4} X3 {1,,3, } X4 {1,,3,4}
124 Example: 4-Queens Problem X1 {,2,3,4} X2 {,,,4} X3 {1,,3, } X4 {1,,3,4}
125 Example: 4-Queens Problem X1 {,2,3,4} X2 {,,,4} X3 {1,,, } X4 {1,,3, }
126 Example: 4-Queens Problem X1 {,2,3,4} X2 {,,,4} X3 {1,,, } X4 {1,,3, }
127 Example: 4-Queens Problem X1 {,2,3,4} X2 {,,,4} X3 {1,,, } X4 {,,3, }
128 Example: 4-Queens Problem X1 {,2,3,4} X2 {,,,4} X3 {1,,, } X4 {,,3, }
129 Constraint propagation Solving CSPs with a combination of heuristics plus forward checking is more efficient than either approach alone. Constraint propagation repeatedly enforces constraints locally.
130 Arc consistency X → Y is consistent iff for every value x of X there is some allowed y. SA → NSW is consistent iff SA = blue and NSW = red.
131 Arc consistency X → Y is consistent iff for every value x of X there is some allowed y. NSW → SA is consistent for NSW = red and SA = blue, but for NSW = blue there is no allowed SA value. The arc can be made consistent by removing blue from NSW.
132 Arc consistency The arc can be made consistent by removing blue from NSW. RECHECK neighbours! Remove red from V.
133 Arc consistency The arc can be made consistent by removing blue from NSW; recheck neighbours and remove red from V. Arc consistency detects failure earlier than FC. It can be run as a preprocessor or after each assignment, repeated until no inconsistency remains.
134 Arc consistency algorithm
function AC-3(csp) returns the CSP, possibly with reduced domains
  inputs: csp, a binary CSP with variables {X1, X2, ..., Xn}
  local variables: queue, a queue of arcs, initially all the arcs in csp
  while queue is not empty do
    (Xi, Xj) ← REMOVE-FIRST(queue)
    if REMOVE-INCONSISTENT-VALUES(Xi, Xj) then
      for each Xk in NEIGHBORS[Xi] - {Xj} do
        add (Xk, Xi) to queue
function REMOVE-INCONSISTENT-VALUES(Xi, Xj) returns true iff we remove a value
  removed ← false
  for each x in DOMAIN[Xi] do
    if no value y in DOMAIN[Xj] allows (x, y) to satisfy the constraints between Xi and Xj
      then delete x from DOMAIN[Xi]; removed ← true
  return removed
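A sketch of AC-3 over the same constraint encoding used earlier (a dict from ordered variable pairs to predicates); when a domain shrinks, the arcs pointing back into that variable are re-queued:

```python
from collections import deque

def revise(domains, xi, xj, pred):
    """Remove values of Xi that have no supporting value in Xj."""
    removed = False
    for x in list(domains[xi]):
        if not any(pred(x, y) for y in domains[xj]):
            domains[xi].remove(x)
            removed = True
    return removed

def ac3(variables, domains, constraints):
    """Make every arc (Xi, Xj) consistent; return the reduced domains,
    or None if some domain is wiped out. `constraints` maps an ordered
    pair (Xi, Xj) to a predicate ok(vi, vj)."""
    domains = {v: list(dom) for v, dom in domains.items()}
    queue = deque(constraints.keys())          # initially all arcs
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, constraints[(xi, xj)]):
            if not domains[xi]:
                return None                    # domain wiped out
            for (a, b) in constraints:
                if b == xi and a != xj:        # re-check Xi's other neighbors
                    queue.append((a, b))
    return domains
```

For example, with X, Y ∈ {1, 2, 3} and the constraint X < Y, AC-3 prunes the domains to X ∈ {1, 2} and Y ∈ {2, 3}.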
135 K-consistency Arc consistency does not detect all inconsistencies: the partial assignment {WA=red, NSW=red} is inconsistent. Stronger forms of propagation can be defined using the notion of k-consistency. A CSP is k-consistent if, for any set of k-1 variables and any consistent assignment to those variables, a consistent value can always be assigned to any kth variable. E.g. 1-consistency (node consistency), 2-consistency (arc consistency), 3-consistency (path consistency).
136 K-consistency A graph is strongly k-consistent if it is k-consistent and also (k-1)-consistent, (k-2)-consistent, all the way down to 1-consistent. This is ideal, since a solution can then be found in time O(nd) instead of O(n^2 d^3). YET no free lunch: any algorithm for establishing n-consistency must take time exponential in n in the worst case.
137 Further variants Intelligent backtracking. The standard form is chronological backtracking, i.e. try a different value for the preceding variable. More intelligent: backtrack to the conflict set, the set of variables that caused the failure, or the set of previously assigned variables that are connected to X by constraints. Backjumping moves back to the most recent element of the conflict set. Forward checking can be used to determine the conflict set.
138 Local search for CSP Use a complete-state representation. For CSPs: allow states with unsatisfied constraints; operators reassign variable values. Variable selection: randomly select any conflicted variable. Value selection: min-conflicts heuristic, i.e. select the new value that results in the minimum number of conflicts with the other variables.
139 Local search for CSP
function MIN-CONFLICTS(csp, max_steps) returns a solution or failure
  inputs: csp, a constraint satisfaction problem
          max_steps, the number of steps allowed before giving up
  current <- an initial complete assignment for csp
  for i = 1 to max_steps do
    if current is a solution for csp then return current
    var <- a randomly chosen, conflicted variable from VARIABLES[csp]
    value <- the value v for var that minimizes CONFLICTS(var, v, current, csp)
    set var = value in current
  return failure
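As a concrete sketch of MIN-CONFLICTS, here is a hedged Python version specialized to n-queens, with one queen per column so the variables are the row positions; function and parameter names are illustrative, not from the slides.

```python
import random

def min_conflicts_queens(n, max_steps=100_000, seed=0):
    """Min-conflicts local search for n-queens.
    State: rows[c] = row of the queen in column c.
    Returns a conflict-free placement, or None if max_steps is exhausted."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]  # random complete assignment

    def conflicts(col, row):
        # number of other queens attacking square (col, row):
        # same row, or same diagonal
        return sum(1 for c in range(n) if c != col and
                   (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not conflicted:
            return rows                          # current state is a solution
        col = rng.choice(conflicted)             # random conflicted variable
        # value minimizing conflicts, ties broken randomly
        rows[col] = min(range(n),
                        key=lambda r: (conflicts(col, r), rng.random()))
    return None                                  # gave up
```

For example, `min_conflicts_queens(8)` typically returns a valid 8-queens placement after a handful of reassignments, which matches the slides' point that the number of steps is roughly independent of problem size.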
140 Min-conflicts example 1 [Figure: use of the min-conflicts heuristic in hill-climbing; successive states have h=5, h=3, h=1.]
141 Min-conflicts example 2 A two-stage solution of an 8-queens problem using the min-conflicts heuristic. At each stage a queen is chosen for reassignment within its column. The algorithm moves the queen to the min-conflicts square, breaking ties randomly.
142 Advantages of local search The runtime of min-conflicts is roughly independent of problem size: it solves even the million-queens problem in roughly 50 steps. Local search can be used in an online setting; backtracking search requires more time.
143 Problem structure How can the problem structure help to find a solution quickly? Identify relatively independent subproblems: coloring Tasmania and coloring the mainland are independent subproblems, identifiable as connected components of the constraint graph. This improves performance.
144 Problem structure Suppose each subproblem has c variables out of a total of n. The worst-case solution cost is O((n/c) d^c), i.e. linear in n, instead of O(d^n), exponential in n. E.g. n=80, c=20, d=2: 2^80 nodes is about 4 billion years at 10 million nodes/sec, whereas 4 * 2^20 nodes is about 0.4 seconds at 10 million nodes/sec.
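The arithmetic behind these figures can be checked directly. The sketch below assumes the textbook's rate of 10 million nodes per second; the helper name is purely illustrative.

```python
def search_cost_seconds(nodes, rate=10_000_000):
    """Time to examine `nodes` search nodes at `rate` nodes/second."""
    return nodes / rate

# Undecomposed: d**n = 2**80 nodes
years = search_cost_seconds(2**80) / (365.25 * 24 * 3600)
# Decomposed into n/c = 4 subproblems: (n/c) * d**c = 4 * 2**20 nodes
secs = search_cost_seconds(4 * 2**20)
print(f"{years:.1e} years vs {secs:.2f} seconds")  # roughly 4 billion years vs 0.4 s
```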
145 Tree-structured CSPs Theorem: if the constraint graph has no loops, then the CSP can be solved in O(nd^2) time. Compare with the general CSP, where the worst case is O(d^n).
146 Tree-structured CSPs In many cases the subproblems of a CSP are connected as a tree. Any tree-structured CSP can be solved in time linear in the number of variables: Choose a variable as root; order the variables from root to leaves such that every node's parent precedes it in the ordering (label the variables X1 to Xn). For j from n down to 2, apply REMOVE-INCONSISTENT-VALUES(Parent(Xj), Xj). For j from 1 to n, assign Xj consistently with Parent(Xj).
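The two-pass procedure can be sketched as follows. For simplicity this assumes every tree edge carries the same binary constraint, given as a `satisfies(parent_value, child_value)` predicate; all names here are illustrative, not from the slides.

```python
def solve_tree_csp(order, parent, domains, satisfies):
    """Solve a tree-structured binary CSP in two passes.
    order:     variables listed root-first, every parent before its children
    parent:    {var: its parent in the tree} (the root has no entry)
    domains:   {var: set of candidate values}
    satisfies: satisfies(x_parent, x_child) -> bool for a tree edge
    Returns an assignment dict, or None if no solution exists."""
    doms = {v: set(domains[v]) for v in order}
    # Backward pass: make each arc Parent(Xj) -> Xj consistent, leaves first
    for Xj in reversed(order[1:]):
        p = parent[Xj]
        doms[p] = {x for x in doms[p]
                   if any(satisfies(x, y) for y in doms[Xj])}
        if not doms[p]:
            return None                 # domain wiped out: no solution
    # Forward pass: assign each variable consistently with its parent;
    # arc consistency guarantees a supporting value always exists
    assignment = {}
    for Xj in order:
        if Xj not in parent:
            assignment[Xj] = next(iter(doms[Xj]))
        else:
            px = assignment[parent[Xj]]
            assignment[Xj] = next(y for y in doms[Xj] if satisfies(px, y))
    return assignment
```

For example, a chain A-B-C with domains {1,2} and a "not equal" constraint on each edge is solved without any backtracking, while a chain with singleton equal domains is reported unsolvable during the backward pass.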
147 Nearly tree- structured CSPs Can more general constraint graphs be reduced to trees? Two approaches: Remove certain nodes Collapse certain nodes
148 Nearly tree-structured CSPs Cutset conditioning. Idea: assign values to some variables so that the remaining variables form a tree. Assume we assign {SA=x} (SA is a cycle cutset) and remove from the other variables any values inconsistent with x. The selected value for SA could be the wrong one, so we have to try all of them.
149 Nearly tree-structured CSPs Feasible if the cycle cutset is small. Finding the smallest cycle cutset is NP-hard, but approximation algorithms exist.
150 Nearly tree-structured CSPs Tree decomposition of the constraint graph into a set of connected subproblems. Solve each subproblem independently, then combine the resulting solutions. Necessary conditions: every variable appears in at least one of the subproblems; if two variables are connected in the original problem, they must appear together in at least one subproblem; if a variable appears in two subproblems, it must appear in every subproblem along the path connecting them.
151 Summary CSP is a special kind of search problem: states are defined by values of a fixed set of variables, and the goal test is defined by constraints on variable values. CSP solvers exploit the special properties of CSPs. Variable-ordering and value-selection heuristics help significantly. Forward checking prevents assignments that lead to failure. Tree-structured CSPs can be solved in linear time. Iterative min-conflicts is usually effective in practice.
Reading: Chapter 6 (3rd ed.); Chapter 5 (2nd ed.). For next week: Thursday: Chapter 8.
More informationInformed Search and Exploration
Ch. 04 p.1/39 Informed Search and Exploration Chapter 4 Ch. 04 p.2/39 Outline Best-first search A search Heuristics IDA search Hill-climbing Simulated annealing Ch. 04 p.3/39 Review: Tree search function
More information