Heuristic Search: Intro

Blind search can be applied to all problems, but it is inefficient: it does not incorporate knowledge about the problem to guide the search. Such knowledge can be used when deciding which node to expand next.

Heuristic search (aka informed search, or best-first search) uses problem knowledge to select the state that appears to be closest to the goal state as the next one to expand. This should produce a solution faster than systematically expanding all nodes until a goal is stumbled upon.

The general state evaluation function:

    f(n) = h(n) + g(n)

- f(n) is the estimated cost of getting from the start state to the goal state via state n
- g(n) is the actual cost of getting from the start state to n found so far
- h(n) is the heuristic function: the estimated cost of going from n to the goal state
  - This function is where knowledge about the problem comes into play
  - h(goal) = 0

f*(n) = h*(n) + g*(n) represents the minimal cost of getting from the start state to the goal state via state n when all paths through n are considered:

- h*(n) is the actual cost to the goal from n
- g*(n) is the cost of the best path to n from the initial state

There are categories of heuristic search, based on variations in f(n).
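
To make the notation concrete, here is a minimal Python sketch of a search node carrying g, h, and f. The Node class and its field names are illustrative assumptions, not part of the original notes; the sketches later in these notes reuse it.

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Node:
        """A search node: wraps a state with bookkeeping for f(n) = g(n) + h(n)."""
        state: Any
        g: float = 0.0                   # actual cost from the start state to this node
        h: float = 0.0                   # heuristic estimate from this state to the goal
        parent: Optional["Node"] = None  # back-pointer for recovering the solution path

        @property
        def f(self) -> float:
            # Estimated cost of the cheapest solution through this node.
            return self.g + self.h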

Heuristic Search: Intro (2)

General algorithm (tree):

function GIS-TREE (problem, evalfn) returns solution, or failure {
    node <- new(node)
    node.state <- problem.initial-state
    node.cost <- apply(evalfn, node)
    frontier <- new(priorityqueue)
    INSERT(node, frontier)
    loop {
        if (EMPTY?(frontier)) return failure
        node <- POP(frontier)
        if (problem.goal-test(node.state)) return SOLUTION(node)
        for (each action in problem.actions(node.state)) {
            child <- CHILD-NODE(problem, node, action)
            frontier <- INSERT(child, frontier)
        }
    }
}

Heuristic Search: Intro (3)

General algorithm (graph):

function GIS-GRAPH (problem, evalfn) returns solution, or failure {
    node <- new(node)
    node.state <- problem.initial-state
    node.cost <- apply(evalfn, node)
    frontier <- new(priorityqueue)
    INSERT(node, frontier)
    explored <- new(set)
    loop {
        if (EMPTY?(frontier)) return failure
        node <- POP(frontier)
        if (problem.goal-test(node.state)) return SOLUTION(node)
        ADD(node.STATE, explored)
        for (each action in problem.actions(node.state)) {
            child <- CHILD-NODE(problem, node, action)
            if ((child.STATE not in explored) AND (!STATE-FOUND(child.STATE, frontier))) {
                frontier <- INSERT(child, frontier)
            }
            else if (STATE-FOUND(child.STATE, frontier)) {
                existing <- RETRIEVE-NODE(child.STATE, frontier)
                if (child.cost < existing.cost)
                    frontier <- REPLACE(existing, child, frontier)
            }
        }
    }
}
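
The pseudocode above translates fairly directly into Python. The sketch below is a minimal, assumed implementation of GIS-GRAPH using heapq and the Node class sketched earlier; the Problem interface (initial_state, is_goal, actions, result, step_cost, h) is hypothetical and would need to match your own problem class. Frontier replacement is done by lazy deletion: the cheaper node is pushed and the stale entry is skipped when popped.

    import heapq
    from itertools import count

    def best_first_graph_search(problem, evalfn):
        """Generic informed graph search: expand the frontier node minimizing evalfn."""
        root = Node(state=problem.initial_state, h=problem.h(problem.initial_state))
        tie = count()                     # tie-breaker so heapq never compares Nodes
        frontier = [(evalfn(root), next(tie), root)]
        in_frontier = {root.state: root}  # best-known node per state on the frontier
        explored = set()

        while frontier:
            _, _, node = heapq.heappop(frontier)
            if in_frontier.get(node.state) is not node:
                continue                  # stale entry superseded by a cheaper node
            del in_frontier[node.state]
            if problem.is_goal(node.state):
                return node               # follow .parent pointers to recover the path
            explored.add(node.state)
            for action in problem.actions(node.state):
                s = problem.result(node.state, action)
                child = Node(state=s,
                             g=node.g + problem.step_cost(node.state, action, s),
                             h=problem.h(s),
                             parent=node)
                old = in_frontier.get(s)
                if s not in explored and old is None:
                    heapq.heappush(frontier, (evalfn(child), next(tie), child))
                    in_frontier[s] = child
                elif old is not None and evalfn(child) < evalfn(old):
                    # REPLACE: push the cheaper node; the old entry becomes stale
                    heapq.heappush(frontier, (evalfn(child), next(tie), child))
                    in_frontier[s] = child
        return None                       # failure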

Heuristic Search: Intro (4)

- The above is a modified version of the uniform cost algorithm: it accepts an additional argument, the function used to assign a value to a node
- Function apply(function, node) evaluates function for a given node
- PATH-COST has been replaced by a more generic COST attribute
- The attributes that need to be stored on a node depend on the search strategy

Heuristic Search: Greedy Search

The greedy search algorithm selects the node that appears to be closest to the goal:

    f(n) = h(n)

I.e., g(n) = 0: we ignore the cost of getting to a node.

Algorithm GS:

function GS (problem) returns solution, or failure {
    return GIS-TREE (problem, h)    // OR: return GIS-GRAPH (problem, h)
}

Characteristics:
- Attempts to get to the goal as quickly as possible
- May reach dead ends
- May expand nodes unnecessarily
- Not complete for the tree representation (due to reversible actions)
- Complete for the graph representation of a finite state space
- Not optimal

Complexity:
- Storage requirements O(b^d) (where d is the max depth of the structure)
- Time requirements O(b^d)

Note that the quality of the heuristic greatly affects the search.
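
With the generic search sketched above, greedy search reduces to a choice of evaluation function. A minimal sketch, under the same assumed interface:

    def greedy_search(problem):
        # Greedy best-first: rank frontier nodes by h(n) alone (g(n) is ignored).
        return best_first_graph_search(problem, evalfn=lambda n: n.h)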

Heuristic Search: A*

A* combines aspects of uniform cost and greedy searches:
- UCS is optimal and complete, but inefficient; UCS is based on g(n)
- GS is neither optimal nor complete, but efficient; GS is based on h(n)
- For A*, f(n) = h(n) + g(n)

Algorithm A* (for trees):

function ASTAR-TREE (problem) returns solution, or failure {
    return GIS-TREE (problem, h + g)
}

A* is optimal and complete, providing that:

1. For trees, h(n) is admissible
   - Admissible means that h(n) ≤ h*(n)
   - I.e., h(n) never over-estimates the cost from n to the goal
   - Such an algorithm is optimistic
2. For graphs, h(n) is consistent (monotonic)
   - h(n) is consistent if, for every node n and successor node n' generated by action a, the estimated cost of getting from n to the goal is never greater than the actual cost of getting from n to the successor n' plus the estimated cost of getting from n' to the goal
   - I.e., h(n) ≤ c(n, a, n') + h(n') for all n
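
Under the same assumed interface, A* is the generic search with f = g + h. A quick empirical sanity check for consistency can also help: passing on sampled edges does not prove consistency, but it catches violations. Both sketches are illustrative, not from the original notes.

    def astar_search(problem):
        # A*: rank frontier nodes by f(n) = g(n) + h(n).
        return best_first_graph_search(problem, evalfn=lambda n: n.g + n.h)

    def check_consistency(problem, sample_states):
        """Report sampled edges violating h(n) <= c(n, a, n') + h(n')."""
        violations = []
        for s in sample_states:
            for a in problem.actions(s):
                s2 = problem.result(s, a)
                if problem.h(s) > problem.step_cost(s, a, s2) + problem.h(s2) + 1e-9:
                    violations.append((s, a, s2))
        return violations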

Heuristic Search: A* (2)

This is reflected in the following diagram (not reproduced here), and is referred to as the triangle inequality.

Relevance: the estimated cost to the goal cannot decrease by more than c_ij when we move from n_i to n_j.

Consistency is a stronger requirement than admissibility. Consider an example (figure not reproduced) with h(n_i) = 4, c(n_i, a, n_j) = 1, and h(n_j) = 2. This represents a nonmonotonic function:

- Since h(n_i) underestimates the cost of getting from n_i to g, that cost must be at least 4
- Since c(n_i, a, n_j) = 1, the cost from n_j to g must be at least 3
- h(n_j) = 2 is unrealistic (i.e., as you get closer to the goal, the overall cost estimate should be increasing, not decreasing)

Heuristic Search: A* Graphs

As with blind search, using graphs instead of trees results in extra effort. Handling already-visited nodes is more complex in A*:

- When we encounter a node whose state appears on the frontier or on the closed list, we may have found an alternate path to that state
- The new path may be cheaper than the original
- If the new path is not cheaper, just ignore the newly found path
- If it is cheaper, we must adjust the f value of the node, and adjust the path pointers between it and its parent
- If the node is on the closed list, we will also have to cascade the updated path cost through its descendants

Heuristic Search: A* Graphs (2)

Algorithm:

function ASTAR-GRAPH (problem) returns solution, or failure {
    node <- new(node)
    node.state <- problem.initial-state
    node.cost <- apply(h + g, node)
    frontier <- new(priorityqueue)
    INSERT(node, frontier)
    explored-nodes <- new(set)
    loop {
        if (EMPTY?(frontier)) return failure
        node <- POP(frontier)
        if (problem.goal-test(node.state)) return SOLUTION(node)
        INSERT(node, explored-nodes)
        for (each action in problem.actions(node.state)) {
            child <- CHILD-NODE(problem, node, action)
            //** If state has not been generated yet, add it to frontier **
            if ((!STATE-FOUND(child.STATE, explored-nodes)) AND
                (!STATE-FOUND(child.STATE, frontier))) {
                frontier <- INSERT(child, frontier)
            }
            //** Replace existing node in frontier with new, cheaper node **
            else if (STATE-FOUND(child.STATE, frontier)) {
                existing <- RETRIEVE-NODE(child.STATE, frontier)
                if (child.cost < existing.cost)
                    frontier <- REPLACE(existing, child, frontier)
            }
            //** Must propagate cheaper cost to descendants **
            else if (STATE-FOUND(child.STATE, explored-nodes)) {
                existing <- RETRIEVE-NODE(child.STATE, explored-nodes)
                if (child.cost < existing.cost) {
                    explored-nodes <- REPLACE(existing, child, explored-nodes)
                    UPDATE-PATH(child)
                }
            }
        }
    }
}
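
Cascading cost updates through descendants (UPDATE-PATH) is fiddly to implement. A common practical alternative, sketched below under the same assumptions as the earlier Python sketch (the helper name and the explored_best map are hypothetical), is to "reopen" an explored state: drop it from the closed list and push the cheaper node back onto the frontier, letting the search re-expand it with fresh costs.

    import heapq

    def reopen_if_cheaper(child, explored_best, frontier, tie, evalfn):
        """If `child` reaches an explored state more cheaply, reopen that state.

        explored_best maps state -> best g-cost seen at expansion time; this
        helper is meant to slot into best_first_graph_search above.
        """
        old_g = explored_best.get(child.state)
        if old_g is not None and child.g < old_g:
            del explored_best[child.state]  # take the state off the closed list
            heapq.heappush(frontier, (evalfn(child), next(tie), child))
            # The state will be re-expanded; its descendants then get fresh,
            # cheaper g-values, so no explicit cascade (UPDATE-PATH) is needed.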

Heuristic Search: A* Graphs (3)

Note that this algorithm is essentially the same as GIS-GRAPH, except for two changes:

1. The last if-else:

       else if (child.state in explored) UPDATE-PATH(child)

   - If child's state appears in explored, this means that it has been expanded, and we may have generated any number of descendants of this state
   - If we have found a cheaper path to child, then the costs associated with those descendants will need to be adjusted, since the paths to the descendants go through child
   - Function UPDATE-PATH will identify those states reached via child
   - These nodes may be in frontier or in explored-nodes
   - Their g values will be updated (to a lesser value), which will decrease their f values

2. explored-nodes
   - The text's formulation maintains states - not nodes - on the explored list
   - Since A* graph search requires us to be able to adjust costs and traverse paths to explored nodes, we must retain nodes - not just state information - on the explored list
   - The name explored-nodes emphasizes this change from the graph search algorithms presented earlier

Heuristic Search: A* Optimality

This examines the optimality of A* graph search.

1. If h(n) is consistent, then the values of f(n) are nondecreasing along any path:
   - Given g(n') = g(n) + c(n, a, n')
   - Then f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') ≥ g(n) + h(n) = f(n)

2. At every step prior to termination of A*, there is always a node n' on frontier with the following properties:
   - n' is on an optimal path to a goal
   - A* has found an optimal path to n'
   - f(n') ≤ f*(n_0)

3. Proof (by induction):
   (a) Base case: At the start, only n_0 is on frontier
       - n_0 is on an optimal path to the goal
       - f(n') = f(n_0) = h(n_0) ≤ f*(n_0)
   (b) Inductive assumption: Assume m ≥ 0 nodes have been expanded and the above holds at each step
   (c) Proof: Consider the expansion of the (m+1)st node from frontier; call this node n, and let n' be the frontier node on an optimal path. Then either:
       i. n' is not selected as the (m+1)st node
          - Regardless, it still has the properties noted above

Heuristic Search: A* Optimality (2)

       ii. n' is selected
          - In this case, let n_p be one of n''s successors on an optimal path
          - This node is on frontier
          - The path to n_p must be optimal; otherwise, there would be a better path to the goal
          - n_p becomes the new n' for the next iteration of the algorithm
   (d) Proof that f(n') ≤ f*(n_0):
       Let n' be on an optimal path and assume A* has found the optimal path from n_0 to n'. Then:

           f(n') = g(n') + h(n')                                               (1)
                 ≤ g*(n') + h*(n')  (since g(n') = g*(n') and h(n') ≤ h*(n'))  (2)
                 = f*(n')           (since f*(n') = g*(n') + h*(n'))           (3)
                 = f*(n_0)          (since f*(n') = f*(n_0) when n' lies on an optimal path)  (4)

Heuristic Search: A* Optimality (3)

4. Given the conditions specified for A* and h, and provided there is a path with finite cost from n_0 to a goal, A* is guaranteed to terminate with a minimal-cost path to a goal.

5. Proof (by contradiction) that A* terminates if there is an accessible goal:
   - Assume A* does not terminate
   - Since every step cost is at least ε > 0, a point will be reached where f(n) > f*(n_0) for every node n on frontier
   - This contradicts the previous lemma (some frontier node n' on an optimal path has f(n') ≤ f*(n_0))

6. Proof (by contradiction) that A* terminates with an optimal solution. A* terminates either:
   (a) When frontier is empty
       - This contradicts the assumption that there is an accessible goal
   (b) Or when a goal node is identified
       - Suppose there is an optimal goal g_1 where f*(g_1) = f*(n_0), and A* finds a non-optimal goal g_2 with f*(g_2) > f*(n_0), where g_1 ≠ g_2
       - When A* terminates at g_2, f(g_2) ≥ f*(g_2) > f*(n_0)
       - But prior to the selection of g_2, there was a node n' on frontier on an optimal path with f(n') ≤ f*(n_0), by the previous lemma, so A* would have selected n' before g_2
       - This contradicts the assumptions

Because f(n) is nondecreasing, contours can be drawn in the state space: all nodes within a given contour f_i have an f value less than those in contour f_j, where f_i < f_j for i < j.

Heuristic Search: A* Optimality (4)

- As the search progresses, the contours narrow and stretch toward the goal along the cheapest path
- The more accurate h(n), the more focused the contours become
- Note that UCS generates circular contours centered on the initial state
- A* is optimally efficient for any given h function: no other algorithm is guaranteed to expand fewer nodes than A*

Heuristic Search: A* Complexity

The number of states within the goal contour is exponential with respect to the solution length. For problems with constant step costs, the analysis is based on:

1. Absolute error: Δ = h* − h
2. Relative error: ε = (h* − h)/h*

Complexity analysis depends on the characteristics of the state space. When there is a single goal and actions are reversible:

- Time complexity is O(b^Δ) = O(b^(εd)), where d is the solution depth
- For heuristics of practical use, the absolute error is at least proportional to h*, so ε is constant or growing, and time complexity is exponential in d
- O(b^(εd)) = O((b^ε)^d), which means the effective branching factor is b^ε

If there are many goal states (especially near-optimal goal states), the search may veer from the optimal path. This results in additional cost proportional to the number of goal states within a factor ε of the optimal cost.

With a graph, there can be exponentially many states with f(n) < C* (the optimal solution cost). The search usually runs out of space before time becomes an issue.

Heuristic Search: Variations of A*

The main issue with A* is memory usage. As noted above, the algorithm usually uses up available memory before time becomes an issue. The following algorithms attempt to limit memory usage while retaining the properties of A*.

Heuristic Search: Variations of A* - Iterative Deepening A* Search (IDA*)

This is the A* version of the depth-first iterative deepening algorithm. On each iteration, perform a depth-first search; instead of using a depth limit, use a bound on f.

Algorithm IDA*:

function IDASTAR (problem) returns solution, or failure {
    root <- new(node)
    root.state <- problem.initial-state
    root.fcost <- problem.get-initial-state-h()
    f-limit <- root.fcost
    while (TRUE) {
        [result, f-limit] <- DFS-CONTOUR(root, problem, f-limit)
        if (result != NULL) return result
        else if (f-limit == infinity) return failure
    }
}

function DFS-CONTOUR (node, problem, f-limit) returns [solution, f-limit] {
    nextf <- infinity
    if (node.fcost > f-limit) return [NULL, node.fcost]
    if (problem.goal-test(node.state)) return [node, f-limit]
    for (each action in problem.actions(node.state)) {
        child <- CHILD-NODE(problem, node, action)
        [solution, newf] <- DFS-CONTOUR(child, problem, f-limit)
        if (solution != NULL) return [solution, f-limit]
        nextf <- min(nextf, newf)
    }
    return [NULL, nextf]
}
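
A runnable Python counterpart of the pseudocode above, as a minimal sketch assuming the same hypothetical Problem interface as before (initial_state, is_goal, actions, result, step_cost, h):

    import math

    def ida_star(problem):
        """Iterative deepening A*: repeated depth-first probes bounded by f = g + h."""
        start = problem.initial_state
        bound = problem.h(start)
        path = [start]                      # states on the current DFS path

        def contour(state, g, bound):
            # Returns (solution path or None, next f-bound to try).
            f = g + problem.h(state)
            if f > bound:
                return None, f
            if problem.is_goal(state):
                return list(path), bound
            next_f = math.inf
            for action in problem.actions(state):
                s2 = problem.result(state, action)
                if s2 in path:              # avoid trivial cycles on the current path
                    continue
                path.append(s2)
                solution, t = contour(s2, g + problem.step_cost(state, action, s2), bound)
                path.pop()
                if solution is not None:
                    return solution, bound
                next_f = min(next_f, t)
            return None, next_f

        while True:
            solution, t = contour(start, 0.0, bound)
            if solution is not None:
                return solution
            if t == math.inf:
                return None                 # failure: no reachable goal
            bound = t                       # raise the f-limit to the next contour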

Heuristic Search: Variations of A* - Iterative Deepening A* Search (IDA*) (2)

- Expands nodes along contour lines of equal f
- IDA* is complete and optimal

Complexity:

1. Space
   - Let δ = smallest operator cost
   - Let f* = optimal solution cost
   - Worst case requires b·f*/δ nodes
2. Time
   - Dependent on the range of values that f can assume
   - Best case: the fewer the values, the fewer the contours, hence fewer iterations
     - Thus IDA* performs comparably to A*
     - It also has less overhead, as it does not require a priority queue
   - Worst case: when there are many unique values of h(n) (the absolute worst is every h value being unique)
     - Requires many iterations: 1 + 2 + ... + n = O(n^2)
   - To reduce time complexity, increase the f limit by ε on every iteration
     - The number of iterations is then proportional to 1/ε
     - While this reduces search cost, it may produce a non-optimal solution
     - Such a solution is worse than optimal by at most ε
     - Such algorithms are called ε-admissible

Heuristic Search: Variations of A* - Recursive Best-First Search (RBFS)

This uses more memory than IDA* but generates fewer nodes. It uses the same approach as A*, with the following differences:

- Assigns a backed-up value to each node
  - Let n be a node with successors m_i
  - Backed-up value of n: b(n) = min_i(b(m_i))
  - The backed-up value of a leaf node (one on the frontier) is f(n)
  - The backed-up value of a node represents the descendant with the lowest f value in the tree rooted at that node
- Only the most promising path to the goal is maintained at any time

Algorithm (description):

- When a node n is expanded, f is computed for all its successors
- If one of these successors (m) has f less than the b value of every other node on the frontier:
  - Back up the values of all ancestors of m based on this value
  - Continue from m
- Otherwise:
  - Let n' be the node on the frontier with b(n') < f(m_i) for all successors m_i
  - Back up the ancestors of n based on the lowest successor f
  - Find the common ancestor of n' and n; call this node k
  - Let k_n be the child of k that is the root of the subtree containing n
  - Delete everything below k_n (k_n will retain the backed-up value of this subtree)
  - n' will be the next node to expand

Heuristic Search: Variations of A* - Recursive Best-First Search (RBFS) (2)

Algorithm:

function RECURSIVE-BFS (problem) returns solution, or failure {
    [result, fvalue] <- RBFS(problem, MAKE-NODE(problem.INITIAL-STATE), infinity)
    return result
}

function RBFS (problem, node, f-limit) returns [solution, f-value] {
    if (problem.goal-test(node.state)) return [SOLUTION(node), node.f]
    successors <- new(set)
    for (each action in problem.actions(node.state))
        successors <- INSERT(CHILD-NODE(problem, node, action), successors)
    if (EMPTY?(successors)) return [failure, infinity]
    for (each s in successors)
        s.f <- MAX(s.g + s.h, node.f)    // inherit the parent's backed-up value
    loop {
        best <- FIND-MIN-NODE(successors)                  // node with smallest f value
        if (best.f > f-limit) return [failure, best.f]
        alternative <- FIND-MIN-2-NODE-VALUE(successors)   // second-smallest f value
        [result, best.f] <- RBFS(problem, best, MIN(f-limit, alternative))
        if (result != failure) return [result, best.f]
    }
}
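
A Python sketch of RBFS over the same assumed Problem interface; the backed_up attribute is an illustrative addition to the earlier Node class (a plain dataclass, so extra attributes can be attached):

    import math

    def rbfs(problem):
        """Recursive best-first search: linear-space search with backed-up f values."""
        root = Node(state=problem.initial_state, h=problem.h(problem.initial_state))

        def search(node, f_limit):
            # Returns (solution node or None, backed-up f value of `node`).
            if problem.is_goal(node.state):
                return node, node.f
            successors = []
            for action in problem.actions(node.state):
                s2 = problem.result(node.state, action)
                child = Node(state=s2,
                             g=node.g + problem.step_cost(node.state, action, s2),
                             h=problem.h(s2), parent=node)
                # Inherit the parent's backed-up value so re-expanded subtrees
                # remember what was learned about them before deletion.
                child.backed_up = max(child.f, getattr(node, "backed_up", node.f))
                successors.append(child)
            if not successors:
                return None, math.inf
            while True:
                successors.sort(key=lambda c: c.backed_up)
                best = successors[0]
                if best.backed_up > f_limit:
                    return None, best.backed_up
                alternative = successors[1].backed_up if len(successors) > 1 else math.inf
                result, best.backed_up = search(best, min(f_limit, alternative))
                if result is not None:
                    return result, best.backed_up

        solution, _ = search(root, math.inf)
        return solution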

Heuristic Search: Simplified Memory-Bounded A* Search (SMA*)

IDA* and RBFS have problems due to using too little memory:

- IDA* holds only the current f-cost limit between iterations
- RBFS holds more information, using linear space
- Both have no memory of what went before
  - They may expand the same nodes multiple times
  - They may experience redundant paths and the computational overhead required to deal with them

SMA* uses all the memory available. Characteristics:

- Avoids repeated states within memory constraints
- Complete if memory is sufficient to store the shallowest solution path
- Optimal if memory is sufficient to store the optimal solution path
- Otherwise, finds the best solution possible within the memory constraints
- If the entire search tree fits in memory, it is optimally efficient

Algorithm overview:

- A* is applied to the problem until memory runs out
- To generate a successor when no memory is available, a node must be removed from the queue
  - Removed nodes are called forgotten nodes
  - Remove the node with the highest f-cost
- We want to remember the cost of the best path so far through a forgotten node (in case we need to return to it later)
  - This information is retained in the root of the forgotten subtree
  - These values are called backed-up values

Heuristic Search: Simplified Memory-Bounded A* Search (SMA*) (2)

Algorithm SMA*:

function SMASTAR (problem) returns solution, or failure {
    node <- new(node)
    node.state <- problem.initial-state
    node.fcost <- problem.get-initial-state-h()
    if (problem.goal-test(node.state)) return SOLUTION(node)
    frontier <- new(queue)
    INSERT(node, frontier)
    loop {
        if (EMPTY?(frontier)) return failure
        node <- deepest, least-fcost node in frontier
        if (problem.goal-test(node.state)) return SOLUTION(node)
        child <- CHILD-NODE(problem, node, problem.next-action(node.state))
        if (NOT problem.goal-test(child.state) AND MAX-DEPTH(child))
            child.f <- infinity
        else
            child.f <- max(node.f, child.g + child.h)
        if (no more successors of node)
            update node's FCOST and those of its ancestors to the least-cost path through node
        if (all successors of node are in memory) remove node from frontier
        if (FULL?(memory)) {
            delete the shallowest, highest-fcost node r in frontier
            remove r from its parent's successor list
            frontier <- INSERT(r.PARENT, frontier)    // if necessary, insert r's parent on frontier
        }
        frontier <- INSERT(child, frontier)
    }
}

Heuristic Search: Simplified Memory-Bounded A* Search (SMA*) (3)

Issues:

- If all leaf nodes have the same f value, selection is problematic
- To preclude cycles of selecting and deselecting the same node: always expand the newest best leaf, and always delete the oldest worst leaf

Evaluation:

- SMA* can handle more difficult problems than A*, without the memory overhead
- Better than IDA* on graphs
- For very hard problems, there may be significant regeneration of paths
- This may make a solution intractable where A* with sufficient memory would find one

Heuristic Search: Heuristic Function Evaluation

Effective branching factor:

- Let n be the number of nodes expanded by A* for a given problem
- Let d be the solution depth
- Let B be the effective branching factor, i.e., the average branching factor over the whole problem
- Then

      n = B + B^2 + ... + B^d = (B^d − 1)B / (B − 1)

- B is generally constant over a large range of instances for a given problem, and is generally independent of path length
- The ideal case is B = 1: converge directly on the goal
- We want the heuristic with the smallest effective branching factor
- B can be used to estimate the number of nodes expanded for a given B and depth

Domination:

- h_a dominates h_b if h_a(n) ≥ h_b(n) for all n
- A dominating heuristic will never expand more nodes than the one it dominates
- The larger h(n) (while remaining admissible), the more accurate it is

Computational cost of h:

- We must consider the cost of computing h
- Frequently, the more accurate the heuristic, the more expensive it is to compute
- If the computational cost outweighs the savings in node generation, the heuristic may not be worthwhile
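
Since the equation above has no closed form for B, it is typically solved numerically. An illustrative helper (not from the notes):

    def effective_branching_factor(n, d, tol=1e-6):
        """Numerically solve B + B^2 + ... + B^d = n for B by bisection."""
        def total(b):
            return sum(b ** i for i in range(1, d + 1))
        lo, hi = 1.0 + 1e-9, float(n)    # B lies between 1 and n
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if total(mid) < n:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Example: if A* expands 52 nodes to find a depth-5 solution, B is about 1.92.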

Heuristic Search: Designing Heuristics

There are a number of techniques that can be used to design heuristics:

1. Problem relaxation
   - A relaxed problem is one with fewer constraints
   - For example, consider the eight puzzle:
     (a) A tile can move from A to B if they are adjacent: corresponds to the Manhattan distance
     (b) A tile can move from A to B if B is empty: Gaschnig's heuristic
     (c) A tile can move from A to B: corresponds to the number of tiles out of place
   - The state space graph of a relaxed problem is a supergraph of the original
     - It will have more edges than the original
     - Any optimal solution in the original space will be a solution in the relaxed space
     - Since the relaxed space has additional edges, some of its solutions may be better
   - The cost of the optimal solution to a relaxed problem is an admissible heuristic in the original
     - The derived heuristic must obey the triangle inequality, and so is consistent
   - Good heuristics often represent exact costs of relaxed problems
2. Composite functions
   - If we have several heuristic functions, and none is dominant, use h(n) = max(h_1(n), h_2(n), ..., h_m(n))
   - If each h_i is admissible, so is h(n)
   - h(n) dominates each individual h_i
   - (A sketch of relaxation heuristics and this composite follows below)
3. Statistical information
   - By generating random problem instances, we can gather data about real vs. estimated costs
   - If we find that when h(n) = x, the true cost is y z% of the time, use y in those cases
   - Admissibility is lost
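
An illustrative sketch of the eight-puzzle relaxation heuristics and the composite max, assuming a hypothetical state encoding (a tuple of 9 ints, 0 for the blank):

    # Hypothetical goal configuration for the eight puzzle.
    GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

    def misplaced_tiles(state):
        """Relaxation (c): count of tiles out of place (blank excluded)."""
        return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

    def manhattan(state):
        """Relaxation (a): sum of Manhattan distances of each tile to its goal cell."""
        dist = 0
        for i, t in enumerate(state):
            if t == 0:
                continue
            goal_i = GOAL.index(t)
            dist += abs(i // 3 - goal_i // 3) + abs(i % 3 - goal_i % 3)
        return dist

    def composite(state):
        """Composite heuristic: the max of admissible heuristics is admissible."""
        return max(misplaced_tiles(state), manhattan(state))

Here the Manhattan distance dominates the misplaced-tile count, so the max always equals the Manhattan value; the composite form pays off when no single heuristic dominates the others.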

Heuristic Search: Designing Heuristics (2)

4. Pattern databases
   - Find the cost of solving a subproblem of the original
     - This cost is a lower bound on the cost of the full problem
   - A pattern database stores exact solutions for every subproblem instance of the original
   - When solving the original, look up the solutions of the corresponding matching subproblems in the DB
     - This is an admissible heuristic
     - Take the maximum over all possible matches for a given configuration
5. Features
   - Identify those features that (should) contribute to h
   - Base the heuristic on them
   - The agent learns which features are valuable over time
6. Weighted functions
   - Let f(n) = g(n) + w·h(n), w > 0
   - As w decreases, f approaches optimal-cost search
   - As w increases, f approaches greedy search
   - Experimental evidence suggests varying w inversely with tree depth
   - (A short sketch follows below)
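
A weighted evaluation function drops directly into the generic search sketched earlier; with an admissible h, the returned solution cost is at most w times optimal. An illustrative sketch:

    def weighted_astar(problem, w=1.5):
        # Weighted A*: f(n) = g(n) + w*h(n); w > 1 trades optimality for speed.
        return best_first_graph_search(problem, evalfn=lambda n: n.g + w * n.h)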

Heuristic Search: Learning Heuristics

Can an agent learn a better search strategy? To do so, it needs a meta-level state space:

- This space represents the internal state of the agent program during the search process
- The actual problem state space is the object-level state space
- A meta-level learning algorithm monitors the steps of the search process at the meta-level and compares them with properties at the object-level to identify which steps are not worthwhile