Chapter 4. Informed Search and Exploration


Tree Search (Reviewed, Fig. 3.9)
[Figure 3.9: the general tree-search algorithm.]

Search Strategies
A search strategy is defined by picking the order of node expansion.
- Uninformed search strategies generate new states systematically and test them against the goal.
- Informed (heuristic) search strategies use problem-specific knowledge to find solutions more efficiently.

Best-First Search
An instance of the general tree search: a node is selected for expansion based on an evaluation function f(n), an estimate of desirability, and the most desirable unexpanded node is expanded first.
Can be implemented via a priority queue that maintains the fringe in ascending order of f-values (see the sketch below).
"Best-first" is a venerable but inaccurate name: we can only expand the node that appears best according to f; if we could truly expand the best node first, no search would be needed at all.

Best-First Search (cont.-1)
Heuristic search uses problem-specific knowledge in the form of an evaluation function: choose the seemingly best node based on some estimate of the cost of the corresponding solution.
This needs an estimate of the cost to a goal, e.g.
- the depth of the current node
- the sum of the distances so far
- the Euclidean distance to the goal, etc.
Heuristics: rules of thumb (概測法).
Goal: to find solutions more efficiently.

Best-First Search (cont.-2)
Heuristic function: h(n) = estimated cost of the cheapest path from node n to a goal node, with h(n) = 0 for a goal node.
Special cases:
- Greedy best-first search (or greedy search): minimizes the estimated cost from the node to a goal, i.e. expands the node that appears to be closest to a goal.
- A* search: minimizes the total estimated solution cost, avoiding the expansion of paths that are already expensive.
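The slides describe the priority-queue implementation abstractly; here is a minimal Python sketch of it. The problem interface (initial_state, is_goal(state), successors(state) yielding (successor, step_cost) pairs) is an assumption for illustration, not part of the slides.

import heapq
import itertools

def best_first_search(problem, f):
    """Generic best-first tree search: always expand the unexpanded node
    with the lowest f-value, using a priority queue as the fringe."""
    tie = itertools.count()            # tie-breaker; avoids comparing states
    root = problem.initial_state
    frontier = [(f(root, 0), next(tie), root, 0)]
    while frontier:
        _, _, state, g = heapq.heappop(frontier)
        if problem.is_goal(state):
            return state, g
        for succ, step_cost in problem.successors(state):
            g2 = g + step_cost
            heapq.heappush(frontier, (f(succ, g2), next(tie), succ, g2))
    return None, float('inf')

The two special cases named above are this same loop with f(n) = h(n) (greedy best-first search) and f(n) = g(n) + h(n) (A*).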

Heuristics
"Heuristic" is derived from the Greek heuriskein, meaning "to find" or "to discover".
The term heuristics is often used to describe rules of thumb (經驗法則) or advice that are generally effective but not guaranteed to work in every case.
In the context of search, a heuristic is a function that takes a state as an argument and returns a number that is an estimate of the merit of the state with respect to the goal.

Heuristics (cont.)
A heuristic algorithm improves the average-case performance; it does not necessarily improve the worst-case performance.
Not all heuristic functions are beneficial:
- The time spent evaluating the heuristic function in order to select a node for expansion must be recovered by a corresponding reduction in the size of the search space explored.
- Useful heuristics should be computationally inexpensive!

Romania with step costs in km
[Figure: map of Romania with step costs in km, and the table of straight-line distances to Bucharest.]

Greedy Best-First Search
Heuristic function: h(n) = estimated cheapest cost to a goal from n, with h(n) = 0 if n is a goal.
e.g. h_SLD(n) = straight-line distance to Bucharest, for route finding.

function GREEDY-SEARCH(problem) returns a solution or failure
  return BEST-FIRST-SEARCH(problem, h)

(A runnable version follows the worked example below.)

Greedy Best-First Search Example
Assume that we want to use greedy search to solve the problem of travelling from Arad to Bucharest.
The initial state = Arad.

Greedy Best-First Search Example (cont.-1)
The first expansion step produces: Sibiu, Timisoara and Zerind.
Greedy best-first search will select Sibiu.

Greedy Best-First Search Example (cont.-2)
If Sibiu is expanded we get: Arad, Fagaras, Oradea and Rimnicu Vilcea.
Greedy best-first search will select: Fagaras.

Greedy Best-First Search Example (cont.-3)
If Fagaras is expanded we get: Sibiu and Bucharest.
Goal reached!! Yet the path is not optimal (compare Arad - Sibiu - Rimnicu Vilcea - Pitesti - Bucharest).

Analysis of Greedy Search
Complete?? No: it can start down an infinite path and never return to try other possibilities (e.g. travelling from Iasi to Fagaras):
- susceptible to false starts: Iasi to Neamt (a dead end)
- without repeated-state checking: Iasi, Neamt, Iasi, Neamt, ... (oscillation)

Analysis of Greedy Search (cont.)
Optimal?? No (same as depth-first search), e.g. from Arad to Bucharest:
Arad - Sibiu - Fagaras - Bucharest (450 = 140 + 99 + 211) is not the shortest path;
Arad - Sibiu - Rimnicu Vilcea - Pitesti - Bucharest (418 = 140 + 80 + 97 + 101) is shorter.
Time?? Worst case O(b^m), like depth-first search, but a good heuristic can give dramatic improvement. (m: the maximum depth of the search space)
Space?? O(b^m): keeps all nodes in memory.

A* Search

function A*-SEARCH(problem) returns a solution or failure
  return BEST-FIRST-SEARCH(problem, g + h)

Minimizes the total estimated solution cost.
Evaluation function: f(n) = g(n) + h(n) = estimated cost of the cheapest solution through n to a goal, where
- g(n) = cost so far to reach n
- h(n) = estimated cost from n to the goal

Admissible Heuristic
A* search uses an admissible heuristic, i.e. h(n) ≤ h*(n), where h*(n) is the true cost from n.
(Also h(n) ≥ 0, so h(G) = 0 for any goal G.)
Theorem: if h(n) is admissible, A* using tree search is optimal.
A* is both complete and optimal if h never overestimates the cost to the goal.
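To make both strategies concrete, here is a minimal sketch on the Romania example: the same priority-queue search run with f = h (greedy) and f = g + h (A*). The graph fragment and straight-line distances are transcribed from the figures the slides reference; the in-path loop check is an added assumption so the tree search does not re-walk its own path.

import heapq

# Fragment of the Romania road map (step costs in km) and the straight-line
# distances to Bucharest, from the figures referenced in the slides.
GRAPH = {
    'Arad':           {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu':          {'Arad': 140, 'Fagaras': 99, 'Oradea': 151, 'Rimnicu Vilcea': 80},
    'Fagaras':        {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97, 'Craiova': 146},
    'Pitesti':        {'Rimnicu Vilcea': 97, 'Craiova': 138, 'Bucharest': 101},
    'Craiova':        {'Rimnicu Vilcea': 146, 'Pitesti': 138},
    'Timisoara':      {'Arad': 118},
    'Zerind':         {'Arad': 75},
    'Oradea':         {'Sibiu': 151},
    'Bucharest':      {},
}
H_SLD = {'Arad': 366, 'Sibiu': 253, 'Timisoara': 329, 'Zerind': 374,
         'Fagaras': 176, 'Oradea': 380, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Craiova': 160, 'Bucharest': 0}

def best_first(start, goal, f):
    """Tree search; f(state, g) orders the fringe, lowest value first."""
    frontier = [(f(start, 0), 0, start, [start])]
    while frontier:
        _, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for succ, cost in GRAPH[state].items():
            if succ in path:        # avoid trivially looping back along this path
                continue
            g2 = g + cost
            heapq.heappush(frontier, (f(succ, g2), g2, succ, path + [succ]))
    return None, float('inf')

greedy = lambda n, g: H_SLD[n]        # greedy best-first: f = h
a_star = lambda n, g: g + H_SLD[n]    # A*: f = g + h

print(best_first('Arad', 'Bucharest', greedy))  # via Fagaras, cost 450
print(best_first('Arad', 'Bucharest', a_star))  # via Rimnicu Vilcea and Pitesti, cost 418

Run on this fragment, the greedy version reproduces the suboptimal 450 km route from the example above, and the A* version reproduces the optimal 418 km route traced on the following slides.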

A* Search Example
Find Bucharest starting at Arad:
f(Arad) = g(Arad) + h(Arad) = 0 + 366 = 366
(In the following, c(m, n) denotes the accumulated cost of the path through m to n, i.e. g(n).)

A* Search Example (cont.-1)
Expand Arad and determine f(n) for each successor:
f(Sibiu) = c(Arad, Sibiu) + h(Sibiu) = 140 + 253 = 393
f(Timisoara) = c(Arad, Timisoara) + h(Timisoara) = 118 + 329 = 447
f(Zerind) = c(Arad, Zerind) + h(Zerind) = 75 + 374 = 449
Best choice is Sibiu.

A* Search Example (cont.-2)
Expand Sibiu and determine f(n) for each successor:
f(Arad) = c(Sibiu, Arad) + h(Arad) = 280 + 366 = 646
f(Fagaras) = c(Sibiu, Fagaras) + h(Fagaras) = 239 + 176 = 415
f(Oradea) = c(Sibiu, Oradea) + h(Oradea) = 291 + 380 = 671
f(Rimnicu Vilcea) = c(Sibiu, Rimnicu Vilcea) + h(Rimnicu Vilcea) = 220 + 193 = 413
Best choice is Rimnicu Vilcea.

A* Search Example (cont.-3)
Expand Rimnicu Vilcea and determine f(n) for each successor:
f(Craiova) = c(Rimnicu Vilcea, Craiova) + h(Craiova) = 366 + 160 = 526
f(Pitesti) = c(Rimnicu Vilcea, Pitesti) + h(Pitesti) = 317 + 100 = 417
f(Sibiu) = c(Rimnicu Vilcea, Sibiu) + h(Sibiu) = 300 + 253 = 553
Best choice is Fagaras (f = 415, now the lowest f-value on the fringe).

A* Search Example (cont.-4)
Expand Fagaras and determine f(n) for each successor:
f(Sibiu) = c(Fagaras, Sibiu) + h(Sibiu) = 338 + 253 = 591
f(Bucharest) = c(Fagaras, Bucharest) + h(Bucharest) = 450 + 0 = 450
Best choice is Pitesti (f = 417 < 450).

A* Search Example (cont.-5)
Expand Pitesti and determine f(n) for each successor:
f(Bucharest) = c(Pitesti, Bucharest) + h(Bucharest) = 418 + 0 = 418
Best choice is Bucharest: the optimal solution (provided h(n) is admissible).
Note the f-values along the optimal path.
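One detail the trace shows but does not state: A* applies the goal test when a node is selected for expansion, not when it is generated. A quick arithmetic check of the f-values above, in plain Python, just to make the comparison explicit:

# Bucharest (via Fagaras) enters the fringe at step cont.-4, but cheaper
# candidates remain, so A* keeps searching and finds the better route.
f_bucharest_via_fagaras = 450 + 0    # g + h
f_pitesti = 317 + 100                # 417 < 450, so Pitesti is expanded first
f_bucharest_via_pitesti = 418 + 0    # 418 < 450, so the cheaper goal wins
assert f_pitesti < f_bucharest_via_fagaras
assert f_bucharest_via_pitesti < f_bucharest_via_fagaras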

Optimality of A*
Suppose some suboptimal goal G2 has been generated and is in the queue.
Let n be an unexpanded node on a shortest path to an optimal goal G.

Optimality of A* (cont.-1)
Let C* be the cost of the optimal solution path. A* may expand some nodes before selecting a goal node.
Assume G is an optimal goal and G2 is a suboptimal goal. Then
  f(G2) = g(G2) + h(G2) = g(G2) > C*        -------- (1)
For any n on an optimal path to G, if h is admissible, then
  f(n) = g(n) + h(n) ≤ C*                   -------- (2)
From (1) and (2) we have f(n) ≤ C* < f(G2).
So A* will never select G2 for expansion.

Optimality of A* (cont.-2)
BUT graph search (instead of tree search) discards new paths to a repeated state, so the previous proof breaks down.
Solutions:
- Add extra bookkeeping, i.e. keep only the cheaper of two paths to the same state.
- Ensure that the optimal path to any repeated state is always the first one followed.
The latter places an extra requirement on h(n): consistency (monotonicity).

Monotonicity (Consistency) of a Heuristic
A heuristic is consistent if h(n) ≤ c(n, a, n') + h(n'):
the estimated cost of reaching the goal from n is no greater than the step cost of getting to a successor n' plus the estimated cost of reaching the goal from n'.
If h is consistent, we have
  f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') ≥ g(n) + h(n) = f(n)
i.e. f(n) is non-decreasing along any path.
Theorem: if h(n) is consistent, A* using graph search is optimal.

Optimality of A* (cont.-3)
A* expands nodes in order of increasing f-value, gradually adding "f-contours" of nodes: contour i contains all nodes with f = f_i, where f_i < f_{i+1}.
All nodes with f(n) > C* (the optimal solution cost) are pruned.

Analysis of A* Search
Complete?? Yes, unless there are infinitely many nodes with f ≤ f(G).
Time?? Exponential in the worst case.
Space?? O(b^d): keeps all nodes in memory.
Optimal?? Yes, if the heuristic is admissible.
Optimally efficient?? Yes: no other optimal algorithm is guaranteed to expand fewer nodes than A*.
A* is still not practical for many large-scale problems, since it usually runs out of space long before it runs out of time.
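For an explicit graph, the consistency condition is easy to check mechanically. A minimal sketch, reusing the GRAPH and H_SLD dictionaries from the Romania example above; the helper name is hypothetical, not from the slides.

def is_consistent(graph, h):
    """Check h(n) <= c(n, a, n') + h(n') for every edge (n, n').
    Together with h = 0 at goals, consistency implies admissibility."""
    return all(h[n] <= cost + h[succ]
               for n, succs in graph.items()
               for succ, cost in succs.items())

# With GRAPH and H_SLD from the sketch above:
print(is_consistent(GRAPH, H_SLD))   # True: straight-line distance obeys the
                                     # triangle inequality, hence is consistent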

Memory-Bounded Heuristic Search
Goal: overcome the space problem of A* without sacrificing optimality or completeness.
- IDA* (Iterative-Deepening A*): a logical extension of iterative deepening search to use heuristics; the cutoff used is the f-cost (g + h) rather than the depth.
- RBFS (Recursive Best-First Search)
- MA* (Memory-bounded A*) and SMA* (Simplified MA*): similar to A*, but restrict the queue size to fit into the available memory, dropping the worst leaf node when memory is full.

RBFS (Recursive Best-First Search)
[Figure: the RBFS algorithm. A code sketch follows at the end of this section.]

RBFS Example
[Figure: RBFS on the Romania map. Above each node, the f-limit for that recursive call; below each node, its f-value, e.g. f(Sibiu) = 140 + 253 = 393, f(Fagaras) = (140 + 99) + 176 = 415, f(Rimnicu Vilcea) = (140 + 80) + 193 = 413.]
The path down to Rimnicu Vilcea has already been expanded.
The path is followed until Pitesti, whose f-value (417) is worse than the f-limit (415, the best alternative through Fagaras).

RBFS Example (cont.-1)
Unwind the recursion and store the backed-up f-value (417, from the best leaf Pitesti) at Rimnicu Vilcea:
  result, f[best] <- RBFS(problem, best, min(f_limit, alternative))
best is now Fagaras; call RBFS for the new best. Its backed-up value becomes 450.

RBFS Example (cont.-2)
Unwind the recursion and store the backed-up f-value (450) for the current best leaf Fagaras:
  result, f[best] <- RBFS(problem, best, min(f_limit, alternative))
best is now Rimnicu Vilcea again; call RBFS for the new best, and its subtree is expanded again.
The best alternative subtree is now the one through Timisoara (f = 447).
The solution is found, because the f-limit is now 447 > 417, so the expansion can continue through Pitesti to Bucharest (f = 418).

Analysis of RBFS
Complete?? Yes.
Time?? Exponential.
Space?? O(bd).
Optimal?? Yes, if the heuristic is admissible.
IDA* and RBFS suffer from using too little memory:
- IDA* retains only one single number between iterations (the current f-cost limit).
- RBFS is a bit more efficient than IDA*, but still suffers from excessive node generation.
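The slides give RBFS only as a figure; here is a minimal Python sketch following the standard AIMA outline. The problem interface (initial_state, is_goal, successors yielding (state, step_cost) pairs), the h function argument, and the path-based cycle check are assumptions for illustration.

import math

def rbfs(problem, h):
    """Recursive best-first search: best-first behavior in linear space by
    remembering, for each abandoned subtree, the best f-value seen in it."""
    def recurse(state, path, g, f_state, f_limit):
        if problem.is_goal(state):
            return path, f_state
        # Successors as mutable [f, state, g] records; the parent's f is a
        # lower bound on each child's f, so backed-up values stay monotone.
        succs = [[max(g + c + h(s), f_state), s, g + c]
                 for s, c in problem.successors(state) if s not in path]
        if not succs:
            return None, math.inf
        while True:
            succs.sort(key=lambda rec: rec[0])
            best = succs[0]
            if best[0] > f_limit:
                return None, best[0]   # fail, reporting the backed-up f-value
            alternative = succs[1][0] if len(succs) > 1 else math.inf
            result, best[0] = recurse(best[1], path + [best[1]], best[2],
                                      best[0], min(f_limit, alternative))
            if result is not None:
                return result, best[0]

    start = problem.initial_state
    return recurse(start, [start], 0, h(start), math.inf)[0]

On the Romania example this reproduces the back-and-forth of the trace above: the Rimnicu Vilcea subtree is abandoned with backed-up value 417, Fagaras with 450, and the search then re-expands Rimnicu Vilcea and reaches Bucharest.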

Heuristic Functions
e.g. for the 8-puzzle:
- branching factor about 3, typical solution depth about 22
- an exhaustive search to depth 22 would examine about 3^22 ≈ 3.1 × 10^10 states, yet there are only 9!/2 = 181,440 reachable distinct states
For the 15-puzzle there are about 10^13 reachable distinct states.
How do we find a good heuristic function?

Heuristic Functions (cont.)
Two commonly used candidates:
- h1(n) = number of misplaced tiles (= 8 for the start state pictured)
- h2(n) = total Manhattan distance, i.e. the number of squares each tile is from its desired location (= 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18 for the same state)

Effect of Heuristic Accuracy on Performance
If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and is better for search.
[Table: search costs of iterative deepening vs. A*(h1) vs. A*(h2) on 1200 random problems with solution lengths from 2 to 24.]

Heuristic Quality
The effective branching factor b* is defined by
  N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d
where N is the number of nodes generated and d is the solution depth.
A well-designed heuristic has b* close to 1, allowing fairly large problems to be solved.
A heuristic function h2 is said to be more informed than h1 (or h2 dominates h1) if both are admissible and h2(n) ≥ h1(n) for all n.
A* using h2 will never expand more nodes than A* using h1.

Inventing Admissible Heuristics
Relaxed problems: a problem with fewer restrictions on the actions.
The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
ABSolver is a program that can generate heuristics automatically from problem definitions, using the relaxed-problem method and various other techniques. It generated a new heuristic for the 8-puzzle better than any preexisting heuristic, and found the first useful heuristic for the Rubik's cube puzzle.

Inventing Admissible Heuristics (cont.-1)
e.g. in the 8-puzzle: a tile can move from square A to square B if A is horizontally or vertically adjacent to B and B is blank. (P if R and S)
Three relaxed problems:
(a) A tile can move from square A to square B if A is horizontally or vertically adjacent to B. (P if R) --- yields h2
(b) A tile can move from square A to square B if B is blank. (P if S)
(c) A tile can move from square A to square B. (P) --- yields h1
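The two candidate heuristics are easy to state as code. A minimal sketch; the start board used in the check is an assumption (the standard textbook example, which reproduces the slide's values h1 = 8 and h2 = 18).

def h1(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of every tile from its goal square (3x3 board).
    States are 9-tuples read row by row, with 0 for the blank."""
    pos = {tile: divmod(i, 3) for i, tile in enumerate(state)}
    total = 0
    for i, tile in enumerate(goal):
        if tile == 0:
            continue
        (r, c), (gr, gc) = pos[tile], divmod(i, 3)
        total += abs(r - gr) + abs(c - gc)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)      # assumed start board: 7 2 4 / 5 0 6 / 8 3 1
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(h1(start, goal), h2(start, goal))  # -> 8 18, matching the slide's values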

Inventing Admissible Heuristics (cont.-2)
Composite heuristics: given h1, h2, h3, ..., hm, none of which dominates the others, take
  h(n) = max{h1(n), h2(n), ..., hm(n)}
Subproblems: the cost of the optimal solution of a subproblem is a lower bound on the cost of the complete problem (e.g. getting tiles 1, 2, 3 and 4 into their correct positions, without worrying about what happens to the other tiles).
Pattern databases store the exact solution cost for every possible subproblem instance; the complete heuristic is constructed using the patterns in the database.

Inventing Admissible Heuristics (cont.-3)
Weighted evaluation function: f_w(n) = (1 - w) g(n) + w h(n)
Learning from experience: experience = solving lots of 8-puzzles; an inductive learning algorithm can be used to predict costs for other states that arise during search.
Search cost: good heuristics should also be efficiently computable.

Local Search and Optimization
Previously: systematic exploration of the search space, where the path to the goal is the solution to the problem.
Yet for some problems the path is irrelevant, e.g.
- 8-queens
- What quantities of quarters, dimes, and nickels add up to $17.45 while minimizing the total number of coins?
Different algorithms can be used: local search.

Local Search Algorithms
Local search does not keep track of previous solutions: it uses a single current state (rather than multiple paths) and moves only to neighboring states.
Advantages:
- uses a small amount of memory (usually a constant amount)
- often finds reasonable (note: not necessarily optimal) solutions in infinite search spaces

Optimization Problems
Looking for a global maximum (or minimum): find the best state according to an objective function.
Example, for the coin problem above (see the sketch below):
  f(q, d, n) = 1,000,000 if q*0.25 + d*0.10 + n*0.05 ≠ 17.45
  f(q, d, n) = q + d + n otherwise (to be minimized)
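The coin objective can be written directly. A minimal sketch; the helper name f and the integer-cent comparison are illustrative assumptions (working in cents avoids floating-point equality problems).

def f(q, d, n):
    """Penalty objective: 1,000,000 unless the coins total $17.45, otherwise
    the value to minimize is the total number of coins.
    q = quarters, d = dimes, n = nickels."""
    if 25 * q + 10 * d + 5 * n != 1745:    # compare in cents, not floats
        return 1_000_000
    return q + d + n

print(f(69, 2, 0))   # 71 (69 quarters + 2 dimes; in fact the minimum)
print(f(69, 0, 4))   # 73 (feasible but worse)
print(f(1, 1, 1))    # 1000000 (does not total $17.45)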

Hill-Climbing Search
"Like climbing Everest in thick fog with amnesia."
[Figure: the hill-climbing algorithm and a state-space landscape.]

Hill-Climbing Search (cont.-1)
Only the current state and its evaluation are recorded, instead of maintaining a search tree.
Hill climbing is a loop that continually moves in the direction of increasing value; it terminates when a peak is reached.
Hill climbing does not look ahead beyond the immediate neighbors of the current state, and it chooses randomly among the set of best successors if there is more than one.
Hill climbing is also known as greedy local search. (A minimal sketch follows below.)

Hill-Climbing Example
a) An 8-queens state with h = 17, annotated with the h-value of each possible successor (h = 3 + 4 + 2 + 3 + 2 + 2 + 1 = 17).
b) A local minimum in the 8-queens state space (h = 1).

Hill-Climbing Search (cont.-2)
Problems:
- Local maxima: the search halts prematurely.
- Plateaux: the search conducts a random walk.
- Ridges: the search oscillates with slow progress. A ridge creates a sequence of local maxima that are not directly connected to each other; from each local maximum, all the available actions point downhill.

Hill-Climbing Variations
- Stochastic hill climbing: random selection among the uphill moves; the selection probability can vary with the steepness of the uphill move.
- First-choice hill climbing: like stochastic hill climbing, but generates successors randomly until one better than the current state is found.
- Random-restart hill climbing: tries to avoid getting stuck in local maxima by restarting from random initial states.

Simulated Annealing
[Figure: the simulated-annealing algorithm; the idea is developed on the following slides.]
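Before the simulated-annealing slides, here is a minimal sketch of the hill-climbing loop with a random-restart wrapper. The neighbors/value interface is an assumption, and ties are broken arbitrarily by max rather than randomly, a small simplification of the slide's description.

def hill_climb(initial, neighbors, value):
    """Steepest-ascent hill climbing: move to the best neighbor until no
    neighbor improves on the current state (a peak, possibly only local)."""
    current = initial
    while True:
        candidates = list(neighbors(current))
        if not candidates:
            return current
        best = max(candidates, key=value)
        if value(best) <= value(current):
            return current          # a peak: no uphill move remains
        current = best

def random_restart(random_state, neighbors, value, restarts=25):
    """Random-restart hill climbing: run from several random initial states
    and keep the best peak found."""
    peaks = (hill_climb(random_state(), neighbors, value)
             for _ in range(restarts))
    return max(peaks, key=value)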

Simulated Annealing (cont.-1)
Idea: escape local maxima by allowing some "bad" moves but gradually decreasing their frequency.
The term is borrowed from metalworking:
- We want metal molecules to find a stable location relative to their neighbors.
- Heating causes metal molecules to move around and take on undesirable locations.
- During cooling, molecules reduce their movement and settle into a more stable position.
- Annealing is the process of heating metal and letting it cool slowly, to lock in the stable locations of the molecules.

Simulated Annealing (cont.-2)
Select a random move at each iteration.
Move to the selected node if it is better than the current node.
The probability of moving to a worse node decreases exponentially with the "badness" of the move, i.e. it is e^(ΔE/T), where ΔE < 0 is the change in the objective value.
The temperature T changes according to a schedule; for example, with T = 1 a move that worsens the objective by 2 (ΔE = -2) is accepted with probability e^(-2) ≈ 0.14, while at T = 0.5 that probability drops to e^(-4) ≈ 0.02.

Simulated Annealing (cont.-3)
One can prove: if T decreases slowly enough, then simulated annealing will find a global optimum with probability approaching 1.
Widely used in VLSI layout, airline scheduling, etc. (A minimal code sketch is given later in this section.)

Local Beam Search
Keep track of k states rather than just one:
- Start with k randomly generated states.
- At each iteration, generate all the successors of all k states.
- If any one is a goal state, stop; otherwise select the k best successors from the complete list and repeat.

Genetic Algorithms
A successor state is generated by combining two parent states:
- Start with k randomly generated states (the population).
- A state is represented as a string over a finite alphabet (often a string of 0s and 1s).
- An evaluation function (fitness function) assigns higher values to better states.
- Produce the next generation of states by selection, crossover, and mutation.

Genetic Algorithms (cont.-1)
Fitness function for 8-queens: the number of non-attacking pairs of queens (min = 0, max = 8 × 7 / 2 = 28).
Selection probabilities are proportional to fitness, e.g. 24/(24 + 23 + 20 + 11) = 31%, 23/(24 + 23 + 20 + 11) = 29%, etc.
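Returning to the simulated-annealing slides above, here is the promised minimal sketch. The acceptance rule e^(ΔE/T) is the one from the slides; the neighbor/value interface and the geometric cooling schedule (t0, cooling, t_min) are illustrative assumptions.

import math
import random

def simulated_annealing(initial, neighbor, value, t0=1.0, cooling=0.995,
                        t_min=1e-3):
    """Minimal simulated annealing. `neighbor` returns a random successor;
    `value` is the objective to maximize."""
    current, t = initial, t0
    while t > t_min:
        nxt = neighbor(current)
        delta_e = value(nxt) - value(current)
        # Always accept improvements; accept a worse move with prob e^(dE/T).
        if delta_e > 0 or random.random() < math.exp(delta_e / t):
            current = nxt
        t *= cooling          # cool down according to the schedule
    return current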

Genetic Algorithms (cont.-2)
[Figure: selection, crossover, and mutation illustrated on 8-queens states.]
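To round out the section, a minimal sketch of the full genetic-algorithm loop for 8-queens, with fitness as defined above (non-attacking pairs, maximum 28). The population size, mutation rate, the fitness + 1 selection weights, and one-point crossover are illustrative assumptions.

import random

def fitness(state):
    """Non-attacking pairs of queens; state[i] is the row of the queen in
    column i, so the maximum is 8 * 7 / 2 = 28 for 8 queens."""
    n = len(state)
    attacking = sum(1 for i in range(n) for j in range(i + 1, n)
                    if state[i] == state[j]
                    or abs(state[i] - state[j]) == j - i)
    return n * (n - 1) // 2 - attacking

def genetic_algorithm(pop_size=20, n=8, generations=1000, p_mutate=0.1):
    population = [tuple(random.randrange(n) for _ in range(n))
                  for _ in range(pop_size)]
    best = max(population, key=fitness)
    for _ in range(generations):
        # Selection: probability proportional to fitness (shifted by 1 so
        # the weights are never all zero).
        weights = [fitness(s) + 1 for s in population]
        parents = random.choices(population, weights, k=2 * pop_size)
        next_gen = []
        for x, y in zip(parents[0::2], parents[1::2]):
            cut = random.randrange(1, n)           # one-point crossover
            child = list(x[:cut] + y[cut:])
            if random.random() < p_mutate:         # mutation
                child[random.randrange(n)] = random.randrange(n)
            next_gen.append(tuple(child))
        population = next_gen
        best = max([best] + population, key=fitness)
        if fitness(best) == n * (n - 1) // 2:      # all pairs non-attacking
            break
    return best, fitness(best)

print(genetic_algorithm())   # may print e.g. ((2, 4, 1, 7, 0, 6, 3, 5), 28)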